Senior Associate / Associate, Data Engineer, Group Consumer Banking and Big Data Analytics Technology, Technology & Operations
Business Function Group Technology and Operations (T&O) enables and empowers the bank with an efficient, nimble and resilient infrastructure through a strategic focus on productivity, quality & control, technology, people capability and innovation. In Group T&O, we manage the majority of the Bank's operational processes and aspire to delight our business partners through our multiple banking delivery channels.
As a Data Engineer, you'll be a Data Engineering Specialist and help us discover the information hidden in vast amounts of data. You'll be part of the team that builds a hybrid cloud data platform, one that supports dynamic data replication across different environments and leverages cloud-native technologies.
Responsibilities
- Design and implement key components of a highly scalable, distributed data collection and analysis system built to handle petabytes of data in the cloud and on-premises.
- Work with architects of the analytics system and help in adopting best practices in backend infrastructure and distributed computing.
- Implement core practices of Agile, leveraging cloud-native architecture patterns and using Test Driven Development and continuous integration/continuous delivery (CI/CD).
- Continuously discover, evaluate and implement new cloud technologies to maximize analytical system performance.
Requirements
- 3+ years of experience in one or more areas of big data and/or cloud-native application development
- Development experience in Java and pride in producing clean, maintainable code. Knowledge of reactive programming and related frameworks would be an advantage.
- Experience using AWS/GCP managed services (especially those related to data processing and data movement)
- Hands-on experience with Test Driven Development methodology
- Familiarity with operational technologies such as Kubernetes (k8s), Docker and ZooKeeper
- Experience with Spark and different schema formats (Avro, Parquet, CarbonData)
- Experience using high-throughput, distributed message queueing systems such as Kafka.
- Familiarity with tracing/observability solutions, e.g. OpenTracing, OpenTelemetry, Zipkin
- An ability to periodically deploy systems to on-premises/cloud environments
- Mastery of key development tools such as Git, and familiarity with collaboration tools such as Jira and Confluence
- Experience with distributed databases, such as Cassandra, and the key issues affecting their performance and reliability.
- Experience with SQL engines (e.g. MySQL, PostgreSQL)
- The ability to work with loosely defined requirements, exercising your analytical skills to clarify questions, share your approach and build/test elegant solutions in weekly sprint/release cycles
Apply Now
We offer a competitive salary and benefits package and the professional advantages of a dynamic environment that supports your development and recognises your achievements.