We are one of the largest investment management organizations in the world, with over 1000 people working together to create long-term value.
The Technology Group manages and exploits information technologies to enhance GIC's ability to be the leading global long-term investment firm. It aims to provide users with empowering and transformational capabilities, and to create an inclusive, innovative and integrated work environment.
We are looking for a dynamic, self-motivated and technically competent individual to join us as Associate/AVP, Data Engineer. You'll specialise in the data domain as part of a fast-paced data engineering team that delivers and supports GIC's data needs.

Responsibilities
- Work closely with data analysts and business end-users to implement and support data platforms using best-of-breed technology and methodology.
- Conduct requirement workshop with stakeholders and analyse requirements holistically.
- Design robust and scalable solutions that meet business needs and take operational considerations into account. Demonstrate technical expertise in the assigned area.
- Analyse, tackle and resolve day-to-day operational incidents, and provide advisory support to business users.
- Analyse systems operations data (SLAs, customer satisfaction, delivery quality, team efficiency etc.) to identify actionable trends for continual improvements.
- Play an active role in projects, coordinating between internal resources and third parties/vendors for project execution.
Requirements

- Bachelor's degree in Computer Science, Computer Engineering or equivalent
- At least 2 years of relevant working experience in data modelling and data integration, preferably in an investment or banking environment.
- Familiar with enterprise databases and database technologies (SQL, PL/SQL)
- Good knowledge of the Linux family of operating systems
- Exposure to and knowledge of any of the following technologies is advantageous:
- Data Integration
> Informatica PowerCenter, Big Data Management (BDM), Enterprise Data Catalogue (EDC)
- Big Data
> Hadoop Technologies: HDFS, ZooKeeper, YARN, Spark, Hive, Impala, Sqoop, Solr, ELK, Flume, Kafka
> Hadoop Platforms: Cloudera, Databricks
> NoSQL Databases: Neo4J
> Cloud-based Big Data Services: AWS EMR, Azure HDInsight
> Elasticsearch
- Programming and Scripting Languages
> Shell Script
> RESTful Data API
- Data Visualisation
- Experienced with the Systems Development Life Cycle (SDLC) implementation methodology and/or agile methodologies such as Scrum and Kanban.
- Good team player with strong analytical skills who enjoys solving complex problems with innovative ideas
- Strong communication and interpersonal skills, required to interact with data analysts, business end-users and vendors to design and develop solutions
- Passion for data and technology; detail-oriented and meticulous with operations.