Regional Data and Infrastructure Engineer

CIMB Malaysia
Kuala Lumpur, Malaysia
Permanent, Full time
Competitive
Strategy and Planning
  • Support the regional Decision Management team with its data and infrastructure needs.
  • Design and implement a data warehouse for Decision Management (campaigns and insights) across the region.
  • Responsible for supporting the Big Data platform for regional initiatives.
  • Support the Big Data platform by providing R and Python packages on request, planning future upgrades, and providing a stable, secure platform for data scientists.
  • Create data models based on business requirements.
  • Create processes to support key data initiatives and modelling algorithms for the Group's in-house data scientists.
  • Develop ETL jobs to load data into the Big Data platform with country-specific logical partitioning.
  • Manage user access to the system with different groupings across the region. Initiate all infrastructure upgrades for the data platform, including periodic patches, enhancements and license renewals.
  • Review and assess current infrastructure utilization and requirements in terms of technology and capacity for the region.
  • Liaise with system owners to understand source-system changes and plan downstream impact management.


Business Performance and Management
  • Develop and implement Big Data products/services and marketing communications and/or campaigns, ensuring they reach their targeted customers effectively and that targets and budgets are achieved.
  • Develop networks and ensure responsible communication with stakeholders to enhance investor relations in local and international markets.
  • Act as a role model and drive activities to encourage and build the desired organisational culture, values and reputation in the Group's markets and with all staff, customers and regulatory/official bodies.
  • Act as the point of contact with the Oracle Support vendor for technical issues. Responsible for maintaining the Big Data Lake.


People Management
  • Develop in-house capability for the Data Engineering stream; train and reskill existing resources.
  • Work across the region (Malaysia, Singapore, Indonesia and Thailand) to champion and share Data Management best practices.


Regulatory Compliance
  • Ensure all Group Consumer operations comply with Group, local and regional regulations.
  • Support the Head of Customer Data and Infrastructure with key data governance issues.
  • Define data retention policies and the security matrix for users.


Qualifications
  • Candidates should hold a Bachelor's Degree or Professional Qualification in Finance and Engineering, or equivalent.
  • Candidates should have, at minimum, the following experience:
    • 10 years in relevant banking roles at established banks or financial institutions
    • 5 years in Data Engineering

Expected Competencies
  • Understand and translate business needs into data models supporting long-term solutions.
  • Create logical and physical data models using best practices to ensure high data quality and reduced redundancy.
  • Optimize and update logical and physical data models to support new and existing projects.
  • Define and design the overall EDW (enterprise data warehouse) architecture for regional Decision Management.
  • Experience working with large data sets in enterprise data warehousing environments.
  • Proficiency in programming with Java, Scala, R, Python, etc.
  • Experience with the Cloudera Distribution of Hadoop (CDH) and managing services using Cloudera Manager.
  • Candidates with the following skills and knowledge will have an added advantage:
    • Experience with Ab Initio, Oracle Big Data Appliance and Oracle Data Integrator (ODI).
    • Proficiency with Hadoop v2, MapReduce and HDFS.
  • Good knowledge of Big Data querying tools such as Pig, Hive and Impala.
  • Experience integrating data from multiple data sources.
  • Experience with NoSQL databases such as HBase, Cassandra and MongoDB.
  • Knowledge of various ETL techniques and frameworks, such as Flume.
  • Experience building stream-processing systems using solutions such as Spark Streaming, Storm or Kinesis.