Director - Big Data Infrastructure & Platform Engineer

  • Competitive
  • Singapore
  • Permanent, Full time
  • TD Securities
  • 15 Jan 2018

The Senior Platform Engineer is part of the Infrastructure and Platform Design team within Enterprise Information Management.

Reporting to the Senior Manager, this role is responsible for designing, building and testing an automated and resilient Big Data infrastructure and platform for the Information Excellence program. This position must work proactively and effectively within EIM, ITS and other technology and business partners to provide technical direction, support, expertise and best practices in the systems and infrastructure that encompass the Information Excellence platform.

• Provide technical design leadership and oversight to the EIM Infrastructure & Platform Design team.
• Lead the future direction of platform design and implementation patterns for new and emerging technology projects within the Platform Team.
• Lead POC/design sessions and provide detailed template guidelines for junior engineers to follow.
• Contribute to the development of the Hadoop platform's technical design and capability roadmap, clearly documenting the various interdependencies so that all platform users can design and implement components, processes and capabilities in a seamless and expedient manner.
• Analyze business requirements and recommend optimal solutions within the technology architecture.
• Provide deep subject matter expertise and define future direction in one or more technical areas, or lead strategic development/engineering efforts on new or emerging technology projects to meet business needs.
• Provide guidance to all delivery teams, ensuring all physical designs meet the IPD team's strict fault-tolerance and scalability guidelines.
• Perform regular and frequent infrastructure risk assessments and proactively address risks.
• Be accountable for platform performance and recommend platform performance tuning to meet the various delivery teams' non-functional requirements.
• Post-secondary degree: Computer Science, Engineering or similar degree preferred.
• A minimum of 5 to 8 years of experience in system administration, information management, system automation and testing.
• A minimum of 3 years of Big Data and Hadoop experience preferred, or strong proficiency in Linux shell scripting and system administration.
• Experience with information technology and data and systems management; expert knowledge of Unix/Linux (specifically RHEL), Hadoop administration and utilities, Java, virtual environments, and configuration and deployment automation; knowledge of RESTful API-based web services is preferred but not mandatory.
• Demonstrated history of being self-motivated, energetic and results-driven, and of executing with excellence.
• Effective interpersonal skills, working well within a fast-moving team; able to build and maintain strong relationships with business and technology partners.
• Demonstrated ability to work and deliver on multiple complex projects on time.
• Strong understanding of Hadoop tools and utilities (HDFS, Pig, Hive, MapReduce, Sqoop, Flume, Spark, Kafka) and CDH.
• Strong understanding of Linux/Unix, especially RHEL.
• Familiarity with using orchestration systems, and automation tools such as Puppet, Chef, Ansible or Saltstack.
• Working experience using a scripting language such as Bash, Python or Perl.
• Strong understanding of application architecture and design patterns.
• Strong understanding of TCP/IP, DNS, common networking ports and protocols, traffic flow, system administration and common cybersecurity elements.
• Knowledge of networking, firewalls and load balancing.
• Experience with cloud infrastructure and virtual environments: KVM, Docker or Kubernetes.
• Ability to debug/trace Java or Scala code is an asset.
• Good understanding of and experience with systems automation, scheduling, agile code promotion, system access and proactive system management (DevOps).
• Familiarity with orchestration workflows and high-level configuration management concepts and implementations.
• Capable of synthesizing an architecture from a requirements specification, industry best practices, and vendor whitepapers / artifacts
• Knowledge of source code repository systems and data lineage standards; ability to use revision control systems such as Git.
• Experience using RESTful API-based web services and applications.
• Database experience with MySQL, PostgreSQL, DB2 or Oracle.
• Excellent teamwork and interpersonal skills
• Professional oral and written communication skills