Product Engineering - Hadoop Infrastructure

  • Competitive
  • New York, NY, USA
  • Permanent, Full time
  • Morgan Stanley USA
  • 23 April 2018

Company Profile
Morgan Stanley is a leading global financial services firm providing a wide range of investment banking, securities, investment management and wealth management services. The Firm's employees serve clients worldwide including corporations, governments and individuals from more than 1,200 offices in 43 countries.

As a market leader, the talent and passion of our people is critical to our success. Together, we share a common set of values rooted in integrity, excellence and strong team ethic. Morgan Stanley can provide a superior foundation for building a professional career - a place for people to learn, to achieve and grow. A philosophy that balances personal lifestyles, perspectives and needs is an important part of our culture.

Technology
Technology works as a strategic partner with Morgan Stanley business units and the world's leading technology companies to redefine how we do business in ever more global, complex, and dynamic financial markets. Morgan Stanley's sizeable investment in technology results in quantitative trading systems, cutting-edge modelling and simulation software, comprehensive risk and security systems, and robust client-relationship capabilities, plus the worldwide infrastructure that forms the backbone of these systems and tools. Our insights, our applications, and our infrastructure give a competitive edge to our clients' businesses and to our own.

The Hadoop Infrastructure team is responsible for developing, provisioning, and managing Hadoop Ecosystem Infrastructure for the organization. Because of our diverse use cases, we do not use off-the-shelf management tools. Our custom-developed Hadoop management infrastructure is built to fully automate deployment and operations. We are looking for an experienced infrastructure engineer who enjoys building large-scale data systems and has prior experience working with one or more Hadoop distributions.

Responsibilities:
- Evaluate Hadoop infrastructure requirements and design/deploy solutions (high availability, big data clusters, elastic load tolerance, etc.)
- Develop automation, installation, and monitoring of Hadoop ecosystem components; specifically: Spark, HBase, HDFS, MapReduce, YARN, Oozie, Pig, Hive, Impala, Kafka, Accumulo
- Dig deep into performance, scalability, capacity and reliability problems to resolve issues
- Create productive models for integrating internal application teams and vendors with our infrastructure
- Troubleshoot and debug Hadoop ecosystem run-time issues
- Provide developer and operations documentation
- Run proof-of-concept projects with our customers
- Work closely with users, operations, and other engineering groups such as Unix, web, and security

Qualifications:

Skills:
- Proficiency in two or more of Java, Python, SQL, Scala, Perl, Bash
- Experience building/managing Unix-hosted infrastructure within the enterprise or in the public cloud
- Experience with at least one configuration management and orchestration tool, such as Chef, Puppet, or Ansible
- Experience writing software in a continuous-build and automated-deployment environment
- Good verbal and written communication and presentation skills

Highly Desirable Skills:
- Experience with the Hadoop ecosystem
- Experience working with the Cloudera, MapR, or HDP distribution of Hadoop

Preferred Skills:
- Kerberos experience a plus
- CM_API experience using the Python/Java API