Big Data Engineer

Credit Suisse
in Pune / Bangalore, India
Permanent, Full time
Be the first to apply
Big Data Engineer
We Offer
We are seeking a hardworking, experienced Big Data Engineer to join a growing, high-visibility cross-Bank team that is developing and deploying solutions to some of Credit Suisse's most challenging analytical problems. As a member of this team, you will work with clients and data spanning Credit Suisse's global organization to solve the Bank's most important emerging challenges using new technologies such as:

  • Distributed file systems and storage technologies (HDFS, HBase, Hive, Kudu)
  • Large-scale distributed data analytic platforms and compute environments (Spark, Map/Reduce)
  • Streaming technologies such as Kafka, Flume
  • The team of engineers is multi-skilled and performs architecture, development, DevOps, tools integration, and data analytics pipeline setup and tuning. This is an agile environment using the latest toolsets and practices. Typically, an engineer in the team collaborates with engineers in other locations as well as with groups within CS (e.g. Infrastructure, Trading, Compliance Officers, Data Scientists) to ensure we can extract meaningful value and insights from all of our internal data.
  • High degree of collaboration between IT and Business
  • Engineering environment where we have fun delivering high-quality solutions
  • High visibility with Senior Management and a focus on delivering on time

You Offer

  • Experienced-level knowledge of Cloudera Hadoop components such as HDFS, Sentry, HBase, Impala, Hue, Spark, Hive, Kafka, YARN, ZooKeeper and Postgres
  • Experienced-level proficiency in Python, Ansible, Salt and DevOps technologies
  • Expertise in Linux and Bash scripting, and an understanding of Unix/Linux including system administration
  • Work in teams and collaborate with others to clarify requirements
  • Assist in documenting requirements as well as resolve conflicts or ambiguities
  • Tune Hadoop solutions to improve performance and end-user experience
  • Maintenance, operations and support of the platform across multiple tenants. Integrating with existing Strategic Big Data Platform components (Cloudera HDFS with Spark, Python, Hive, Impala, Kudu, Kafka). Dev, UAT and Production support experience
  • Service Management, i.e. Release Management, Change Management and Incident Management
  • Experience with deployment toolsets like Odyssey, SVN, Git, Tableau, Nolio, Pentaho, Ansible, SaltStack
  • Loading data from different datasets and deciding which file format is most efficient for a task; sourcing large volumes of data from diverse data platforms into the Hadoop platform
  • You have a deep understanding of translating requirements into output transformations
  • Define Hadoop Job Flows. Translate complex functional and technical requirements into detailed design
  • You can build distributed, reliable and scalable data pipelines to ingest and process data in real time. As a Hadoop developer, you will work with impression streams, transaction behaviors, clickstream data and other unstructured data
  • Fine-tune Hadoop applications for high performance and throughput; troubleshoot and debug runtime issues across the Hadoop ecosystem
  • You have strong communication skills and the ability to present deep technical findings to a business audience
  • You have 10+ years' experience in enterprise IT at a global organization, including 2+ recent years in the Big Data ecosystem at a financial firm
  • You are fluent in written and spoken English
  • You have good attention to detail around documentation and are process-oriented
  • You are a dedicated individual with a solution-driven attitude
  • You are a strong team player with good client-facing skills