- Charlotte, NC, USA
- Self Employed, Full time
- Brighthouse Financial, Inc.
- 16 Jan 19
Brighthouse Financial is a new company established by MetLife. We're on a mission to help people achieve financial security. Built on a foundation of industry knowledge and experience, we specialize in offering essential annuity and life insurance products designed to help customers protect what they've earned and make it last. In an industry that often has a reputation for complexity, confusion, and cost, Brighthouse Financial is different. Our approach emphasizes simplicity, transparency, and value so customers can face the future with confidence.
Brighthouse Financial is seeking passionate, high-performing team members to help us carry out our mission and be part of an exciting journey toward improving the financial futures of our millions of customers. Sound like you? Read on.
Role Value Proposition:
The Data & Analytics team's mission is to use big, alternative, and directly sourced data to describe and predict key metrics of companies and economies so that we may deliver innovative data solutions and insights to our internal analyst teams and the firm's clients. We are looking for self-starting and innovative data engineers to join and grow our team. You must have a keen interest in financial markets and experience with modern tools and methods, including processing data at large scale for analytical purposes.
- Build data pipeline frameworks to automate high-volume and real-time data delivery for our Hadoop and research data hub
- Build data APIs and data delivery services that support critical operational and analytical applications for our internal business operations, customers and partners
- Transform complex analytical models into scalable, production-ready solutions
- Continuously integrate and ship code into on-premises and cloud production environments
- Develop applications from the ground up using a modern technology stack such as Scala, Spark, Postgres, AngularJS, and NoSQL
- Develop sustainable, data-driven solutions with current-generation data technologies to meet the needs of our organization and business customers
- Grasp new technologies rapidly as needed to progress varied initiatives
- Break down and resolve data issues
- Build robust systems with an eye on the long-term maintenance and support of the application
- Leverage reusable code modules to solve problems across the team and organization
- Utilize a working knowledge of multiple development languages
Essential Business Experience and Technical Skills:
- Master's degree or equivalent work experience
- Expert level experience in designing, building and managing applications to process large amounts of data in a Hadoop ecosystem
- Extensive experience with performance-tuning applications on Hadoop and configuring Hadoop systems to maximize performance
- Experience building systems to perform real-time data processing using Spark Streaming and Kafka, or similar technologies
- Experience with common SDLC, including SCM, build tools, unit testing, TDD/BDD, continuous delivery and agile practices
- Experience working in large-scale, multi-tenant Hadoop environments: Hadoop, HDFS, Avro, MongoDB, or ZooKeeper
- Strong software development experience in the Scala and Python programming languages; familiarity with other functional languages
- Experience with Unix-based systems, including bash programming
- Experience with other distributed technologies such as Cassandra, Solr/Elasticsearch, Flink, and Flume would also be desirable
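As context for the Spark Streaming and Kafka requirement above, here is a minimal plain-Python sketch of the underlying pattern: micro-batch processing with a sliding-window aggregate, as applied to a stream of records from a source such as Kafka. The class and parameter names are illustrative assumptions for this sketch, not Spark or Kafka APIs.

```python
from collections import Counter, deque

class MicroBatchWindow:
    """Conceptual micro-batch sliding window (illustrative, not a Spark API).

    Incoming records arrive in small batches; the window retains the most
    recent N batches and recomputes an aggregate (event counts by key)
    over the whole window each time a batch arrives.
    """

    def __init__(self, window_batches=3):
        # deque with maxlen drops the oldest batch as the window slides
        self.window = deque(maxlen=window_batches)

    def process_batch(self, records):
        """Ingest one micro-batch of (key, value) records; return windowed counts."""
        self.window.append(Counter(key for key, _ in records))
        totals = Counter()
        for batch_counts in self.window:
            totals += batch_counts
        return dict(totals)

stream = MicroBatchWindow(window_batches=2)
stream.process_batch([("click", 1), ("click", 1), ("view", 1)])
result = stream.process_batch([("view", 1)])
# window now spans the last two batches: {"click": 2, "view": 2}
```

In Spark Structured Streaming the same idea is expressed declaratively (windowed `groupBy` over a Kafka source), with the engine managing batch boundaries and state for you.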