Data Pipeline Engineer sought by a leading New York-based financial services firm to support data-driven business decision making and the migration to and deployment on the AWS Cloud platform. The role will work on data-driven projects that affect all aspects of the firm's operations and business units, including identifying prospective clients, spotting investment trends, improving processes, measuring accounts, and streamlining reporting.
The role is part of a team responsible for data engineering work: building data pipelines, building API endpoints, and deploying to the AWS Cloud.
- Work closely with Data Scientists
- Build data pipelines
- Build API endpoints
- Work on Cloud deployment
- Undergraduate degree in Analytics, Applied Mathematics, Computer Science, Statistics, or a related analytical field of study, or an equivalent combination of training and experience. Graduate degree preferred.
- Minimum of five years of related work experience with a modern data processing stack
- Experience building data pipelines and API endpoints and deploying to the cloud
- Must have advanced experience with Python and with at least one cloud platform (AWS, Google Cloud, or Azure)
- Proven knowledge of machine learning frameworks: PyTorch, Keras, TensorFlow, scikit-learn, Caffe/Caffe2, MXNet
- Proficiency with data pipelines, real-time data processing, data warehousing, and NoSQL
- Strong business stakeholder management skills, with a track record of applying advanced analytic solutions to real-world business problems
- Nice to have: machine learning experience
Keywords: AWS, Kubernetes, Python, data engineer, data pipelines, cloud deployment, advanced analytics, machine learning, artificial intelligence, predictive modeling
Please send resumes to Jim Geiger at firstname.lastname@example.org.