Data Science Engineer
Who we're looking for
A Data Science Engineer to build a data science platform that underpins a variety of statistical and machine learning workflows, together with a set of capabilities for the rigorous and efficient discovery of data- and model-driven insights to support critical investment decisions.
About Schroders
We're a global investment manager. We help institutions, intermediaries and individuals around the world invest money to meet their goals, fulfil their ambitions, and prepare for the future.
We have around 5,000 people on six continents. And we've been around for over 200 years, but keep adapting as society and technology change. What doesn't change is our commitment to helping our clients, and society, prosper.
Technology at Schroders
There's a huge amount of change going on at Schroders. Technology's shaping our business more and more, so there are many opportunities waiting to be grabbed. And because we're a big financial player, we can put hefty backing behind good ideas.
We're a serious business - we have enormous responsibilities to our clients and shareholders. But just because we're suited and booted, that doesn't make us stuffy; our tech teams are friendlier and more informal than you might expect.
The base
We moved into our new HQ in the City of London in 2018. We're close to our clients, in the heart of the UK's financial centre. And we have everything we need to work flexibly.
The team
The Data Insights Unit (DIU)'s mission is to bring scientific rigour to all business decisions at Schroders. In essence, we do this by:
1. making available alternative data sources,
2. unlocking the value in data by providing a research service, answering business questions by analysing these datasets,
3. scaling the value in data by building Insight Products: generalising those analyses or anticipating those questions by alerting people to relevant changes before they know to ask.
Through all these we use specialist Data Science tools and techniques: cloud technologies, machine learning, statistical techniques, and insights from the world of behavioural science.
The quantity of information available for investment research purposes is increasing at such a rate that traditional industry practices and skillsets are unable to absorb and process it. Global trends in digitalisation, social media, open data and technology are all creating vast streams of alternative data that are often highly unstructured and obscure. However, they contain valuable and often rare insights. The DIU aims to find these new and potentially unorthodox datasets, extract the rich, hidden information they contain and use its expertise to improve traditional fundamental research.
The Data Science Engineering team supports these goals by providing the Data Science Platform and the capabilities to deliver the full life cycle of an Insight Product - insight discovery and development, its management, and its consumption. In addition to the platform and shared services, we also contribute directly to product development by embedding within cross-functional product teams.
What you'll do
- Designing, developing and delivering Data Science Platform and associated capabilities
- Promoting, implementing and delivering tooling for sound platform and data science engineering practices, approaches and technologies across the platform and product teams
- Collaborating with other delivery stakeholders (cloud infrastructure, data engineering, enterprise data) to identify and integrate shared components and capabilities (data access, data cataloguing, lineage tracking)
- Contributing to design, peer code reviews, delivery planning and preparation of releases for the platform and insight products
The knowledge, experience and qualifications you need
- Experience designing, developing and delivering software products on cloud platforms (preferably AWS) for data science and machine learning workflows. Python as the primary language, with tools such as Jupyter, Git and pytest, and approaches such as infrastructure as code, serverless and containerisation.
- Experience developing data transformation and feature engineering workflows with best practices for data versioning, cataloguing and lineage tracking, leveraging tools and frameworks such as Spark, pandas, dbt, Airflow, Dagster and Prefect.
- Familiarity with agile practices and experience in product-oriented development
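As a loose illustration of the kind of pandas-based feature engineering and pytest-style testing referenced above (the function, column names and values here are invented for the example, not taken from the role):

```python
import pandas as pd

def add_rolling_mean(prices: pd.DataFrame, window: int = 3) -> pd.DataFrame:
    """Append a rolling-mean feature column to a price series.

    Illustrative only: a pure function over a DataFrame keeps the
    transformation easy to version, test and track for lineage.
    """
    out = prices.copy()  # avoid mutating the caller's frame
    out["close_rolling_mean"] = (
        out["close"].rolling(window, min_periods=1).mean()
    )
    return out

# A pytest-style unit test of the transform.
def test_add_rolling_mean():
    df = pd.DataFrame({"close": [1.0, 2.0, 3.0, 4.0]})
    result = add_rolling_mean(df, window=2)
    assert list(result["close_rolling_mean"]) == [1.0, 1.5, 2.5, 3.5]
```

Small, pure transformation functions like this compose naturally into the orchestrated pipelines (Airflow, Dagster, Prefect) the role mentions.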
The knowledge, experience and qualifications that will help
- Developing infrastructural libraries and frameworks to support data discovery, transformation, and rigorous statistical and machine learning model development and serving (preferably leveraging the AWS SageMaker ecosystem).
- Understanding of, and experience with, the development life cycle of machine learning models - training, evaluation, monitoring and hosting - with AWS tools such as SageMaker, Step Functions and AWS Glue.
- Understanding of traditional statistical data science or bioinformatics workflows, and of tools and techniques such as Snakemake, scikit-learn pipelines, the tidyverse, MLflow and Metaflow.
- Experience with a variety of data storage and query engines (geospatial, time-series, graph, textual, relational, object).
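For a flavour of the scikit-learn pipelines mentioned above, a minimal sketch (the features, labels and model choice are arbitrary assumptions for illustration, not part of the role):

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Chain preprocessing and modelling into one object so the steps are
# fitted, evaluated, versioned and served together.
pipeline = Pipeline([
    ("scale", StandardScaler()),      # normalise features
    ("model", LogisticRegression()),  # simple baseline classifier
])

# Toy training data: four samples, two features, binary labels.
X = [[0.0, 1.0], [1.0, 0.0], [2.0, 1.0], [3.0, 0.0]]
y = [0, 0, 1, 1]
pipeline.fit(X, y)
print(pipeline.predict([[2.5, 0.5]]))
```

Wrapping the whole workflow in a single `Pipeline` is what lets trackers like MLflow log the preprocessing and the model as one reproducible artefact.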
What you'll be like
- Able to design and deliver platform capabilities and features
- Pragmatic: willing to take localised action while understanding the bigger picture.
- Comfortable with ambiguity, but taking the initiative to reduce it.
- Comfortable listening to and understanding the different needs of the platform's stakeholders and users (data scientists, analysts and engineers), while being able to balance and communicate their shared needs.
We're looking for the best, whoever they are
Schroders is an equal opportunities employer. You're welcome here whatever your socio-economic background, race, sex, gender identity, sexual orientation, religious belief, age or disability.