The Role Responsibilities
Work with stakeholders throughout the organization to identify opportunities where data can be used to drive business solutions.
Support the evaluation of various Machine Learning / Artificial Intelligence solutions and make recommendations.
Assess the effectiveness and accuracy of new data sources and data-gathering techniques.
Develop custom data models and algorithms to apply to data sets.
Develop testing frameworks to assess model quality and accuracy.
Develop a model delivery framework supporting all required operational and regulatory controls.
Develop processes and tools to monitor and analyse model performance and data accuracy.
Support agile teams in developing innovative solution designs around the use of data and Artificial Intelligence platforms, no matter how large or small the problem.
Contribute to, and ensure adherence to, the overall platform architecture and direction.
Ensure teams follow and contribute to the development of robust DevOps practices like continuous integration, code scanning, automated testing, configuration management, release management, blue-green deployment, online collaboration, cloud hosting, and system monitoring with respect to AI/ML platforms.
Implement Continuous Integration (CI) and Continuous Deployment (CD) pipelines within AI/ML platforms by working with the Credit Origination Development team and the SCB Enterprise DevOps team.
Our Ideal Candidate
Knowledge of the C3 AI/ML Platform and technologies.
Strong experience in agile methodologies and test-driven development.
Strong problem-solving skills.
Knowledge of using statistical languages (Python, R, SQL, etc.) to manipulate data and draw insights from large data sets.
Knowledge of working with and creating data architectures.
Knowledge of a variety of machine learning techniques and concepts (clustering, decision tree learning, neural networks, etc.).
Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.).
Knowledge of visualizing/presenting data for stakeholders.
We're looking for someone who has 5-7 years of experience manipulating data sets and building statistical models, holds a Master's or PhD in Statistics, Mathematics, Computer Science or another quantitative field, and is familiar with the following software/tools:
Coding knowledge and experience with several languages: C, C++, Java.
Knowledge and experience in statistical and data mining techniques: GLM/Regression, Random Forest, Boosting, Trees, text mining, social network analysis, etc.
Experience using web services: Redshift, Amazon S3, Spark, etc.
Experience creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modelling, clustering, decision trees, neural networks, etc.
Experience with distributed data/computing tools: Map/Reduce, Hadoop, Hive, Gurobi, MySQL, etc.
Experience with UI frameworks such as React, Angular and Swing.
Strong knowledge of CI/CD tools such as Ansible, Artifactory, Jenkins, Bitbucket, SonarQube, Fortify, Flyway, Jira and Confluence.
Strong knowledge of automated testing tools such as JUnit, Mockito, Cucumber and Selenium.
Strong knowledge of scripting languages such as Groovy and Linux shell.
Knowledge of Integrated Development Environments (IDEs) such as Eclipse and IntelliJ IDEA.
Proven ability to work within a team environment.
Highly effective verbal and written English communication skills.
Ability to make sound decisions and use independent judgement.
Strong reasoning, analytical and interpersonal skills.
Excellent attention to detail and time management.
Experience and knowledge of change management principles and methodologies.
Good track record of communicating effectively with both business and IT staff at all levels.
Experience in promoting DevOps culture among developers and testers.
Good presentation skills.