- Kuala Lumpur, Malaysia
- Permanent, Full time
- 19 Feb 19
Join Accenture and help transform leading organizations and communities around the world. The sheer scale of our capabilities and client engagements and the way we collaborate, operate and deliver value provides an unparalleled opportunity to grow and advance. Choose Accenture, and make delivering innovative work part of your extraordinary career.
People in our Client & Market career track drive profitable growth by developing market-relevant insights to increase market share or create new markets. They advance through promotion into market-facing roles that have a direct impact on sales.
Job Role / Responsibilities:
- Takes charge of data analysis and modeling projects. Scope includes project design, business review meetings with external clients to obtain requirements and deliverables, receiving and processing data, performing analyses and modeling, preparing final reports/presentations, communicating results and supporting implementation.
- Demonstrates to stakeholders the business value of implementing analytics. Provides technical support, such as needs assessments, scoping and preparation/presentation of proposals.
- Employs advanced statistical methods to create high-performing predictive models and innovative analyses that address business objectives. Tests new statistical analysis methods, software and data sources to continually improve quantitative solutions.
- Applies data wrangling, data matching and ETL techniques. Works with a variety of data sources, performs summary analyses and prepares modeling datasets. Deploys analytical solutions in production systems.
- Communicates clearly on product design, data specifications and model implementations. Conveys project/test results, opportunities and questions to clients effectively. Resolves problems and works towards timely, high-quality project completion.
- Serves as the data science subject matter expert in meetings with internal stakeholders and external vendors. Actively participates in proofs of concept with new data, software and technologies.
Job Role / Responsibilities:
- Experience working with large data sets, including statistical analyses, data visualization, data mining, data cleansing/transformation and machine learning.
- Good knowledge of AI/ML models, software, and tools with the ability to conceptualize and architect the key components of AI/ML projects; and to develop prototypes using statistical software packages such as R/SAS/SPSS.
- Experience in developing end-to-end machine learning solutions from data exploration, feature engineering, model building, performance evaluation to online testing with large data sets.
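As a hedged illustration of the end-to-end workflow this role asks for (data preparation, feature scaling, model building and hold-out evaluation), here is a minimal sketch using scikit-learn on synthetic data. The dataset, pipeline steps and parameters are illustrative assumptions, not part of the posting.

```python
# Minimal end-to-end sketch: synthetic data stands in for a prepared
# client extract; all names and parameters are illustrative only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic modeling dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Bundling scaling and model fitting in a Pipeline ensures the same
# transform is applied consistently at training and scoring time.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

# Performance evaluation on held-out data.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"hold-out AUC: {auc:.3f}")
```

In a production deployment the fitted pipeline would typically be serialized and served behind a scoring interface, rather than retrained per request.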
Qualifications / Experiences:
- Advanced degree with concentration in a quantitative discipline. Relevant fields include statistics, computer science, mathematics and economics.
- 2-3 years of relevant industry experience preferred, or a Ph.D. with concentration in similar fields.
- Deep expertise in statistical modeling techniques such as linear regression, logistic regression, tree models (Random Forests and GBM), GLMs, cluster analysis and principal components. Strong expertise in regularization (Ridge, Lasso, elastic net), variable selection, feature creation (transformation, binning, high-cardinality categorical reduction, etc.) and validation (hold-outs, cross-validation, bootstrap).
- Experience in database systems (e.g. Hadoop, Oracle), ETL/data lineage software (Talend, Informatica), data modeling and governance.
- Experience with data visualization tools (e.g. R Shiny, Spotfire, Tableau, Qlik).
- Able to create effective PowerPoint presentations
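The regularization and validation techniques listed above can be sketched briefly; the following is an illustrative example (not from the posting) comparing Ridge and Lasso by cross-validated R² on synthetic regression data, using scikit-learn with assumed, arbitrary hyperparameters.

```python
# Illustrative sketch: Ridge vs. Lasso regularization, validated by
# 5-fold cross-validation (one of the hold-out/CV/bootstrap schemes
# the role mentions). Data and alpha values are arbitrary assumptions.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge, Lasso
from sklearn.model_selection import cross_val_score

# Synthetic data: only 10 of 50 features carry signal, so the
# sparsity-inducing Lasso penalty is a natural candidate here.
X, y = make_regression(n_samples=200, n_features=50, n_informative=10,
                       noise=10.0, random_state=0)

for name, est in [("ridge", Ridge(alpha=1.0)), ("lasso", Lasso(alpha=1.0))]:
    scores = cross_val_score(est, X, y, cv=5, scoring="r2")
    print(f"{name}: mean CV R^2 = {scores.mean():.3f}")
```

In practice the penalty strength `alpha` would itself be tuned by nested cross-validation (e.g. `RidgeCV`/`LassoCV`) rather than fixed as it is in this sketch.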
Qualifications / Experiences:
- Bachelor's degree in a relevant field such as statistics, computer science, applied mathematics or physics, or 4+ years of experience in computer science, applied mathematics or another quantitative/computational discipline.
- 1-2 years of experience working with machine learning libraries such as scikit-learn for Python.
- 1-2 years of experience with scripting languages (e.g. R, Python, Perl, SQL, Bash) for orchestration and data manipulation.
- 1-2 years of experience working with SQL and manipulating structured and unstructured data sources for analysis.
- Experience with heterogeneous databases: HBase, MongoDB, Cassandra, MySQL, SQL Server, PostgreSQL and other NoSQL/Hadoop stores
- Cloudera ecosystem: HBase, Spark, Hue, Hive, Pig
- Continuous integration: GitHub, Jenkins
- Machine learning libraries: scikit-learn, Vowpal Wabbit, H2O.ai
- Machine learning technologies: TensorFlow (GPU), Theano, Keras
- Others: AngularJS, Cordova
- At least 5 years of experience developing quantitative models and performing data analysis, with depth in a practice area within the Operations Research and Applied Statistics/Mathematics domains.