Here at S&P Global (formerly IHS Markit) we are looking for an adept, action-oriented Senior Data Engineer to build out a multi-tenant data mesh for our soon-to-be-launched digital transformation product, which uses advanced NLP, knowledge engineering, and ML to accelerate innovation in engineering, manufacturing, and scientific operations. The ideal candidate will have strong data infrastructure and data architecture skills, a proven track record of collaboratively and iteratively implementing data-intensive solutions, the operational skills to drive efficiency and speed, project leadership experience, and a clear vision for how data engineering can proactively create positive impact for companies. You will be part of an early-stage team: you will educate stakeholders, mentor team members, and have a significant stake in defining the future of the Data Engineering function for the product.

Job Responsibilities
- Collaborate to design, build, and maintain a multi-tenant Data Mesh within the AWS cloud, comprising Data Lakes, Warehouses, Streaming, Graphs, and analytical NoSQL stores
- Drive adoption and standardization of data governance, lineage, cataloging, and stewardship practices across teams
- Work closely with data scientists, micro-service developers, and security experts to build out a big data platform incrementally and securely
- Work closely with the product management and development teams to rapidly translate the understanding of customer data and requirements to product and solutions
- Maintain an excellent understanding of the business's long-term goals and strategy and ensure that the design and architecture are aligned with them
- Define and manage SLAs for data sets and processes running in production
- Design for disaster recovery balancing availability and consistency in multi-region scenarios
- Research and experiment with emerging technologies and tools related to big data
- Establish and reinforce disciplined software engineering processes and best-practices
Nice to Haves
- Comfort and ideally substantial experience operating big data infrastructure in a cloud-based ecosystem (AWS preferred)
- Deep understanding of the theoretical and practical tradeoffs of various data formats in object/file stores (Parquet, Avro, JSON, etc.) in combination with a variety of ETL tools (Spark, Presto, etc.)
- Deep understanding of the theoretical and practical tradeoffs of various NoSQL stores (Cassandra, Elasticsearch, DynamoDB, etc.) with respect to different read/write patterns and availability/consistency requirements
- Mastery of operating and designing stream-based data systems (Kafka, AWS Kinesis, GCP Pub/Sub, etc.), particularly under varying load
- Proficiency in modern big data architectural approaches (Kappa/Lambda architectures, Data Lake Zones, etc.)
- Experience with data governance, lineage, cataloging tooling (Apache Atlas, Apache Ranger, AWS Glue Catalog, etc.)
- Experience with data pipeline and workflow management tools (AWS Data Pipeline, Apache Airflow, Argo, etc.)
- Experience with stream-processing systems (ksqlDB, Spark Streaming, Apache Beam/Flink, etc.)
- Experience with standard software engineering methodologies (unit testing, code reviews, design documents, continuous delivery)
- Experience developing and deploying production-grade services, SDKs, and data infrastructure with an emphasis on performance, scalability, and self-service
- Ability to conceptualize and articulate ideas clearly and concisely
- Entrepreneurial or intrapreneurial experience where you helped lead the creation of a new product & organization
- Strong algorithms, data structures, and coding background with either Java, Python or Scala programming experience
- Experience working with knowledge graph stores (Stardog, TigerGraph, Ontotext GraphDB, Neo4j) and surrounding semantic technologies (OWL, RDF, SWRL, SPARQL, JSON-LD)
- Experience working with Snowflake data warehouses and dimensional modeling practices
- BA/BS or Master's in Computer Science, Math, Physics, or another technical field
- Experience with 10+ terabyte datasets, ideally up to multiple petabytes
We're building a software solution that connects data in revolutionary ways, illuminating answers that were previously impossible to find and empowering our clients to envision the future so they can determine the best course of action in the present. Join us!

What We Offer
- Competitive base salary and bonus
- A comprehensive benefits package that includes medical, dental, vision and life insurance plans, paid time off, a generous 401k match with no vesting period, parental leave, and 3 volunteering days each year. For more information on benefits, please access the benefits page on our careers site: https://careers.ihsmarkit.com/benefits.php
For work locations in the state of Colorado, the anticipated base salary range for this role is $140,000 - $239,000. Compensation will be determined by the education, experience, knowledge, and abilities of the applicant.
-----------------------------------------------

Equal Opportunity Employer:
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.
If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only:
The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law.