Senior DevOps Engineer - Consumer Media (Contract)

Bloomberg
in New York, NY
Permanent, Full time
Competitive
**Please note that this position is a contract position**

Our Team:
Bloomberg Media empowers global business leaders with breaking news, expert opinion, and proprietary data distributed on every platform, across every time zone, reaching over 80 million unique visitors a month through its digital properties. Our applications are sophisticated, built using the latest technologies, and require modern, scalable infrastructure with a high degree of automation to run efficiently.
Media Data Science and Engineering is responsible for the real-time services that power all news on Bloomberg.com, serving complex publishing workflows and handling tens of thousands of content queries a minute. We also maintain a data pipeline comprising dozens of ETL jobs that aggregate datasets in our Data Lake, and, using a mix of open-source and public cloud technologies, we provide a consistent query interface to hundreds of terabytes of data. This data empowers teams of analysts and data scientists to improve our customers' experiences through machine learning, A/B testing, and data-driven decision making. Finally, we support product managers, marketers, and analysts with data reporting tools and Quorum, our internal customer data platform.
At the heart of all of this is critical cloud infrastructure and a need for modern DevOps practices to manage it. It's important for us to be scalable in how we provision, manage, and maintain this infrastructure while continuing to develop our existing and future projects, so we use standard infrastructure automation tooling paired with a custom abstraction layer for multi-cloud support and project templating.

What's in it for you:
You will work closely with the infrastructure teams to implement best practices and build automation to run the team's applications efficiently. You will have the opportunity to develop the tools and processes that fundamentally change how we manage our infrastructure. We operate in a hybrid cloud environment where applications run on either the public cloud or our internal cloud. You will work with the cloud and infrastructure teams to migrate applications and backends to the public cloud and build abstractions that allow us to run these seamlessly in a multi-cloud environment.
 
We'll trust you to:
  • Handle the version upgrades and migrations of our Airflow and Elasticsearch infrastructure
  • Extend our multi-cloud infrastructure-as-code solution to enable GCP as a provider, for use by our Elasticsearch stack
  • Work with our infrastructure and public cloud engineering teams to migrate our web news application to AWS and GCP
  • Ensure that all of these infrastructure upgrades and migrations are transparent with respect to existing expectations around availability, throughput, and latency

You need to have:
  • 3+ years of experience working on highly available, fault-tolerant distributed systems
  • 2+ years of experience with infrastructure automation tools such as Terraform, BOSH, Chef, or Capistrano
  • 2+ years of experience with public cloud infrastructure (AWS, Azure, or Google Cloud)
  • 2+ years of experience in at least one of the following programming languages: Ruby, JavaScript, Java, or Python
  • A strong understanding of operating systems and the nuances of Linux
  • A strong understanding of networking fundamentals including DNS, load balancing, proxies and firewalls
  • Knowledge of network and application performance analysis using standard UNIX tools
  • A solid understanding of modern software development lifecycle (SDLC) practices such as Continuous Integration and Continuous Delivery
  • A BA, BS, MS, or PhD in Computer Science, Engineering, or a related technology field

We'd love to see:
  • 2+ years of experience managing web scale infrastructure using modern technologies and DevOps principles
  • Expertise in Kubernetes, both as a user and as an owner of the platform
  • Expertise in Docker or other containerization technologies
  • Expertise in analyzing and troubleshooting large-scale distributed systems
  • Knowledge of Elasticsearch for distributed search and Airflow for job scheduling and monitoring
  • Experience with Jenkins and Jenkins Pipelines

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
