
M3 - DataOps & Framework Engineering Lead

Added: 11 days ago
Location:
Type: Full time
Salary: Not Specified


DataOps & Framework Engineering Lead

Job Family: Decision Sciences
Sub-family: Data Engineering

 

Reports to (role): Sr Data Engineer & Data Platform Manager

Objective of the Role

The DataOps & Framework Engineering Lead is responsible for leading the DataOps Engineering Lead and the Framework Team. This role sets the technical and operational standards that ensure data pipelines and the data framework are reliable, automated, observable, and scalable, thereby improving time to market. Working primarily with Databricks on AWS, and optionally integrating with GCP and Azure environments, you'll help build a platform that powers analytics, business intelligence, and AI use cases across the company.

 

Main Responsibilities

  • Lead and mentor the Framework Engineering Team, fostering a culture of accountability, continuous improvement, and technical excellence.

     

  • Define and share best practices for implementing CI/CD pipelines and automation using tools such as GitHub Actions and Terraform.

     

  • Define and share the Framework Architecture and Data Contract specifications.

     

  • Design, implement, and oversee observability standards: logging, monitoring, alerting, and retries across the entire pipeline lifecycle.

     

  • Ensure alignment between the DataOps and Framework teams and other technical chapters (Engineering, Platform, Architecture, Security) to support cross-domain pipelines.

     

  • Collaborate with business stakeholders and tech leads to proactively manage delivery plans, risks, and dependencies.

     

  • Act as the technical authority for incident response, root cause analysis, and resilience strategies in production environments.

     

  • Promote infrastructure as code (IaC) practices and drive automation across cloud environments.

     

  • Monitor resource usage and optimize cloud costs (Databricks clusters, compute, storage).

     

  • Facilitate team rituals (1:1s, planning, retros) and create career development opportunities for team members.

     

  • Promote an autonomous work culture by encouraging self-management, accountability, and proactive problem-solving among team members.

     

  • Serve as a Spin Culture Ambassador to foster and maintain a positive, inclusive, and dynamic work environment that aligns with the company's values and culture.

     

 

 

 

Required Knowledge and Experience

 

  • 10+ years leading DataOps and Framework developers or DevOps, with at least 5 years in a technical leadership role overseeing and mentoring DevOps, DataOps, and Data Engineers. Demonstrated experience managing complex projects, coordinating team efforts, and ensuring alignment with organizational goals.

     

  • Advanced hands-on experience with Databricks, including Unity Catalog, Delta Live Tables, job orchestration, and monitoring.

     

  • Solid experience in cloud platforms, especially AWS (S3, EC2, IAM, Glue).

     

  • Experience with CI/CD pipelines (GitHub Actions, GitLab CI) and orchestration frameworks (Airflow or similar).

     

  • Proficient in Python, SQL, and scripting for automation and data operations.

     

  • Strong understanding of data pipeline architectures across batch, streaming, and real-time use cases.

     

  • Technical Skills: Proficiency in DevOps tools and technologies such as Jenkins, Docker, Kubernetes, Terraform, Ansible, and cloud platforms (e.g., Databricks, AWS, Azure, GCP).

     

  • Soft Skills: Strong leadership, communication, and collaboration skills. Excellent problem-solving abilities and a proactive approach to learning and innovation.

     

  • Experience implementing monitoring and data quality checks.

     

  • Effective communicator who can bridge technical and business needs.

     

Preferred Qualifications:

     

  • Experience with microservices architecture and containerization technologies.

     

  • Familiarity with ITIL or other IT service management frameworks.

     

  • Certification in cloud platforms or DevOps practices.

     

  • Experience working with Google Cloud Platform (GCP) services such as BigQuery, Cloud Functions, Pub/Sub, or Composer.

     

  • Fluent English

     

 

Spin is committed to a diverse and inclusive workplace. We are an equal opportunity employer and do not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, disability, age, or any other legally protected status. If you would like to request an accommodation, please notify your Recruiter.
