Related skills
BigQuery, Docker, Snowflake, SQL, Python

📋 Description
- Architect, implement, and maintain scalable data pipelines and feature stores for batch and real-time workloads.
- Build reproducible ML training, evaluation, and inference workflows using modern orchestration and MLOps tooling (see the pipeline sketch after this list).
- Integrate event streams from Twilio products into analytics-ready datasets.
- Monitor, test, and improve data quality, model performance, latency, and cost.
- Partner with product, data science, and security teams to ship resilient, compliant services.
- Automate deployment with CI/CD, infrastructure-as-code, and container orchestration best practices.
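As a loose illustration of the orchestration work described above, here is a minimal batch-pipeline sketch using Airflow's TaskFlow API (Airflow 2.4+ assumed). The DAG name, task names, schedule, and stubbed load step are hypothetical placeholders for this posting, not details of the role.

```python
# Minimal sketch of a batch ETL pipeline with Airflow's TaskFlow API.
# All names here (event_ingestion, extract_events, etc.) are hypothetical.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@hourly", start_date=datetime(2024, 1, 1), catchup=False)
def event_ingestion():
    @task
    def extract_events() -> list[dict]:
        # Pull raw events from an upstream source (stubbed for the sketch).
        return [{"event_id": 1, "type": "message.sent"}]

    @task
    def transform(events: list[dict]) -> list[dict]:
        # Normalize raw events into an analytics-ready shape.
        return [{**e, "ingested_at": datetime.utcnow().isoformat()} for e in events]

    @task
    def load_to_warehouse(rows: list[dict]) -> None:
        # Stubbed load step; real code would use a Snowflake or
        # BigQuery connector/hook here.
        print(f"loading {len(rows)} rows")

    # Chaining the task functions defines the DAG's dependencies.
    load_to_warehouse(transform(extract_events()))


event_ingestion()
```

In practice each task would be idempotent and parameterized by the run's logical date, so backfills and retries stay safe.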
🎯 Requirements
- B.S. in Computer Science, Data Engineering, Electrical Engineering, Mathematics, or related field—or equivalent practical experience.
- 3–5 years building and operating data or ML systems in production.
- Proficient in Python and SQL; comfortable with software engineering fundamentals (testing, version control, code reviews).
- Hands-on experience with ETL/ELT orchestration tools (Airflow, Dagster) and cloud data warehouses (Snowflake, BigQuery, or Redshift).
- Familiarity with ML lifecycle tooling such as MLflow, SageMaker, Vertex AI, or similar (see the tracking sketch after this list).
- Working knowledge of Docker and Kubernetes, plus at least one major cloud platform (AWS, GCP, or Azure).
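For the ML lifecycle tooling mentioned above, here is a minimal sketch of experiment tracking with MLflow. The model, dataset, and metric are illustrative stand-ins (scikit-learn is assumed to be installed); nothing here is specific to this role.

```python
# Minimal sketch of reproducible experiment tracking with MLflow.
import mlflow
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy data and model; stand-ins for a real training workflow.
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Logging parameters and metrics per run makes training runs
# reproducible and comparable across experiments.
with mlflow.start_run(run_name="baseline"):
    model = LogisticRegression(max_iter=200).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("accuracy", accuracy)
```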
🎁 Benefits
- Competitive pay and generous time off
- Healthcare, retirement savings, parental and wellness leave
- Equity plan eligibility and annual bonus plan
- Remote-first culture with global impact