Related skills
NoSQL, Grafana, Databricks, Airflow, Kafka
📋 Description
- Architect distributed data processing systems for modularity and fault tolerance.
- Collaborate on architecture reviews to evolve the data platform for analytics/ML.
- Develop framework components for data quality, observability, and recovery.
- Standardize incremental data processing patterns for idempotency and scale.
- Design schema evolution, metadata processing, and lineage for auditability.
- Optimize storage formats, partitioning, and compute efficiency at scale.
🎯 Requirements
- Bachelor’s degree or US equivalent in Computer Science, Statistics, Information Systems, or a related field; 5+ years of data engineering experience.
- 4+ years designing and building large-scale data pipelines.
- 4+ years with Spark, Kafka, PySpark, and Databricks.
- 4+ years with Airflow and Spark Streaming.
- 4+ years with Python and CI/CD (Jenkins).
- 4+ years building Grafana dashboards for monitoring, data quality, and observability.
🎁 Benefits
- Salary: $147,000 - $185,000/year.
- Competitive benefits package for full-time employees.
- On-site in Lehi, Utah.