Added: 1 day ago
Type: Full time
Related skills: Redshift, AWS, SQL, Python, Databricks

Description
- Serve as the subject-matter expert (SME) on critical datasets; explain lineage and help partners interpret data.
- Own end-to-end data products from raw data to analytics-ready datasets.
- Architect scalable Databricks pipelines with Spark + SQL (batch/streaming); see the sketch after this list.
- Design lakehouse/warehouse data models; define entities, grains, metrics.
- Integrate diverse data sources; manage schema evolution; deliver curated datasets.
- Establish data quality standards, including monitoring, SLAs, and incident response.
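To illustrate the kind of work described above, here is a minimal sketch of a batch, Databricks-style PySpark job: it reads raw events, curates them into an analytics-ready Delta table, and applies a simple data-quality gate before publishing. The paths, column names, and the 1% threshold are hypothetical examples, not details from this posting.

# Minimal sketch (assumed layout, not the employer's actual pipeline).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curate_events").getOrCreate()

# Bronze -> silver: read raw events and apply light cleansing.
raw = spark.read.format("delta").load("/mnt/raw/events")  # hypothetical path

curated = (
    raw
    .filter(F.col("event_ts").isNotNull())            # drop rows with no timestamp
    .withColumn("event_date", F.to_date("event_ts"))  # partition-friendly grain
    .dropDuplicates(["event_id"])                      # keep re-runs idempotent
)

# Simple data-quality gate: fail the run if too many rows lack a user_id.
total = curated.count()
missing_user = curated.filter(F.col("user_id").isNull()).count()
if total > 0 and missing_user / total > 0.01:
    raise ValueError(f"Quality check failed: {missing_user}/{total} rows missing user_id")

# Publish the curated, analytics-ready dataset.
(
    curated.write.format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .save("/mnt/curated/events")  # hypothetical path
)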
Requirements
- 12+ years in data engineering or software development.
- Python, Spark, and SQL expert.
- BI tools: Tableau, Looker, Preset.
- Databricks, Airflow, Talend for data pipelines.
- Streaming OLAP (Druid/ClickHouse); AWS stack (EMR/Redshift/Kinesis).
- Bachelor's degree in CS/IS; experience with real-time systems, AI/ML personalization, CDPs/DMPs, and security.
Benefits
- Performance bonus potential.
- Flexible time off.
- Comprehensive medical, dental, vision, STD, LTD, and life insurance.
- HSA program.
- Health care and dependent care FSA.
- 401(k) with employer match.
- Commuter benefit.
- Parent support.
- Pet-friendly offices.