Related skills: Redshift, AWS, SQL, Python, Databricks

Description
- Build and strengthen data engineering platforms globally.
- Architect scalable pipelines on Databricks using Spark + SQL.
- Design lakehouse/warehouse models for analytics and self-service.
- Own data products end-to-end from sources to analytics datasets.
- Establish data quality standards, monitoring, and incident response.
- Partner with Product, Engineering, Growth, and Analytics teams to deliver data solutions.
Requirements
- 12+ years in data engineering or software development.
- Proficient in Python, Spark, and SQL.
- Strong experience with Databricks, Airflow, and ETL tooling.
- Experience with AWS services, including EMR (Spark), Redshift, Kinesis, Lambda, Glue, S3, and Athena.
- Experience with streaming OLAP engines such as Druid or ClickHouse.
- Familiarity with customer data platforms (CDPs), data management platforms (DMPs), and data privacy requirements.
- Bachelor's degree in Computer Science or Information Systems.
Benefits
- Flexible time off policies
- Health, dental, vision, short-term disability, long-term disability, and life insurance
- Health Savings Account (HSA) and Flexible Spending Account (FSA)
- 401(k) plan with employer match
- Employer-paid commuter benefit
- Parental leave support
- Pet insurance and pet-friendly offices