Related skills
Redshift, AWS, S3, Databricks, Airflow
📋 Description
- Take ownership and lead shared initiatives across teams
- Design, build, and optimize scalable data pipelines with Databricks, Redshift, and AWS
- Develop feature stores for ML and analytics; integrate structured and unstructured data
- Automate and orchestrate pipelines using Airflow and Databricks Workflows (see the sketch after this list)
- Architect data infrastructure on AWS; optimize query performance
- Collaborate with Data Scientists, ML Engineers, and stakeholders to drive decisions
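To give a concrete picture of the orchestration work described above, here is a minimal sketch of an Airflow DAG that submits a Databricks run. It assumes Airflow 2.4+ with the apache-airflow-providers-databricks package installed and a configured `databricks_default` connection; the DAG name, notebook path, and cluster spec are hypothetical placeholders, not details from this posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import (
    DatabricksSubmitRunOperator,
)

with DAG(
    dag_id="daily_feature_pipeline",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Submit a one-time Databricks run of a notebook that builds features.
    build_features = DatabricksSubmitRunOperator(
        task_id="build_features",
        databricks_conn_id="databricks_default",
        new_cluster={
            "spark_version": "13.3.x-scala2.12",  # assumed runtime version
            "node_type_id": "i3.xlarge",
            "num_workers": 2,
        },
        notebook_task={"notebook_path": "/pipelines/build_features"},  # hypothetical path
    )
```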
🎯 Requirements
- 5+ years of experience in data engineering and cloud data platforms
- Strong AWS skills (S3, RDS, Redshift, Glue, IAM, DynamoDB)
- Expertise in Spark, Databricks, and Presto for data processing
- Proficiency in Python and PySpark (illustrated below); CI/CD with Jenkins; orchestration with Airflow
- Design of scalable systems; performance tuning
- Leadership and collaboration, including mentoring engineers
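As a rough illustration of the PySpark skills listed above, the sketch below reads raw events from S3 and aggregates per-user features. It assumes a Spark environment already configured with S3 credentials (e.g. Databricks or EMR); the bucket, paths, and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("event_features").getOrCreate()

# Read raw event data from S3 (hypothetical bucket and layout).
events = spark.read.parquet("s3a://example-bucket/events/")

# Aggregate simple per-user features from the event stream.
features = events.groupBy("user_id").agg(
    F.count("*").alias("event_count"),
    F.max("event_ts").alias("last_event_ts"),
)

# Persist the feature table back to S3 for downstream consumers.
features.write.mode("overwrite").parquet("s3a://example-bucket/features/user/")
```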
🎁 Benefits
- Mastery allowance for learning and growth
- Health & wellness benefits from day 1
- Remote setup with MacBook, stipend, and work-from-home budget
- Remote-first culture with asynchronous collaboration
- Townhalls, AMAs, and quarterly celebrations
- Employee Resource Groups and community events