Related skills
Redshift, AWS, Python, Apache Spark, Apache Airflow

Description
- Build and maintain scalable data pipelines for analytics and product data.
- ETL/ELT development to ingest, transform, and deliver data.
- Workflow orchestration with Apache Airflow for reliable scheduling and monitoring.
- Leverage Trino/Presto, Spark, and AWS data tools to enable analytics.
- Data modeling and warehousing: schema design and scalable data architecture.
- Monitor pipelines, improve performance, and ensure data quality and reliability.
Requirements
- 5+ years of experience building data pipelines in data engineering or related roles.
- Hands-on with Airflow, Spark, Iceberg, Trino/Presto.
- AWS: S3, Redshift, Glue, Athena, Lambda.
- Python for data processing and pipeline development.
- Data warehousing concepts, schema design, and data modeling.
- Experience deploying and supporting data pipelines in production. Bonus: Looker/Tableau, data governance, and CI/CD.
Benefits
- Generous health coverage for you and your family.
- 401(k) match after 60 days, fully vested after 1 year.
- HSA contribution and an HRA to offset deductibles.
- Flexible vacation policy; take time off as needed.
- Volunteer Time Off (VTO) and community events.
- Rest, Relax, and Recharge time for mental health.