Related skills
SQL, Python, Hadoop, Airflow, Spark

Description
- Design, build, and maintain scalable data infrastructure for analytics.
- Develop and run ETL pipelines for large-scale datasets.
- Work with Spark, Hive, and other distributed processing frameworks.
- Use SQL and data modeling to optimize analytics datasets.
- Write production-grade code in Python, Java, Scala, or Go.
- Ensure data reliability across hundreds of ETL pipelines.
Requirements
- 5+ years of experience in data engineering and building data pipelines.
- Advanced SQL skills (joins, aggregations, window functions).
- Data modeling and analytical schema design.
- Experience building ETL pipelines with Airflow or a similar orchestrator.
- Big Data technologies: Hadoop, Hive, Spark, Presto.
- Programming in Python, Java, Scala, or Go.
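To illustrate the window-function requirement above, here is a minimal sketch using Python's built-in sqlite3 module (SQLite has supported window functions since version 3.25, which ships with Python 3.8+). The `events` table, column names, and data are hypothetical, chosen only to show a per-group running total:

```python
import sqlite3

# In-memory database; table and values are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, amount INTEGER)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("a", 10), ("a", 30), ("b", 20), ("b", 5)],
)

# Window function: running total of amount per user, ordered by amount.
rows = conn.execute(
    """
    SELECT user_id,
           amount,
           SUM(amount) OVER (
               PARTITION BY user_id
               ORDER BY amount
           ) AS running_total
    FROM events
    ORDER BY user_id, amount
    """
).fetchall()

for row in rows:
    print(row)
```

The `PARTITION BY` clause restarts the running sum for each user, which is the kind of per-entity aggregation common in analytics datasets.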
Benefits
- 100% Remote Work
- Competitive USD pay
- Paid Time Off
- Work with Autonomy
- Work with Top American Companies