Related skills
Java, SQL, Python, Scala, Airflow

Description
- Design, build, and maintain scalable data infrastructure for analytics.
- Develop and operate ETL pipelines to ingest, transform, and deliver large-scale datasets.
- Work with Spark, Hive, or similar distributed data processing frameworks.
- Use SQL and data modeling techniques to structure analytics-ready datasets.
- Process large volumes of structured and semi-structured data with Spark and Presto.
- Write production-quality code in Python, Java, Scala, or Go.
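To make the ingest–transform–deliver pattern above concrete, here is a minimal sketch in plain Python with standard-library tools only. The sample data, field names, and three-stage split are illustrative assumptions, not part of the role's actual stack (which would use Spark or Presto at scale):

```python
import csv
import io
import json

# Hypothetical raw input; a real pipeline would ingest from files or a queue.
RAW_CSV = """user_id,event,amount
1,purchase,19.99
2,refund,-5.00
1,purchase,42.50
"""

def extract(raw: str) -> list[dict]:
    """Parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[dict]:
    """Cast types and keep only purchase events."""
    out = []
    for row in rows:
        if row["event"] == "purchase":
            out.append({"user_id": int(row["user_id"]),
                        "amount": float(row["amount"])})
    return out

def load(rows: list[dict]) -> str:
    """Serialize transformed rows as JSON lines for downstream consumers."""
    return "\n".join(json.dumps(r) for r in rows)

if __name__ == "__main__":
    print(load(transform(extract(RAW_CSV))))
```

The same extract/transform/load shape carries over to distributed frameworks; only the execution engine changes.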
Requirements
- 5+ years in Data Engineering, building and maintaining data pipelines.
- Strong SQL expertise, including joins, aggregations, unions, and window functions.
- Hands-on data modeling and schema design for analytics.
- Experience building ETL pipelines with Airflow or similar orchestration tools.
- Experience with Big Data ecosystems: Hadoop, Hive, Spark, etc.
- Programming in Python, Java, Scala, or Go.
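As a small illustration of the window-function skill listed above, the query below computes a per-user running total with `SUM(...) OVER (PARTITION BY ...)`. It runs against an in-memory SQLite table purely for demonstration (assuming a SQLite build new enough for window functions, 3.25+); the table and values are made up:

```python
import sqlite3

# In-memory table with toy data for the demo.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (user_id INTEGER, amount REAL);
INSERT INTO orders VALUES (1, 10.0), (1, 20.0), (2, 5.0);
""")

# Running total per user: SUM over a window partitioned by user_id.
rows = conn.execute("""
    SELECT user_id,
           amount,
           SUM(amount) OVER (PARTITION BY user_id ORDER BY amount) AS running_total
    FROM orders
    ORDER BY user_id, amount
""").fetchall()

for r in rows:
    print(r)
```

The same query shape works unchanged on Hive, Presto, or Spark SQL, which is why window functions appear alongside the Big Data requirements.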
Benefits
- 100% Remote Work: work from the location that helps you thrive.
- Highly Competitive USD Pay: market-leading compensation in USD.
- Paid Time Off: unwind and recharge with PTO.
- Work with Autonomy: manage your time; focus on results.
- Work with Top American Companies: high-impact projects with U.S. firms.
- A Culture That Values You: well-being and work-life balance.