Related skills
Java, SQL, Python, Go, Scala

Description
- Design, build, and maintain scalable data infrastructure for analytics and reporting.
- Develop and operate ETL pipelines to ingest, transform, and deliver large datasets.
- Work with distributed data processing frameworks (Spark, Hive, etc.).
- Use SQL and data modeling to structure and optimize datasets for analytics.
- Process large volumes of structured and semi-structured data with Spark and Presto.
- Write production-grade code in Python, Java, Scala, or Go.
Requirements
- 5+ years in Data Engineering, building data infrastructure and pipelines.
- Strong SQL expertise, including joins, aggregations, unions, and window functions.
- Hands-on experience with data modeling and schema design for analytical systems.
- Experience building ETL pipelines using Airflow or similar tools.
- Experience with Big Data ecosystems (Hadoop, Hive, Spark, etc.).
- Programming in Python, Java, Scala, or Go.
- Familiarity with UNIX/Linux environments and shell scripting.
- Understanding of software engineering best practices: testing, monitoring, and documentation.
- Strong collaboration and communication with analysts and stakeholders.
- Ability to troubleshoot data issues across pipelines and BI tools.
Benefits
- 100% remote work from anywhere with internet access.
- Competitive pay in USD.
- Paid time off to unwind and recharge.
- Work with autonomy: manage your time to deliver results.
- Work with top American companies on high-impact projects.