Added: 2 days ago
Type: Full time
Salary: Not provided

Related skills

Java, SQL, Python, Go, Airflow

📋 Description

  • Design, build, and maintain scalable data infrastructure for analytics
  • Develop and operate ETL pipelines to ingest and transform large datasets
  • Work with distributed data processing frameworks such as Spark, Hive, or similar
  • Use SQL and data modeling techniques to structure and optimize datasets for analytics use cases
  • Process and analyze large volumes of structured and semi-structured data using Spark and Presto
  • Write production-quality code using Python, Java, Scala, or Go
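The ETL responsibilities above can be sketched as a toy extract-transform-load step in Python. This is an illustrative sketch only: the `events` table, its schema, and the sample rows are made-up assumptions, not details from the posting; SQLite stands in for a real warehouse.

```python
import sqlite3

def run_etl(rows):
    """Toy ETL: take raw event dicts, transform them (filter malformed
    rows, normalize event names, cast amounts), and load them into a
    SQLite table standing in for an analytics warehouse."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (user_id INTEGER, event TEXT, amount REAL)")

    # Transform: drop rows missing required fields, normalize text, cast types.
    cleaned = [
        (r["user_id"], r["event"].strip().lower(), float(r["amount"]))
        for r in rows
        if r.get("user_id") is not None and r.get("amount") is not None
    ]

    # Load in a single batch insert.
    conn.executemany("INSERT INTO events VALUES (?, ?, ?)", cleaned)
    conn.commit()
    return conn

raw = [
    {"user_id": 1, "event": " Purchase ", "amount": "9.99"},
    {"user_id": 2, "event": "refund", "amount": None},   # dropped: no amount
    {"user_id": 1, "event": "PURCHASE", "amount": "5.00"},
]
conn = run_etl(raw)
total = conn.execute(
    "SELECT SUM(amount) FROM events WHERE event = 'purchase'"
).fetchone()[0]
print(round(total, 2))  # 14.99
```

In a production pipeline the transform and load steps would typically be separate, idempotent tasks scheduled by an orchestrator such as Airflow, rather than one in-process function.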

🎯 Requirements

  • 5+ years of experience in Data Engineering
  • Strong expertise in SQL, including joins, aggregations, unions, and window functions
  • Hands-on experience with data modeling and schema design for analytical systems
  • Experience building ETL pipelines using Airflow or similar orchestration tools
  • Experience with Big Data ecosystems, including Hadoop, Hive, Spark, or related technologies
  • Programming experience in Python, Java, Scala, or Go
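The window-function requirement above can be exercised with Python's built-in `sqlite3` module (SQLite 3.25+ supports window functions). The `orders` table and its columns are hypothetical, chosen only to demonstrate a per-partition running total:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, order_day INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("a", 1, 10.0), ("a", 2, 20.0), ("b", 1, 5.0), ("b", 3, 15.0)],
)

# Running total per customer, ordered by day: a classic window function
# of the kind interviewers probe for alongside joins and aggregations.
rows = conn.execute("""
    SELECT customer,
           order_day,
           SUM(amount) OVER (
               PARTITION BY customer
               ORDER BY order_day
           ) AS running_total
    FROM orders
    ORDER BY customer, order_day
""").fetchall()

for row in rows:
    print(row)
# ('a', 1, 10.0)
# ('a', 2, 30.0)
# ('b', 1, 5.0)
# ('b', 3, 20.0)
```

Unlike a `GROUP BY` aggregate, the window function keeps one output row per input row while computing the cumulative sum within each `customer` partition.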

🎁 Benefits

  • 100% Remote Work: Work from anywhere with a laptop
  • Competitive USD pay
  • Paid Time Off: rest and recharge
  • Autonomy: manage your time and deliver results
  • Work with top American companies on high-impact projects

