Related skills: ETL, Python, Databricks, Data Pipelines, Performance Tuning

📋 Description
- Design, build, and maintain scalable ETL pipelines for large-scale data processing.
- Implement PySpark-based data transformations and workflows (a brief illustrative sketch follows this list).
- Develop and optimize data pipelines with Databricks.
- Optimize pipelines for performance, scalability, and cost efficiency.
- Troubleshoot and resolve data processing issues.
- Collaborate with cross-functional teams on data solutions.
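For a sense of the day-to-day PySpark transformation work described above, here is a minimal sketch of a batch ETL step. The storage paths, column names, and aggregation are hypothetical placeholders for illustration and are not part of the role's actual codebase.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical paths and column names, for illustration only.
spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Read raw order events from the landing zone.
orders = spark.read.parquet("s3://raw-bucket/orders/")

# Deduplicate replayed events and aggregate daily revenue.
daily_revenue = (
    orders
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_revenue"))
)

# Write partitioned output for downstream consumers.
(
    daily_revenue.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://curated-bucket/daily_revenue/")
)
```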
🎯 Requirements
- 5–7 years of professional data engineering experience.
- Strong hands-on PySpark experience (intermediate to advanced).
- Solid Databricks experience, including Autoloader, Python-based workflows, and platform best practices (an Auto Loader sketch follows this list).
- Proven experience optimizing data pipelines for performance and cost efficiency.
- Strong understanding of ETL processes and large-scale data transformations.
- Excellent problem-solving and communication skills.
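As a rough illustration of the Databricks Autoloader experience mentioned above, the following sketch shows an incremental ingestion job using Auto Loader. It assumes a Databricks runtime where `spark` is predefined; the storage paths and target table name are hypothetical.

```python
from pyspark.sql import functions as F

# `spark` is the session provided by the Databricks runtime; all paths and
# the target table name below are hypothetical placeholders.
raw_stream = (
    spark.readStream
    .format("cloudFiles")  # Auto Loader source
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/checkpoints/orders/schema")
    .load("/mnt/raw/orders/")
)

# Add an ingestion timestamp before landing the data in a bronze Delta table.
cleaned = raw_stream.withColumn("ingested_at", F.current_timestamp())

(
    cleaned.writeStream
    .option("checkpointLocation", "/mnt/checkpoints/orders/stream")
    .trigger(availableNow=True)  # process all pending files, then stop
    .toTable("bronze.orders")
)
```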
🎁 Benefits
- 100% Remote Work
- Competitive USD Pay
- Paid Time Off
- Work with Autonomy
- Work with Top American Companies