Related skills
ETL, Python, Databricks, Data Pipelines, Performance Tuning

📋 Description
- Design, build, and maintain scalable ETL pipelines for large-scale data processing.
- Implement PySpark-based data transformations and workflows (an illustrative sketch follows this list).
- Develop, manage, and optimize data pipelines with Databricks.
- Optimize pipelines for performance, scalability, and cost efficiency.
- Troubleshoot, debug, and resolve data processing issues.
- Collaborate with cross-functional teams to deliver high-quality data solutions.
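For illustration only, a day-to-day PySpark transformation of the kind described above might look like the sketch below; the bucket paths, column names, and aggregation logic are hypothetical placeholders, not a prescribed design.

```python
# Minimal PySpark ETL sketch: read raw events, aggregate, write partitioned output.
# All paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Read raw event data (placeholder path).
raw = spark.read.parquet("s3://example-bucket/raw/events/")

# Derive a date column and aggregate per customer per day.
daily = (
    raw.withColumn("event_date", F.to_date("event_ts"))
       .groupBy("customer_id", "event_date")
       .agg(
           F.count("*").alias("event_count"),
           F.sum("amount").alias("total_amount"),
       )
)

# Write partitioned output for downstream consumers (placeholder path).
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_customer_events/"
)
```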
🎯 Requirements
- 5–7 years of professional data engineering experience.
- Hands-on PySpark (intermediate–advanced).
- Databricks experience including Autoloader and Python workflows (see the sketch after this list).
- Proven track record of optimizing pipelines for performance and cost.
- Strong ETL knowledge and experience with large-scale data transformations.
- Excellent problem-solving and cross-functional communication.
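As a point of reference for the Databricks Autoloader requirement, an incremental file-ingestion job might look like the sketch below; the landing path, schema and checkpoint locations, and target table are hypothetical placeholders, and `spark` is the session Databricks provides by default.

```python
# Minimal Databricks Autoloader (cloudFiles) ingestion sketch.
# Paths, schema/checkpoint locations, and the target table are hypothetical placeholders.

# Incrementally discover and read new JSON files landing in cloud storage.
df = (
    spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", "/mnt/example/_schemas/events")
        .load("/mnt/example/landing/events/")
)

# Append new records to a bronze Delta table, processing available files once per run.
(
    df.writeStream
      .option("checkpointLocation", "/mnt/example/_checkpoints/events")
      .trigger(availableNow=True)
      .toTable("example_catalog.bronze.events")
)
```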
🎁 Benefits
- 100% Remote Work.
- Competitive USD Pay.
- Paid Time Off.
- Work with Autonomy.
- Work with Top American Companies.