Senior Data Engineer (Databricks) — Data & Analytics Platform | DR
Related skills
ETL · Python · Databricks · Data Pipelines · PySpark
📋 Description
- Design, build, and maintain scalable ETL pipelines for large-scale data processing.
- Implement data transformations with PySpark (intermediate–advanced).
- Work with Databricks to develop, manage, and optimize data pipelines.
- Optimize pipelines for performance, scalability, and cost efficiency.
- Troubleshoot, debug, and resolve data processing issues.
- Collaborate with cross-functional teams to deliver high-quality data solutions.
🎯 Requirements
- 5–7 years of professional experience in data engineering.
- Strong hands-on proficiency with PySpark (intermediate–advanced).
- Solid experience with Databricks, including Autoloader and Python-based workflows.
- Proven experience optimizing data pipelines for performance and cost efficiency.
- Strong understanding of ETL processes and large-scale data transformations.
- Excellent problem-solving skills and the ability to diagnose complex data issues.
🎁 Benefits
- 100% Remote Work: work from anywhere.
- Highly Competitive USD Pay: market-leading compensation paid in USD.
- Paid Time Off: generous PTO.
- Autonomy: manage your time and deliver results.
- Work with Top American Companies: high-impact projects with U.S. firms.
- A Culture That Values You: well-being and work-life balance.