Related skills
SQL, Python, Scala, Kafka, Apache Spark

📋 Description
- Provide technical leadership to guide customers to successful big data implementations.
- Design and implement data engineering workloads on Databricks.
- Architect production data pipelines with end-to-end performance testing.
- Specialize in data lake, streaming, or ingestion areas.
- Mentor and train teams; contribute to internal programs.
- Collaborate with Solution Architects to expand Databricks usage.
🎯 Requirements
- Fluent in English, plus Portuguese or Spanish.
- 7+ years in a technical role with data engineering focus.
- Extensive experience building big data pipelines and maintaining production data systems.
- Deep expertise in ETL scaling, Hadoop-to-cloud migrations, large-scale ingestion, or Delta Lake.
- Bachelor’s degree in Computer Science, Information Systems, or Engineering; production experience with SQL, Python, Scala, or Java.
- 2+ years with Big Data technologies (Spark, Hadoop, Kafka) and 2+ years in a customer-facing role; willingness to travel up to 30%.
🎁 Benefits
- Benefits vary by region; see https://www.mybenefitsnow.com/databricks for details.
- Comprehensive benefits and programs across regions.