Related skills
Java, SQL, Python, Apache Spark, Azure DevOps

Description
- Design, develop, and maintain scalable data pipelines and infrastructure.
- Collaborate with analysts to translate data needs into technical solutions.
- Implement governance, security, and QA for data reliability.
- Manage Spark pools and ETL processes for high-performance workloads.
- Explore new technologies to enhance data infrastructure.
Requirements
- 5+ years of data engineering experience building ETL pipelines and data infrastructure.
- Proficiency in Azure Synapse Analytics, Azure DevOps, Spark.
- Programming proficiency in Python, SQL, and Java.
- Experience with Azure Data Factory and Azure Data Lake Storage.
- Senior-level expertise in data modeling, data warehousing, and ETL.
- Strong communication with non-technical stakeholders.
Benefits
- 100% remote within the United States.
- Must be able to work EST hours.
- Health, dental, and vision insurance, plus a 401(k) with company match.
- Dependent Care FSA and HSA accounts.
- Paid parental and bonding leave.
- Flexible PTO and office closures on major holidays.