Related skills
SQL, Python, Apache Spark, Azure Data Factory, Apache Kafka

Description
- Monitor and troubleshoot data pipelines and data warehouses for quality and performance
- Design and maintain scalable ETL/ELT pipelines for client data from APIs and storage
- Collaborate with client engineering teams on data structures and integrations
- Stay updated on data engineering trends and propose process improvements
- Ensure data security, privacy, and compliance via Azure policies and encryption
- Monitor and optimize Azure storage, processing performance, scalability, and costs
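To illustrate the kind of ETL/ELT work the role involves, here is a minimal sketch in plain Python (all data, field names, and the in-memory "warehouse" are hypothetical; production pipelines would typically use Azure Data Factory or Spark instead):

```python
# Minimal illustrative extract-transform-load sketch.
# Hypothetical payload and schema; not the employer's actual pipeline.
import json

def extract(raw: str) -> list[dict]:
    """Extract: parse a raw JSON payload as returned by a (hypothetical) API."""
    return json.loads(raw)

def transform(records: list[dict]) -> list[dict]:
    """Transform: drop invalid rows and normalize field names and types."""
    return [
        {"client_id": r["id"], "amount_usd": round(float(r["amount"]), 2)}
        for r in records
        if r.get("amount") is not None
    ]

def load(records: list[dict], store: dict) -> None:
    """Load: upsert into an in-memory stand-in for a warehouse, keyed by client_id."""
    for r in records:
        store[r["client_id"]] = r

raw = '[{"id": 1, "amount": "10.5"}, {"id": 2, "amount": null}]'
warehouse: dict = {}
load(transform(extract(raw)), warehouse)
# warehouse now holds only the valid, normalized record for client 1
```

The same extract/transform/load separation carries over directly to managed tooling: each stage maps to an activity in a Data Factory pipeline or a step in a Spark job.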
Requirements
- Bachelor's degree in computer science, information systems, or related field
- 2+ years of experience in cloud data architecture, design, and governance
- Strong knowledge of Azure services: Data Lake Storage, Data Factory, Synapse
- Proficiency in SQL, Python, Scala, or Java
- Experience with Spark, Hadoop, or Kafka on Azure
- Experience creating and maintaining documentation on methodologies and tools
Benefits
- Free premium medical, dental, life and vision insurance
- Generous 401(k) match
- Paid sick leave per policy and applicable laws
- Celebrations and rewards for goals achieved
- Company-sponsored virtual events, happy hours and team-building activities
- Unlimited DTO (time off)