Related skills
AWS, SQL, Python, Hadoop, data modeling

Description
- Build scalable, fault-tolerant data processing systems for batch and streaming
- Create data pipelines and models to enable self-service analytics
- Ensure data quality and resilience across sources
- Own data mapping, transformations, and data quality
- Debug low-level systems and optimize performance on large clusters
- Maintain and evolve platforms with modern tech stacks
Requirements
- Extensive SQL skills
- Proficiency in Python
- Experience with big data technologies: HDFS, YARN, MapReduce, Hive, Kafka, Spark, Airflow, Presto
- Data modeling expertise (conceptual, logical, physical)
- Experience with AWS and/or GCP; Looker a plus
- 8+ years of professional data engineering experience; BS in Computer Science required, MS preferred
Benefits
- Global mental health and financial wellness resources
- Healthcare (medical, dental, vision), life and disability insurance
- Retirement options (401(k)/pension)
- Paid time off and personal days