Related skills: AWS, Python, Kubernetes, Hadoop, Airflow

Description
- Design, build, and manage data pipelines integrated into the data warehouse.
- Develop canonical datasets to track product insights.
- Collaborate with customers to translate workflows into product experiences.
- Partner with Infrastructure, Data Science, Product, Marketing, and Research.
- Implement robust, fault-tolerant data ingestion and processing systems.
- Contribute to data architecture and engineering decisions.
Requirements
- 3+ years as a data engineer; 8+ years in software engineering.
- Proficiency in Python, Scala, or Java.
- Experience with Azure or AWS and tools like Kubernetes and Terraform.
- Experience with Hadoop, Flink, and storage systems such as HDFS/S3.
- Experience with ETL schedulers such as Airflow, Dagster, or Prefect.
- Solid understanding of Spark, with the ability to write, debug, and optimize Spark code.
Benefits
- Salary: USD 255K–325K per year.
- Equity: Offers equity.
- Location: San Francisco, on-site.