Related skills
Jenkins, AWS, Python, Databricks, Airflow

Description
- Architect and build distributed data processing systems using Python and Spark on AWS.
- Design and implement end-to-end ETL/ELT workflows ingesting data from diverse sources.
- Lead implementation of the Medallion Architecture (Bronze, Silver, and Gold layers) for scalability.
- Build reusable libraries for data quality, metadata, and pipeline monitoring.
- Create CI/CD processes to automate deployment and testing.
- Enforce data governance, security, privacy, and regulatory compliance.
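The Medallion Architecture mentioned above can be sketched conceptually: raw data lands in Bronze as-is, Silver applies cleaning and data-quality rules, and Gold holds business-level aggregates. The sketch below uses plain Python for illustration; in a real platform each layer would be a Spark DataFrame persisted to cloud storage, and the field names here are hypothetical.

```python
# Conceptual Medallion sketch (Bronze -> Silver -> Gold) in plain Python.
# Field names ("user", "amount") are hypothetical examples, not from the posting.

def bronze_ingest(raw_records):
    """Bronze: land raw records unchanged, tagged with their layer."""
    return [dict(rec, _layer="bronze") for rec in raw_records]

def silver_clean(bronze):
    """Silver: drop malformed rows and normalize types."""
    cleaned = []
    for rec in bronze:
        if rec.get("amount") is None:
            continue  # data-quality rule: reject rows missing a required field
        cleaned.append({"user": rec["user"], "amount": float(rec["amount"])})
    return cleaned

def gold_aggregate(silver):
    """Gold: business-level aggregate, e.g. total spend per user."""
    totals = {}
    for rec in silver:
        totals[rec["user"]] = totals.get(rec["user"], 0.0) + rec["amount"]
    return totals

raw = [
    {"user": "a", "amount": "10"},
    {"user": "a", "amount": None},   # malformed row, filtered out in Silver
    {"user": "b", "amount": "5"},
]
gold = gold_aggregate(silver_clean(bronze_ingest(raw)))
print(gold)  # {'a': 10.0, 'b': 5.0}
```

Each layer only reads from the one before it, which is what makes the pattern easy to reprocess and monitor layer by layer.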
Requirements
- 4+ years of professional data engineering experience building production-grade data platforms.
- Expert-level Python and Apache Spark, including JVM tuning, memory management, and workload optimization.
- Deep expertise in modern data architecture and data modeling for scalability.
- Proven AWS (primary) or GCP experience with EMR or Databricks.
- Experience with orchestration tools: Airflow, AWS Step Functions, or Prefect.
- Must be located in the EST or CST time zone; unrestricted US work authorization required; no sponsorship available.
Benefits
- Medical, dental, vision, and basic life insurance.
- PTO and company-paid holidays.
- Retirement programs.
- 1% charitable giving program.