Added: 6 days ago
Type: Full time
Related skills
aws, sql, python, kafka, pyspark

Description
- Design and implement batch and streaming pipelines with PySpark, SQL, and AWS.
- Translate high-level direction into scoped, high-quality technical solutions.
- Collaborate with product, design, analysts, and engineers to ship data features.
- Contribute to architectural discussions and system improvements.
- Ensure pipeline reliability and data quality with tests and monitoring.
- Investigate and resolve production issues for owned services.
Requirements
- 6–7+ years of experience in data engineering with hands-on execution.
- Proficiency with PySpark, Python, and SQL, including debugging and optimization.
- Experience building large-scale pipelines (TB+), with Kafka exposure.
- Strong data modeling, relational DBs, and NoSQL knowledge.
- Experience with AWS services such as EMR, Glue, Athena, Lambda, and S3.
- Iceberg or lakehouse technologies experience (nice to have).
Benefits
- Hybrid work with hub locations in Denver, San Francisco, Nashville, and Santiago.
- In-office perks: lunch, commuter stipend, snacks.
- Relocation stipend may be available.
Relocation support