Related skills
AWS, SQL, Python, data modeling, Spark

Description
- Design, build, and maintain scalable data pipelines ingesting data from multiple sources
- Develop and operate cloud-based data lake architectures optimized for reliability, performance, and cost efficiency
- Model data to support analytics and reporting use cases, including time-series and event-driven patterns
- Ensure data quality, consistency, and correctness through testing, monitoring, and operational best practices
- Apply governance-aware data practices, including access control and auditability
- Partner with software engineers to translate business workflows into durable data assets
Requirements
- 4+ years of professional experience in data engineering or data platform roles
- Strong SQL skills with production experience
- Hands-on experience with large-scale data processing using Spark or similar distributed processing frameworks
- Experience designing analytics-ready data models, such as fact and dimension models or time-based and event-driven schemas
- Experience operating data pipelines in production, including monitoring, backfills, and schema evolution
- Familiarity with cloud-based data platforms and modern data lake architectures
Benefits
- Remote-first culture that provides flexibility and balance
- Professional development opportunities, including training, mentorship, and career pathing
- Comprehensive health, dental, and vision insurance starting day one
- Short- and long-term disability and basic life insurance at no cost to you
- 401(k) plan with a 4% match to help secure your future
- Flexible PTO and a supportive culture that values work-life balance