Related skills: Redshift, Java, AWS, SQL, Python
📋 Description
- Design, build, and own scalable backend services and data platforms for analytics.
- Write production-grade Java, Python, or PySpark code with strong engineering practices.
- Develop and operate data ingestion and transformation systems on AWS (Glue, Lambda, DMS, Athena, RDS, Redshift).
- Build data products with clear contracts and SLAs, aligned with data mesh principles.
- Create and manage ETL/ELT pipelines for batch and near-real-time analytics.
- Improve observability, data quality, and reliability of data systems.
🎯 Requirements
- BA/BS in Computer Science or related field, or equivalent experience.
- 7+ years of professional software engineering experience.
- Strong hands-on experience with Java or Python in production systems.
- Strong fundamentals in data structures, algorithms, system design.
- Strong proficiency in SQL for data querying and validation.
- Experience with AWS data/compute platforms: Glue, Lambda, Athena, DMS, RDS, Redshift.
🎁 Benefits
- Mission-driven work powering government.
- AI-driven innovation in the public sector.
- Global team of 800+ employees.
- Open offices across multiple cities including Pune.
- Performance-based culture with growth and internal promotions.
- Collaborative, fast-paced environment.