Related skills
aws snowflake postgresql airflow spark

📋 Description
- Join the Pune office team of 4-6 engineers to scale Pattern's Data Platform.
- Build and maintain Caterpillar, Pattern's open-source data platform.
- Work on ingestion and transformation (ETL) pipelines.
- Optimize pipelines for high throughput and low latency.
🎯 Requirements
- 6-10 years of data engineering experience, including designing ingestion systems.
- Experience building data ingestion and transformation (ETL) pipelines.
- Proficiency in a data-pipeline language (Go preferred; Python/Scala a plus).
- Strong SQL (Postgres/MySQL), including advanced queries.
- Strong AWS skills: S3, EMR/Glue, Athena, Lambda, EC2, RDS, IAM.
- Optimize large-scale pipelines: parallel processing/MPP.
- DevOps/SRE familiarity; IaC (Terraform/CloudFormation).
- Automated tests; strong docs and clean code.