Related skills
AWS, SQL, S3, Python, Spark

Description
- Design and implement batch and real-time ingestion pipelines on AWS
- Ingest data from databases, APIs, files, streams, and SaaS applications
- Build fault-tolerant, scalable ingestion frameworks
- Handle CDC (change data capture) from transactional systems (see the sketch after this list)
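To illustrate the kind of CDC ingestion work described above, here is a minimal sketch that reads change events from an AWS Kinesis stream and lands them raw in S3 with boto3. The stream name, bucket, key layout, and shard ID are assumptions for illustration only; the posting does not specify a concrete setup.

```python
"""Minimal CDC ingestion sketch (illustrative only).

Assumes a Kinesis stream "orders-cdc" carrying JSON change events and an
S3 bucket "example-data-lake"; both names are hypothetical.
"""
import uuid

import boto3

kinesis = boto3.client("kinesis")
s3 = boto3.client("s3")

STREAM = "orders-cdc"          # hypothetical CDC stream
BUCKET = "example-data-lake"   # hypothetical landing bucket


def ingest_batch(shard_id: str = "shardId-000000000000") -> None:
    """Read one batch of change events from Kinesis and land it raw in S3."""
    iterator = kinesis.get_shard_iterator(
        StreamName=STREAM,
        ShardId=shard_id,
        ShardIteratorType="TRIM_HORIZON",
    )["ShardIterator"]

    records = kinesis.get_records(ShardIterator=iterator, Limit=500)["Records"]
    if not records:
        return

    # One JSON-lines object per batch keeps the raw landing zone append-only.
    body = "\n".join(r["Data"].decode("utf-8") for r in records)
    s3.put_object(
        Bucket=BUCKET,
        Key=f"bronze/orders_cdc/{uuid.uuid4()}.jsonl",
        Body=body.encode("utf-8"),
    )


if __name__ == "__main__":
    ingest_batch()
```

In a production pipeline this loop would track shard iterators and checkpoints (for example via Kinesis Client Library or a managed consumer) rather than re-reading from TRIM_HORIZON each run; the sketch only shows the read-and-land shape of the job.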
๐ฏ Requirements
- 5+ years in Data Engineering focused on ingestion and pipelines
- Strong AWS data services expertise
- Python (mandatory); SQL required
- Spark / PySpark for large-scale ingestion
- JSON, Avro, Parquet, CSV data formats
- Batch, streaming, and API-based ingestion; CDC patterns
- Data lakes on S3 and the Medallion architecture (Bronze/Silver/Gold); see the PySpark sketch after this list
- Nice-to-have: Snowflake, Redshift, Databricks; dbt
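As a sketch of the Medallion pattern named above, the PySpark job below promotes raw JSON change events from a Bronze prefix to a typed, deduplicated Silver table in Parquet. The S3 paths, schema, and column names (order_id, event_ts, amount) are assumptions for illustration, not part of the posting.

```python
"""Bronze-to-Silver PySpark sketch (illustrative only).

Paths and columns are hypothetical; the posting names the Medallion
pattern but no concrete dataset.
"""
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

# Bronze: raw JSON-lines change events landed by the ingestion job.
# Use "s3://" on EMR or "s3a://" with open-source Spark + hadoop-aws.
bronze = spark.read.json("s3a://example-data-lake/bronze/orders_cdc/")

# Silver: typed, deduplicated records, one row per (order_id, event_ts).
silver = (
    bronze
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .withColumn("event_date", F.to_date("event_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropDuplicates(["order_id", "event_ts"])
)

(
    silver.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3a://example-data-lake/silver/orders/")
)
```

A Gold layer would typically aggregate Silver tables into business-level marts; that step follows the same read-transform-write shape and is omitted here.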
Benefits
- Great career growth and development opportunities in a global organization
- A flexible approach to work