Added: 6 days ago
Location:
Type: Full time
Salary: Not provided
Related skills: Terraform, AWS, Snowflake, PostgreSQL, Python
Description
- Design scalable batch and streaming data pipelines (ETL/ELT).
- Develop infrastructure, integrations and APIs for data storage and delivery.
- Collaborate with clients and domain experts to define data needs.
- Work with data scientists and ML engineers on end-to-end data products.
- Build end-to-end pipelines for impression data and recommendations (see the PySpark sketch after this list).
- Explore data lakes, semantic search and ML pipelines.
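To make the pipeline work concrete, here is a minimal PySpark batch ETL sketch in the spirit of the impression-data bullet above. The S3 paths, column names, and app name are hypothetical placeholders, not details from this posting:

```python
# Minimal batch ETL sketch (extract, transform, load) in PySpark.
# All paths and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("impressions-etl").getOrCreate()

# Extract: read raw impression events from a hypothetical landing zone.
raw = spark.read.json("s3://example-bucket/raw/impressions/")

# Transform: drop malformed rows and derive a partition date.
cleaned = (
    raw.filter(F.col("user_id").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
)

# Load: write partitioned Parquet for downstream consumers.
cleaned.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/impressions/"
)
```

A streaming variant of the same step would read with `spark.readStream` and write with `writeStream` plus a checkpoint location; the transform logic stays largely the same.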
Requirements
- Hands-on data engineering experience.
- SQL/database experience: PostgreSQL, DynamoDB, Redshift, BigQuery.
- Data platforms/lakes/warehouses: Databricks, Snowflake.
- Python required, including PySpark and Pandas; Java or TypeScript is a bonus.
- Cloud providers: AWS, Azure, or GCP.
- Infrastructure-as-code: Terraform, CloudFormation, or CDK (see the sketch below).
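As an illustration of the infrastructure-as-code requirement, here is a minimal AWS CDK sketch in Python (CDK being one of the listed options). The stack name, construct ID, and resource choice are hypothetical and only show the general shape, not this team's actual infrastructure:

```python
# Minimal AWS CDK (v2) app defining one S3 bucket as a raw-data landing zone.
# Stack name, construct ID, and bucket settings are hypothetical examples.
from aws_cdk import App, RemovalPolicy, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class DataLakeStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # A versioned bucket as a landing zone for raw pipeline data;
        # RETAIN keeps the data if the stack is deleted.
        s3.Bucket(
            self,
            "RawDataBucket",
            versioned=True,
            removal_policy=RemovalPolicy.RETAIN,
        )

app = App()
DataLakeStack(app, "DataLakeStack")
app.synth()
```

Running `cdk synth` against an app like this emits a CloudFormation template, which is how CDK relates to the CloudFormation option in the same bullet; a Terraform equivalent would declare the same bucket in HCL instead.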
Benefits
- Influence how you work: teams choose their own approach and technologies.
- Strong supportive community.
- Team that wants you to succeed.
- Sustainable work-life balance and perks like car share and family support.
- Internal training, events and Cloud Academy opportunities.
- A commitment to DEI and an inclusive culture.