Related skills
bigquery, python, kubernetes, ai, data pipelines

Description
- End-to-end pipeline engineering: design, build, and deploy scalable pipelines that ingest data from APIs.
- Model and transform data to unify sources into high-fidelity data products.
- Architectural improvements: modernize legacy processes into scalable data architectures.
- Operational excellence: observability, automated testing, and self-healing features.
- Investigate complex issues, performing deep-dive analyses to identify root causes.
- Drive velocity by applying AI and automation to speed up development.
Requirements
- Python for large-scale data processing.
- Cloud data technologies and streaming systems (e.g., Kafka, Pub/Sub).
- Cloud databases such as BigQuery and Spanner; experience with GCP, AWS, or Azure.
- Hands-on with Kubernetes in production.
- Experience designing data pipelines and ETL processes.
- Strong English communication and stakeholder-management skills.
Benefits
- Flexible time off: autonomy to manage work-life balance.
- Career development: workshops, frameworks, and training programs.
- Impactful work: shape products used by 85,000+ users.
- Mobility options: mobility budget or company car.
- Net allowance: support for home office expenses.
- Comprehensive health insurance for you and dependents.