Related skills
bigquery, sql, airflow, kafka, spark

Description
- Build, operate, and improve production data systems.
- Translate business needs into batch and streaming pipelines.
- Build data pipelines with Beam, Spark, or Flink.
- Improve observability and data quality across pipelines.
- Apply AI-enabled development to speed delivery.
- Collaborate with product, analytics, and business stakeholders.
Requirements
- Bachelor's degree in CS/Engineering/Math or related field.
- Data engineering fundamentals: batch and streaming.
- Experience with Beam, Spark, or Flink, including runners such as Dataflow.
- SQL and data modeling; BigQuery/Snowflake/Redshift.
- Kafka and event-driven architectures on GCP/AWS/Azure.
- Workflow orchestration with Airflow, Cloud Composer, or Prefect.
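
To give a concrete sense of the orchestration work this role involves, here is a minimal sketch of a two-task Airflow DAG, assuming Airflow 2.x; all names (daily_events_load, extract, load) are hypothetical, not part of this posting.

```python
# Illustrative sketch only: a minimal batch workflow of the kind this role would own.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull a day's events from an upstream source.
    print("extracting events")


def load():
    # Placeholder: load the extracted batch into the warehouse.
    print("loading into warehouse")


with DAG(
    dag_id="daily_events_load",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Load runs only after extraction succeeds.
    extract_task >> load_task
```
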
Benefits
- Global, inclusive workplace and growth opportunities.
- Collaborative, ownership-driven culture.
- AI-enabled tooling to improve throughput responsibly.