Related skills
BigQuery, Snowflake, Airflow, Spark, Flink

Description
- Own the architecture for large-scale batch and streaming data pipelines (Beam/Spark/Flink) running on Dataflow.
- Lead design reviews, architectural decisions, and cross-team delivery; mentor engineers.
- Bring an AI-first mindset: establish guardrails for responsible AI and apply AI to data problems.
- Lead data warehouse/lakehouse design, data modeling standards, and cost optimization.
- Design event-driven streaming architectures with schema evolution and idempotency.
- Establish orchestration with Airflow, Dagster, or Prefect; CI/CD and observability.
Requirements
- Strong track record delivering production-grade data pipelines end-to-end.
- Distributed data processing experience: Spark, Beam, Flink.
- Cloud data warehouses: BigQuery, Snowflake, Redshift, Databricks.
- Hands-on with event streaming platforms; schema management, replay, backfill.
- Orchestration tools: Airflow, Dagster, Prefect; CI/CD and observability.
- Data governance and compliance: PII handling, access controls, auditing.
- Collaboration with ML/DS teams to productionize features and monitoring.
Benefits
- Global team with colleagues across 190 countries.
- Equal opportunity employer; commitment to diversity and inclusion.
- Opportunities for growth, learning, and professional development.
- Collaborative, accountable working culture that values impact.