Related skills
sql python dbt airflow kafka
Description
- Own data architecture end-to-end; define storage formats, compute patterns, and SLAs.
- Build and run batch data pipelines for notifications with latency guarantees.
- Design canonical data models for analytics, ML, and production.
- Enforce data quality with tests, lineage, monitoring, and reconciliation.
- Automate workflows across services, warehouses, and external systems.
- Enable insights and experimentation with high-quality, self-healing data assets.
Requirements
- Production-grade data pipelines with SLAs, monitoring, and alerting.
- Deep SQL expertise: complex data models, dependency management, and query performance.
- Proficiency in Python or SQL; experience with CI/CD and infrastructure-as-code.
- Ingestion: Kafka, Debezium; transformation: dbt, Spark; orchestration: Dagster, Airflow.
- Cloud data warehouses: Snowflake, BigQuery, or Redshift.
- Experience with large-scale datasets and analytics/ML use cases; strong systems thinking.
Benefits
- Generous holiday and time-off policy
- Health insurance options (medical, dental, vision)
- Work-from-home support
- Home office setup allowance
- Monthly allowance for cell phone and internet
- Retirement: 401(k) with employer match and international pension plans