Related skills
SQL, Python, dbt, Apache Spark, Apache Airflow

Description
- Join Data & AI Engineering to build pipelines powering Cube's regulatory platform.
- Design, build, and maintain data pipelines ingesting structured and unstructured content.
- Model data into clean, analytics-ready assets for AI workloads.
- Work in an Azure-native environment with data architects and AI teams.
- Support platform consolidation and both greenfield and legacy integration.
Requirements
- 3+ years of data engineering experience.
- Strong SQL and Python; production-quality code.
- Hands-on experience building data pipelines in cloud environments.
- ETL/ELT patterns and orchestration tools (Airflow, dbt, ADF).
- Experience with structured and unstructured data.
- Knowledge of data quality principles.
- Comfort with version control, CI/CD, and engineering-grade delivery.
Benefits
- Global brand trusted by top financial institutions.
- Rapid growth and evolving products.
- Inclusive, diverse, and purpose-driven culture.
- Opportunity to shape architecture across platforms.
- AI/ML collaboration with leading engineers.