Related skills
GitLab · ETL · Python · Pandas · CI/CD

📋 Description
- Own the operational health of the Performance Team’s data platform
- Monitor and maintain ETLs and data pipelines for correctness, timeliness, and stability
- Design and maintain data quality checks, reports, and alerting for performance data
- Investigate and resolve data issues and pipeline failures, driving each to root cause
- Own and troubleshoot GitLab CI/CD pipeline issues related to performance data workflows
- Monitor workloads and resource usage; identify bottlenecks and imbalances
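The data-quality-check responsibility above can be sketched with a small Pandas routine. This is a hypothetical illustration, not the team's actual tooling: the column names (`run_id`, `metric`, `value`) and the three checks (completeness, validity, uniqueness) are assumptions chosen for the example.

```python
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable failures; an empty list means the batch passed."""
    failures = []
    # Completeness: key columns must not contain nulls
    for col in ("run_id", "metric", "value"):
        n_null = int(df[col].isna().sum())
        if n_null:
            failures.append(f"{col}: {n_null} null values")
    # Validity: performance metric values must be non-negative
    n_negative = int((df["value"] < 0).sum())
    if n_negative:
        failures.append(f"value: {n_negative} negative entries")
    # Uniqueness: expect one row per (run_id, metric) pair
    n_dupes = int(df.duplicated(subset=["run_id", "metric"]).sum())
    if n_dupes:
        failures.append(f"{n_dupes} duplicate (run_id, metric) rows")
    return failures

# Example batch with one duplicate row and one invalid value
batch = pd.DataFrame({
    "run_id": [1, 1, 2],
    "metric": ["latency_ms", "latency_ms", "latency_ms"],
    "value": [12.5, 12.5, -3.0],
})
print(run_quality_checks(batch))
```

In an alerting setup, a non-empty return value from a check like this would typically fail the pipeline job or page the on-call engineer.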
🎯 Requirements
- 3+ years of experience in software engineering or data engineering with an operational or production focus
- Bachelor’s degree in Computer Science or related field strongly preferred
- Proficiency in Python, with hands-on experience using PySpark and Pandas
- Experience supporting production ETL or data processing systems
- Familiarity with Git-based development workflows and CI/CD systems (GitLab preferred)
- Strong analytical skills and attention to detail
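For context on the GitLab CI/CD requirement, a data-workflow pipeline in this role might resemble the following minimal `.gitlab-ci.yml` fragment. The stage and job names and the script commands are illustrative assumptions, not the team's actual configuration:

```yaml
# Hypothetical sketch of a scheduled ETL pipeline with a quality gate
stages:
  - extract
  - transform
  - quality_check

run_etl:
  stage: transform
  image: python:3.11
  script:
    - pip install pandas pyspark
    - python etl/run_pipeline.py   # assumed entry point

quality_gate:
  stage: quality_check
  image: python:3.11
  script:
    - python etl/check_quality.py  # fails the job (nonzero exit) on bad data
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
```

Troubleshooting such pipelines (stuck runners, failed quality gates, schedule drift) is the kind of operational work the Description section outlines.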
🎁 Benefits
- Discretionary bonus and benefits, including paid leave and insurance
- Benefits information available on the Benefits - US page