Related skills: Snowflake, SQL, Python, Databricks, dbt
📋 Description
- Design, implement, and maintain scalable batch/real-time data pipelines.
- Build ETL/ELT workflows with Spark, Databricks, dbt, Snowflake.
- Translate platform architecture into executable data solutions.
- Automate data orchestration for continuous processing and delivery.
- Ensure data security, reliability, and performance in healthcare data systems.
- Collaborate with governance/architecture teams to meet regulatory needs.
🎯 Requirements
- Bachelor's or Master's in CS/Engineering or related field.
- 6+ years in data engineering, with 3+ in cloud-native/large-scale data systems.
- Healthcare data experience preferred, ideally in the provider space; exposure to EHR systems.
- Experience designing pipelines that move healthcare data, with built-in data quality checks.
- Proficient in Python, SQL, and Apache Spark.
- Familiarity with FHIR, HL7, EDI, and healthcare terminologies.