Related skills
Java, Snowflake, Python, Go, Scala
📋 Description
- Design, build, and operate scalable data ingestion pipelines into Block's Lakehouse.
- Develop Kafka-to-Iceberg connectors and data loading frameworks for low-latency delivery to Snowflake and Databricks.
- Modernize Block's CDC platform with cloud-native replication and Iceberg ingestion.
- Build self-service tooling and observability to onboard, monitor, and troubleshoot data pipelines.
- Collaborate with data engineering and product teams to define data contracts and reduce coupling.
- Design and implement PII detection, masking, and privacy-compliant data handling in accordance with company policies.
🎯 Requirements
- 8+ years of experience in software engineering or data platform development.
- Proficiency in Java, Python, Scala, or Go, with experience developing data frameworks.
- Hands-on experience with streaming systems such as Apache Kafka and Kafka Connect.
- Solid understanding of CDC, database replication, and lakehouse architectures.
- Experience with Iceberg or Delta Lake formats.
- Experience with cloud ecosystems (AWS, GCP, or Azure) and IaC tools.
🎁 Benefits
- Remote work, health insurance, flexible time off, retirement plans, and family planning benefits.
- Inclusive interview process with accommodations for applicants.
- Access to Block’s employee benefits.