Related skills
SQL · Databricks · Apache Spark · Data Pipelines · Data Warehousing
Description
- Design, build, and maintain scalable data pipelines (Spark, Databricks, Delta Lake)
- Lead end-to-end projects from implementation to monitoring
- Research new SaaS platforms and APIs to identify security detection opportunities
- Optimize data infrastructure for performance, data integrity, and lifecycle
- Mentor engineers and raise technical standards across the team
Requirements
- 7+ years in data engineering with large-scale data systems design/build
- Mastery of Apache Spark (PySpark preferred) and distributed processing
- Expert-level SQL, data modeling, data warehousing, and data lake architecture
- Direct cybersecurity, threat intel, or security domain experience
- Proven experience delivering production-grade data solutions on Databricks
Benefits
- Various health plans
- Vacation and sick time
- Parental leave options
- Retirement options
- Education reimbursement
- In-office perks, and more