Related skills
Java, Python, Scala, Apache Spark, PySpark
Description
- Build data lakes and large-scale ingestion pipelines
- Work on batch/grid, micro-batching, and stream processing
- Deliver elastic, scalable data solutions for clients
- Collaborate in cross-functional teams and communicate with stakeholders
- Be proactive in training and career development
๐ฏ Requirements
- Big data infra: data lakes, warehouses, ingestion pipelines
- RDBMS, cloud storage, HDFS, NoSQL
- Python/Scala/Java programming
- Spark, PySpark, BigQuery
- Parquet/Avro, columnar storage, partitioning
- TDD/BDD, SOLID, Clean Code
Benefits
- Highly competitive salary
- Private healthcare for you and family
- Company pension and equity plan
- 27 days leave + bank holidays
- Sabbatical options at 5 and 10 years
- 5 days study leave