Related skills
java, sql, python, kubernetes, hadoop

Description
- Architect, develop, and deploy our Big Data environment (Kafka, Hadoop, Dremio, etc.)
- Build, deploy, and monitor our data processing pipelines (Java, Python, Spark, Flink)
- Collaborate with development teams on data modeling, data ingestion, and capacity planning
- Work with users to ensure data integrity and availability
- Act as a Big Data SME and consult on data-related questions from users and developers
Requirements
- 5+ years in a mature data engineering environment
- 3+ years building Kafka streaming apps and/or Kafka clusters
- 2+ years building apps/pipelines with Big Data backends (S3, HDFS, Databricks, Iceberg)
- Experience with Spark, Flink, or similar tools
- Strong Java, Python, and SQL development skills
- Experience with Python-based data-science toolkits
- Hands-on experience with Kubernetes and Docker
- Experience monitoring with Prometheus, Grafana, Alertmanager, Alerta, and OpsGenie
- Strong statistical analysis skills
- Root-cause analysis and troubleshooting
- Unix scripting (bash, Python)