Related skills
Snowflake, Scala, Hadoop, Kafka, Spark
📋 Description
- Build scalable, fault-tolerant big data platform for analytics.
- Develop backend services using Java, REST APIs, JDBC, and AWS.
- Build and maintain pipelines with Spark, Hadoop, Kafka, Snowflake.
- Architect real-time data processing workflows and automation.
- Lead projects to develop data processing/reporting features.
- Design GenAI analytics agents using LangChain, LlamaIndex, and integrate with OpenAI/Claude/Mistral.
🎯 Requirements
- 6+ years of Java/backend development experience.
- Strong CS fundamentals: data structures, algorithms, architecture.
- Hands-on with Big Data tools: Scala, Spark, Kafka, Hadoop, Snowflake.
- GenAI experience: LLMs (OpenAI/Claude/Mistral), LangChain, prompts, embeddings, RAG.
- Build/deploy scalable production AI/data systems; end-to-end feature ownership.
- Bachelor’s degree in engineering or equivalent.
🎁 Benefits
- Hybrid work schedule: 3 days in office, 2 remote.
- Healthcare insurance.
- Paternity/maternity leave.
- Broadband reimbursement.
- Kitchen snacks, drinks, and catered lunches.
- Collaborative, global team with growth opportunities.