Type: Full time
Related skills: Java, AWS, Python, Hadoop, Kafka

Description
- Build and administer data workflows in IMC's modern big data environment.
- Develop in-house data tools using Python and Java.
- Build data pipelines with Hadoop, Spark, Kafka, and AWS.
- Work with trading and research teams to develop data solutions.
- Improve the performance of financial analytics on the big data stack.
- Partial telecommuting permitted.
Requirements
- Bachelor's degree in CS/engineering/IT or quantitative field.
- Six months of data engineering experience in a high-throughput big data ecosystem.
- Build, administer, and troubleshoot data workflows with Hadoop, Spark, and SQL.
- Develop streaming data apps using Kafka.
- Develop data solutions in Docker or Kubernetes using Java or Python.
- Unix scripting (Bash, tcsh, zsh, Python).
Benefits
- Discretionary bonus and benefits package.
- Paid leave and insurance.
- Collaborative, high-performance culture.
- Global team across the US, Europe, and Asia Pacific.