Related skills: AWS, Kubernetes, Databricks, Hadoop, Airflow

Description
- Build and maintain high-scale web scraping operations.
- Create data pipelines to process and enrich raw data.
- Design and build large-scale data infrastructure and production systems.
- Write PySpark applications for large datasets.
- Develop high-performance, Dockerized microservices.
- Implement solutions in the AWS cloud and monitor data pipelines.
Requirements
- 5+ years of server-side experience in C#, Java, or Python.
- BS degree in Computer Science or equivalent.
- Experience with Hadoop, Spark, Databricks, Airflow.
- Experience with AWS or GCP; familiarity with Docker and Kubernetes.
- Experience with web scraping or web technologies is a plus.
- Strong communication and teamwork skills, with the ability to prioritize and work independently.
Benefits
- Hybrid work model with flexible office time.
- Competitive compensation and benefits package.
- Open culture with opportunities for career growth.
- Diversity and inclusion as a core value.