Related skills
AWS, SQL, Python, Airflow, PySpark

Description
- Join Fam's data team as a Data Engineer (SDE-1).
- Build a high-performance Data Lakehouse with sub-minute latency.
- Own real-time ingestion and CDC flows from RDBMS to Lakehouse.
- Optimize distributed query engines and accelerate AI-enabled development.
- Develop scalable ETL/ELT pipelines in PySpark and Flink.
- Collaborate with product and analytics teams to drive data decisions.
Requirements
- Experience: 1–3 years in Data Engineering.
- Coding: Python, PySpark, and SQL.
- Go, Java, or Scala is a plus.
- Cloud: AWS (S3, EKS, MSK) and Infrastructure-as-Code.
- Orchestration: Airflow or Temporal.
- AI-native: use AI tools to write and test code.
Benefits
- Relocation assistance to make your move seamless.
- Free office meals (lunch & dinner).
- Generous leave policy, including birthday, period, and paternity/maternity leave.
- Salary advance and loan policies for financial support.
- Quarterly rewards and referral programs with incentives.
- Comprehensive health insurance for you and your family.