Related skills
Snowflake, SQL, Python, Airflow, Spark
Description
- Hybrid role based in Bangalore, Pune, or Mohali.
- Design unstructured-data pipelines for Vector/Graph DB ingestion (see the sketch after this list).
- Collaborate with Data Platform and engineering teams to shape data infrastructure.
- Build in-house products that enable scalability and operational excellence.
- Develop large-scale data pipelines using modern cloud/big data architectures.
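To make the ingestion bullet above concrete, here is a minimal, illustrative sketch of chunk-embed-upsert into a vector store. All names are invented for illustration: `embed` is a stdlib placeholder (a real pipeline would call an embedding model), and `InMemoryVectorStore` stands in for an actual Vector DB client such as Pinecone, Weaviate, or pgvector. This is not the team's actual stack.

```python
# Hypothetical chunk -> embed -> upsert pipeline for unstructured text.
import hashlib
from dataclasses import dataclass, field

def chunk(text: str, size: int = 200) -> list[str]:
    """Split raw text into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(chunk_text: str, dim: int = 8) -> list[float]:
    """Placeholder embedding: a deterministic vector derived from a hash.
    A real pipeline would call an embedding model here."""
    digest = hashlib.sha256(chunk_text.encode()).digest()
    return [b / 255.0 for b in digest[:dim]]

@dataclass
class InMemoryVectorStore:
    """Stand-in for a Vector DB client's upsert interface."""
    rows: dict = field(default_factory=dict)

    def upsert(self, key: str, vector: list[float], metadata: dict) -> None:
        self.rows[key] = (vector, metadata)

def ingest(doc_id: str, text: str, store: InMemoryVectorStore) -> int:
    """Chunk, embed, and upsert one document; return the number of chunks."""
    chunks = chunk(text)
    for i, c in enumerate(chunks):
        store.upsert(f"{doc_id}:{i}", embed(c), {"doc_id": doc_id, "chunk": i})
    return len(chunks)

if __name__ == "__main__":
    store = InMemoryVectorStore()
    print(ingest("doc-1", "unstructured text " * 50, store), "chunks ingested")
```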
๐ฏ Requirements
- Foundational DBMS concepts: normalization, ACID, transactions.
- Understanding of distributed processing frameworks such as Spark, Hadoop, or Flink.
- Proficiency in Python.
- Experience with SQL, plus Korn shell scripting or Scala.
- Experience with orchestration tools such as Airflow, Prefect, or Dagster, and familiarity with LangChain and AutoGen (a minimal DAG sketch follows this list).
- Experience with Ray or Dask for scalable Python in AI/ML contexts.
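As a rough illustration of the orchestration skills listed above, here is a minimal Airflow DAG using the TaskFlow API (Airflow 2.4+). The DAG name, schedule, and task bodies are placeholders invented for this sketch, not part of the posting; the task bodies mark where extraction and Vector DB loading would go.

```python
# Minimal illustrative Airflow DAG: extract -> load, wired with TaskFlow.
from datetime import datetime

from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def unstructured_ingestion():
    @task
    def extract() -> list[str]:
        # Placeholder: pull raw documents from object storage.
        return ["doc one", "doc two"]

    @task
    def load(docs: list[str]) -> int:
        # Placeholder: chunk, embed, and upsert into the Vector DB.
        return len(docs)

    load(extract())

unstructured_ingestion()
```

In a real deployment, the same extract/load structure could be expressed in Prefect or Dagster; the point here is only the dependency wiring and scheduling that orchestrators provide.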
Benefits
- Various health plans
- Time off plans for vacation and sick time
- Parental leave options
- Retirement options
- Education reimbursement
- In-office perks, and more