Related skills
SQL · Python · Databricks · TensorFlow · Spark

📋 Description
- Design, develop, and deploy scalable AI data solutions.
- Build data pipelines, preprocessing workflows, and feature engineering.
- Collaborate with product managers, developers, and stakeholders.
- Deploy in secure, containerized environments with CI/CD.
- Leverage cloud data platforms such as Databricks, Palantir, and AWS.
🎯 Requirements
- 4+ years in applied data science or ML engineering.
- Proficient in Python, SQL, and distributed data frameworks (Spark/Databricks).
- Experience deploying ML models with scikit-learn, TensorFlow, XGBoost, MLflow.
- Experience with MLOps, API development, and secure cloud environments.
- Strong skills in data validation, model testing, and performance evaluation.
- Data visualization with Tableau, Plotly, or Matplotlib.
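To illustrate the model-deployment and evaluation skills listed above, here is a minimal scikit-learn sketch (scikit-learn is one of the libraries named in the requirements; the synthetic dataset and model choice are purely illustrative, not part of the role's actual stack):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic tabular data standing in for the output of a data pipeline
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Train a simple baseline model and evaluate it on a held-out split
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(f"holdout accuracy: {acc:.2f}")
```

In practice, a workflow like this would be wrapped in experiment tracking (e.g. MLflow) and scaled out on Spark or Databricks, per the requirements above.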