
Mid Level Data Scientist (Azure)

Added: 25 days ago
Type: Full time
Salary: Not specified


Related skills

Azure

We are:

Wizeline is a global AI-native technology solutions provider that develops cutting-edge, AI-powered digital products and platforms. We partner with clients to leverage data and AI, accelerating market entry and driving business transformation. As a global community of innovators, we foster a culture of growth, collaboration, and impact. With the right people and the right ideas, there's no limit to what we can achieve.

Are you a fit?

Sounds awesome, right? Now, let's make sure you're a good fit for the role.

Key Responsibilities

  • Design, develop, and optimize Databricks notebooks to process large volumes of data on Azure.
  • Translate business rules into PySpark code, developing robust and scalable solutions.
  • Read and process data from various sources, primarily Delta Lake tables.
  • Apply complex transformations on Spark DataFrames, including:
      • Data cleaning and preparation.
      • Creation of new columns and derivation of metrics.
      • Use of advanced functions such as Window Functions.
      • Execution of different types of joins and data combinations.
  • Write and update results in Delta tables.
  • Refactor and optimize existing notebooks to improve performance and readability.
  • Manage version control and notebook integration using Azure DevOps and Git.
  • Actively collaborate in code reviews through Pull Requests.

Must-have Skills

  • Expert-level experience in Azure Databricks
  • Solid experience with PySpark and Spark DataFrames
  • Strong hands-on expertise in Delta Lake (ACID transactions, schema evolution, optimization techniques)
  • Proficient in Azure DevOps (Repos, Pipelines, CI/CD workflows)
  • Strong Git skills (branching strategies, pull requests, code review collaboration)

Nice-to-have Skills

  • AI Tooling Proficiency: Leverage one or more AI tools to optimize and augment day-to-day work, including drafting, analysis, research, or process automation. Provide recommendations on effective AI use and identify opportunities to streamline workflows.
  • Solid experience in data manipulation using PySpark.
  • Knowledge of cloud-based architectures, ideally Azure.
  • Experience working with collaborative notebooks and version control in data environments.
  • Ability to translate business processes into reproducible technical solutions.

What we offer:

  • A High-Impact Environment
  • Commitment to Professional Development
  • Flexible and Collaborative Culture
  • Global Opportunities
  • Vibrant Community
  • Total Rewards

*Specific benefits are determined by the employment type and location.

Find out more about our culture here.

