
Software Engineer, Data Acquisition

Added: 10 days ago
Type: Full time
Salary: Not specified


About Mistral

At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.

We democratize AI through high-performance, optimized, open-source, and cutting-edge models, products, and solutions. Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments. Our offerings include le Chat, the AI assistant for life and work.

We are a dynamic, collaborative team passionate about AI and its potential to transform society.

Our diverse workforce thrives in competitive environments and is committed to driving innovation. Our teams are distributed across France, the USA, the UK, Germany, and Singapore. We are creative, low-ego, and team-spirited.

Join us to be part of a pioneering company shaping the future of AI. Together, we can make a meaningful impact.

See more about our culture at https://mistral.ai/careers.

Role Summary

We are looking for a skilled and motivated Web Crawling and Data Indexing Engineer to join our dynamic engineering team. The ideal candidate should have a solid background in web scraping, data extraction, and indexing, with experience using advanced tools and technologies to collect and process large-scale data from diverse web sources.

What you will do

As a Software Engineer on the Data Acquisition team, you will:

• Develop and maintain web crawlers using Python libraries such as Beautiful Soup to extract data from target websites.

• Use headless browsing techniques (for example, headless Chrome driven through the DevTools Protocol) to automate and optimize data collection.

• Collaborate with cross-functional teams to identify, scrape, and integrate data from APIs to support business objectives.

• Create and implement efficient parsing patterns using regular expressions, XPath, and CSS selectors to ensure accurate data extraction.

• Design and manage distributed job queues using technologies such as Redis, Kubernetes, and Postgres to handle large-scale data processing tasks.

• Develop strategies to monitor and ensure data quality, accuracy, and integrity throughout the crawling and indexing process.

• Continuously improve and optimize the existing web crawling infrastructure to maximize efficiency and adapt to new challenges.
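The extraction step described above can be sketched briefly. Since this posting names Beautiful Soup, the same pattern is shown here with Python's standard-library html.parser as a dependency-free stand-in; the class name and HTML snippet are illustrative, not Mistral's actual pipeline.

```python
# Minimal link-extraction sketch using the stdlib html.parser
# (a stand-in for Beautiful Soup; names and HTML are illustrative).
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href attributes from <a> tags as the parser streams HTML."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

html = '<p>Docs: <a href="/a">A</a> and <a href="/b">B</a></p>'
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # collected hrefs, in document order
```

In a real crawler the same handler would also normalize relative URLs against the page's base URL before enqueueing them.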

About you

- Core Programming & Web Technologies

• Proficiency in Python, Java, or C++.

• Strong understanding of HTTP/HTTPS protocols and web communication.

• Knowledge of HTML, CSS, and JavaScript for parsing and navigating web content.

- Data Structures & Algorithms

• Mastery of queues, stacks, hash maps, and other data structures for efficient data handling.

• Ability to design and optimize algorithms for large-scale web crawling.
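The queue-plus-hash-map pattern mentioned above is exactly what a crawl frontier looks like. A hedged sketch: breadth-first traversal over an in-memory link graph, where the graph is an illustrative stand-in for real HTTP fetching.

```python
# Crawl-frontier sketch: FIFO queue for pending URLs, hash set for dedup.
# link_graph maps url -> outgoing links (illustrative, no network I/O).
from collections import deque

def crawl(start_url, link_graph, max_pages=100):
    """Visit pages breadth-first, deduplicating with a hash set."""
    frontier = deque([start_url])   # queue of URLs to fetch
    visited = set()                 # O(1) membership checks
    order = []
    while frontier and len(order) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        order.append(url)
        for link in link_graph.get(url, []):
            if link not in visited:
                frontier.append(link)
    return order

graph = {"/": ["/a", "/b"], "/a": ["/b", "/c"], "/b": ["/"]}
print(crawl("/", graph))  # ['/', '/a', '/b', '/c']
```

Swapping the deque for a priority queue turns this into a focused crawler that fetches high-value pages first.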

- Web Scraping & Data Acquisition

• Hands-on experience with web scraping libraries/frameworks (e.g., Scrapy, BeautifulSoup, Selenium, Playwright).

• Understanding of how search engines work and best practices for web crawling optimization.
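One crawling best practice alluded to above is honoring robots.txt. A small sketch with the stdlib urllib.robotparser; the rules below are made up for illustration.

```python
# Politeness sketch: parse robots.txt rules and check URLs before fetching.
from urllib.robotparser import RobotFileParser

rules = [
    "User-agent: *",
    "Disallow: /private/",
]
rp = RobotFileParser()
rp.parse(rules)  # in production, rp.set_url(...) + rp.read() fetch the real file

print(rp.can_fetch("mybot", "https://example.com/public/page"))   # allowed
print(rp.can_fetch("mybot", "https://example.com/private/page"))  # disallowed
```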

- Databases & Data Storage

• Experience with SQL and/or NoSQL databases (e.g., PostgreSQL, MongoDB) for storing and managing crawled data.

• Familiarity with data warehousing and scalable storage solutions.
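Persisting crawled pages can be sketched with the stdlib sqlite3 module as a stand-in for PostgreSQL; the schema and rows are illustrative assumptions, not a real pipeline.

```python
# Storage sketch: upsert crawled pages keyed by URL so re-crawls dedup.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE pages (
           url        TEXT PRIMARY KEY,  -- dedup at the storage layer
           title      TEXT,
           fetched_at TEXT
       )"""
)
conn.execute(
    "INSERT OR REPLACE INTO pages VALUES (?, ?, ?)",
    ("https://example.com/", "Example", "2024-01-01T00:00:00Z"),
)
conn.commit()
row = conn.execute(
    "SELECT title FROM pages WHERE url = ?", ("https://example.com/",)
).fetchone()
print(row[0])
```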

- Distributed Systems & Big Data

• Knowledge of distributed systems (e.g., Hadoop, Spark) for processing large datasets.

- Data Analysis & Visualization

• Proficiency in Pandas, NumPy, and Matplotlib for analyzing and visualizing scraped data.
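The analysis work above would normally use Pandas/NumPy; this dependency-free sketch uses the stdlib statistics module to show the kind of crawl metric one might compute, with made-up latency numbers.

```python
# Crawl-metrics sketch: summarize per-page fetch latencies (illustrative data).
import statistics

fetch_times_ms = [120, 95, 300, 110, 104]
print(statistics.mean(fetch_times_ms))    # average latency
print(statistics.median(fetch_times_ms))  # robust to the 300 ms outlier
```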

- Bonus Skills (Nice-to-Have)

• Experience applying Machine Learning to improve crawling efficiency or accuracy.

• Familiarity with cloud platforms (AWS, GCP) and containerization (Docker) for deployment.

Hiring Process

Here is what you should expect:

• Introduction call - 35 min

• Hiring Manager Interview - 30 min

• Live-coding Interview - 45 min

• System Design Interview - 45 min

• Culture-fit discussion - 30 min

• Reference checks

Additional Information

Location & Remote

This role is primarily based at one of our European offices (Paris, France or London, UK). We will prioritize candidates who either reside in Paris or are open to relocating. We strongly believe in the value of in-person collaboration to foster strong relationships and seamless communication within our team.

In certain specific situations, we will also consider remote candidates based in one of the countries listed in this job posting — currently France, UK, Germany, Belgium, Netherlands, Spain and Italy. In that case, we ask all new hires to visit our Paris office:

• for the first week of their onboarding (accommodation and travelling covered)

• then at least 3 days per month

What we offer

💰 Competitive salary and equity

🧑‍⚕️ Health insurance

🚴 Transportation allowance

🥎 Sport allowance

🥕 Meal vouchers

💰 Private pension plan

🍼 Generous parental leave policy

🛃 Visa sponsorship

