
Machine Learning Engineer (Distributed Training)

Added: 20 days ago
Location:
Type: Full time
Salary: Not Specified

Who we are:

CloudWalk is a fintech company reimagining the future of financial services. We are building intelligent infrastructure powered by AI, blockchain, and thoughtful design. Our products serve millions of entrepreneurs across Brazil and the US every day, helping them grow with tools that are fast, fair, and built for how business actually works. Learn more at cloudwalk.io.

Who We’re Looking For:

We’re looking for a Machine Learning Engineer to own and evolve our distributed training pipeline for large language models. You’ll work inside our GPU cluster to help researchers train and scale foundation models using frameworks like Hugging Face Transformers, Accelerate, DeepSpeed, FSDP, and others. Your focus will be distributed training: from designing sharding strategies and multi-node orchestration to optimizing throughput and managing checkpoints at scale.
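
To give a flavor of the stack, here is a minimal, illustrative training step with Hugging Face Accelerate. This is a sketch, not our actual pipeline; the model, data, and hyperparameters are placeholders.

    # Minimal sketch of a distributed training step with Hugging Face Accelerate.
    # Model, data, and hyperparameters are illustrative placeholders.
    import torch
    from torch.utils.data import DataLoader, TensorDataset
    from accelerate import Accelerator
    from transformers import AutoModelForCausalLM, AutoTokenizer

    accelerator = Accelerator()
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    # Toy data just to keep the sketch self-contained; real runs stream tokenized corpora.
    ids = tokenizer("hello world", return_tensors="pt")["input_ids"]
    loader = DataLoader(TensorDataset(ids.repeat(64, 1)), batch_size=8)

    # prepare() adapts everything to the launched setup (DDP, FSDP, or DeepSpeed,
    # depending on the accelerate config) so the loop below stays unchanged.
    model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

    model.train()
    for (input_ids,) in loader:
        loss = model(input_ids=input_ids, labels=input_ids).loss
        accelerator.backward(loss)  # handles mixed-precision scaling and sharded grads
        optimizer.step()
        optimizer.zero_grad()

Launched with accelerate launch, the same script can run single-GPU, multi-GPU, or multi-node depending on the accelerate config, which is part of what makes it a useful backbone for a shared training pipeline.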

This role is not a research position: it's about building and scaling the systems that let researchers move fast and models grow big. You’ll work closely with MLOps, infra, and model developers to make our training runs efficient, resilient, and reproducible.

What You'll Do

  • Own the architecture and maintenance of our distributed training pipeline;
  • Train LLMs using tools like DeepSpeed, FSDP, and Hugging Face Accelerate;
  • Design and debug multi-node/multi-GPU training runs (Kubernetes-based);
  • Optimize training performance: memory usage, speed, throughput, and cost;
  • Help manage experiment tracking, artifact storage, and resume logic (a minimal resume sketch follows this list);
  • Build reusable, scalable training templates for internal use;
  • Collaborate with researchers to bring their training scripts into production shape.
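
For a sense of what "resume logic" means in practice, here is a minimal checkpoint/resume sketch in plain PyTorch. The path and layout are placeholders; it shows the idea, not our implementation.

    import os
    import torch

    CKPT = "checkpoints/latest.pt"  # placeholder path

    def save_checkpoint(model, optimizer, step):
        # Write atomically: dump to a temp file, then rename, so a crash
        # mid-write never leaves a corrupt "latest" checkpoint behind.
        os.makedirs(os.path.dirname(CKPT), exist_ok=True)
        tmp = CKPT + ".tmp"
        torch.save({"model": model.state_dict(),
                    "optimizer": optimizer.state_dict(),
                    "step": step}, tmp)
        os.replace(tmp, CKPT)

    def load_checkpoint(model, optimizer):
        # Resume from the last checkpoint if one exists; otherwise start at step 0.
        if not os.path.exists(CKPT):
            return 0
        state = torch.load(CKPT, map_location="cpu")
        model.load_state_dict(state["model"])
        optimizer.load_state_dict(state["optimizer"])
        return state["step"]

In sharded setups (FSDP, DeepSpeed ZeRO), state is partitioned across ranks, so the same idea is expressed through those frameworks' own checkpoint utilities rather than a single torch.save.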

What We’re Looking For

  • Expertise in distributed training: Experience with DeepSpeed, FSDP, or Hugging Face Accelerate in real-world multi-GPU or multi-node setups (a minimal FSDP sketch follows this list);
  • Strong PyTorch background: Comfortable writing custom training loops, schedulers, or callbacks;
  • Hugging Face stack experience: Transformers, Datasets, Accelerate. You know the ecosystem and how to bend it;
  • Infra literacy: You understand how GPUs, containers, and job schedulers work together. You can debug cluster issues, memory bottlenecks, or unexpected slowdowns;
  • Resilience mindset: You write code that can checkpoint, resume, log correctly, and keep running when things go wrong;
  • Collaborative builder: You don’t mind digging into other people’s scripts, making them robust, and helping everyone train faster.
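
As a rough illustration of the FSDP side of this, here is a minimal setup sketch. The model is a stand-in, and it assumes a standard torchrun launch; it is not a production configuration.

    # Minimal FSDP setup sketch; assumes launch via
    #   torchrun --nproc_per_node=8 train.py
    # which sets RANK / WORLD_SIZE / LOCAL_RANK. The model is a stand-in.
    import os
    import torch
    import torch.distributed as dist
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, MixedPrecision

    dist.init_process_group("nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

    model = torch.nn.Sequential(          # stand-in for a real transformer
        torch.nn.Linear(1024, 4096),
        torch.nn.GELU(),
        torch.nn.Linear(4096, 1024),
    ).cuda()

    # FSDP shards parameters, gradients, and optimizer state across ranks;
    # bf16 mixed precision roughly halves memory and communication volume.
    model = FSDP(model, mixed_precision=MixedPrecision(param_dtype=torch.bfloat16))
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)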

Bonus Points

  • Experience with Kubernetes-based GPU clusters and Ray;
  • Experience with experiment tracking (MLflow, W&B);
  • Familiarity with mixed precision, ZeRO stages, model parallelism (see the config sketch after this list);
  • Comfort with CLI tooling, profiling, logging, and telemetry;
  • Experience with dataloading bottlenecks and dataset streaming.
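
For a flavor of what ZeRO stages and mixed precision look like in configuration, here is an illustrative DeepSpeed config fragment. The values are placeholders, not a recommended setup.

    # Illustrative DeepSpeed config as a Python dict; values are placeholders.
    ds_config = {
        "train_micro_batch_size_per_gpu": 8,
        "gradient_accumulation_steps": 4,
        "bf16": {"enabled": True},            # mixed precision in bfloat16
        "zero_optimization": {
            "stage": 2,                       # stage 2 shards optimizer state and
                                              # gradients; stage 3 also shards params
            "overlap_comm": True,             # overlap gradient reduction with backward
            "contiguous_gradients": True,     # reduce memory fragmentation
        },
    }
    # Typically passed to deepspeed.initialize(model=model, config=ds_config).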

How We Hire

  • Online assessment: technical logic and fundamentals (Math/Calculus, Statistics, Probability, Machine Learning/Deep Learning, Code)
  • Technical interview: deep dive into distributed training theory and reasoning (no code)
  • Cultural interview
  • If you are not willing to take an online quiz, do not apply.

We’re excited to welcome you if you’re passionate about building ML infrastructure at scale and enabling researchers to move fast.

