Added: 21 days ago
Location:
Type: Full time
Salary: Not specified

Who We Are

CloudWalk is a fintech company reimagining the future of financial services. We are building intelligent infrastructure powered by AI, blockchain, and thoughtful design. Our products serve millions of entrepreneurs across Brazil and the US every day, helping them grow with tools that are fast, fair, and built for how business actually works. Learn more at cloudwalk.io.

Who We’re Looking For

We’re hiring a Data Scientist focused on Large Language Models to join our AI R&D team. You’ll work on designing, training, and optimizing LLMs that power real product features and internal tools. You’ll touch everything from architecture and data preparation to multi-scale training and evaluation.
This is not a prompt engineering or RAG-focused role. We’re looking for someone who gets the math, understands the models, and knows how to write code that scales.
You’ll work side-by-side with engineers, MLOps, and product teams to make our LLMs smarter, faster, and more useful.

What You'll Do

  • Train and fine-tune LLMs using multi-GPU and distributed setups.
  • Analyze model performance, debug failures, and implement improvements.
  • Work with curated and synthetic datasets, including classification and generation tasks.
  • Design experiments, track results, and iterate quickly with tools like MLflow (a minimal sketch follows this list).
  • Write clean, production-ready code and collaborate via GitHub.
  • Push boundaries: think about architecture, memory efficiency, scale, and cost.
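
To ground the training and experiment-tracking bullets above, here is a minimal, illustrative sketch of a fine-tuning loop that logs hyperparameters and loss to MLflow. The model, dataset, and hyperparameters are placeholders for the example, not a description of CloudWalk's actual stack.

    # Illustrative sketch only: single-process fine-tuning loop with MLflow logging.
    # All names (model, dataset, hyperparameters) are placeholders.
    import mlflow
    import torch
    from torch.utils.data import DataLoader

    def finetune(model, dataset, epochs=3, lr=2e-5, batch_size=8, device="cuda"):
        model.to(device)
        optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
        loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
        with mlflow.start_run():
            mlflow.log_params({"epochs": epochs, "lr": lr, "batch_size": batch_size})
            step = 0
            for _ in range(epochs):
                for batch in loader:
                    batch = {k: v.to(device) for k, v in batch.items()}
                    loss = model(**batch).loss  # assumes a HF-style model that returns .loss
                    loss.backward()
                    optimizer.step()
                    optimizer.zero_grad()
                    mlflow.log_metric("train_loss", loss.item(), step=step)
                    step += 1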

What We’re Looking For

  • LLM understanding: You know how these models work under the hood - transformer internals, tokenization, embeddings, etc.
  • Stats and ML fundamentals: You have a solid foundation in statistics, machine learning, and optimization.
  • Coding skills: You write Python well. You’ve worked with PyTorch or JAX. You don’t fear Bash or git.
  • Training experience: You’ve trained models beyond notebooks. Bonus if you’ve worked with mixed precision, DeepSpeed, or multi-node training (a mixed-precision sketch follows this list).
  • Systems mindset: You understand trade-offs - throughput vs memory, latency vs accuracy.
  • Good judgment: You know when to read a paper, when to read a stack trace, and when to rewrite the dataloader or part of the LLM architecture.
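
As an illustration of the mixed-precision point above, the sketch below shows one training step with PyTorch's automatic mixed precision (autocast plus a gradient scaler). It assumes a model that returns a .loss and is not a description of CloudWalk's training setup.

    # Illustrative only: one mixed-precision training step with PyTorch AMP.
    # The model and batch are placeholders.
    import torch

    def amp_step(model, batch, optimizer, scaler):
        optimizer.zero_grad()
        # Run the forward pass in float16 where it is numerically safe.
        with torch.autocast(device_type="cuda", dtype=torch.float16):
            loss = model(**batch).loss  # assumes a model that returns .loss
        # Scale the loss so small fp16 gradients do not underflow.
        scaler.scale(loss).backward()
        scaler.step(optimizer)   # unscales gradients, then steps the optimizer
        scaler.update()          # adjusts the scale factor for the next step
        return loss.item()

    # Typical setup: scaler = torch.cuda.amp.GradScaler()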

Bonus Points

  • Experience with Hugging Face Transformers, Datasets, and Accelerate (an illustrative Accelerate sketch follows this list).
  • Familiarity with Kubernetes, Ray, or custom training infra.
  • Exposure to embeddings, classification tasks, or token-level losses.
  • Ability to mentor or guide junior researchers or engineers.
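
For the Hugging Face bullet above, the following is a rough sketch of how Accelerate can wrap a plain PyTorch loop for multi-GPU or multi-node runs; every name in it is a placeholder.

    # Illustrative sketch: wrapping a plain PyTorch loop with Hugging Face Accelerate.
    # Model and dataloader are placeholders; typically launched with `accelerate launch`.
    import torch
    from accelerate import Accelerator

    def train(model, train_dataloader, lr=2e-5, epochs=1):
        accelerator = Accelerator()  # handles device placement and DDP wrapping
        optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
        model, optimizer, train_dataloader = accelerator.prepare(
            model, optimizer, train_dataloader
        )
        model.train()
        for _ in range(epochs):
            for batch in train_dataloader:
                loss = model(**batch).loss
                accelerator.backward(loss)  # replaces loss.backward() in distributed runs
                optimizer.step()
                optimizer.zero_grad()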

How We Hire

  • Online assessment: technical logic and fundamentals (Math/Calculus, Statistics, Probability, Machine Learning/Deep Learning, Code).
  • Technical interview: dive into theory and reasoning (no code).
  • Cultural interview.
  • If you are not willing to take an online quiz, do not apply.

Diversity and Inclusion

We believe in social inclusion, respect, and appreciation of all people. We promote a welcoming work environment, where each CloudWalker can be authentic, regardless of gender, ethnicity, race, religion, sexuality, mobility, disability, or education.
