Reddit is a community of communities. It’s built on shared interests, passion, and trust, and is home to the most open and authentic conversations on the internet. Every day, Reddit users submit, vote, and comment on the topics they care most about. With 100,000+ active communities and approximately 116 million daily active unique visitors, Reddit is one of the internet’s largest sources of information. For more information, visit www.redditinc.com.
Reddit is continuing to grow its teams with the best talent. This role is fully remote-friendly and will remain so after the pandemic.
We’re looking for a Director of Machine Learning to lead Reddit’s efforts in building industry-leading ML systems that keep our platform safe and foster healthy online communities. This leader will drive the strategy, development, and deployment of machine learning models that detect and prevent harmful content and behavior at scale.
In this role, you will own the roadmap for Safety and Moderation ML, lead a team of applied scientists and engineers, and partner cross-functionally across Product, Engineering, Safety Operations, Trust & Community, and AI/ML Platform to innovate on real-time detection, automation, and user protection systems. You will leverage modern ML — including fine-tuned LLMs — to ensure Reddit remains a safe, welcoming, and positive environment for our global user base.
Responsibilities:
- Set the vision and strategy for applying ML to Trust & Safety, ensuring scalable, proactive protection against evolving abuse patterns.
- Lead and grow a high-performing Safety ML organization, including applied research, model development, productionization, and continuous improvement.
- Develop and deploy cutting-edge Safety ML systems (including fine-tuned LLMs and transformer models) that outperform state-of-the-art solutions in quality, latency, and efficiency.
- Partner with Trust & Safety, Product, Moderation, and AI/ML Platform teams to identify safety risks, emerging harm vectors, and ML opportunities that improve detection, enforcement, and user experience.
- Drive experimentation, evaluation, and model lifecycle management, ensuring high precision, recall, explainability, and policy alignment.
- Champion ethical and responsible AI practices in all Safety ML solutions.
- Track performance through metrics, research-based iteration, and alignment with Reddit’s safety policies and regulatory standards.
- Represent Safety ML leadership internally and externally — including conferences, publications, industry groups, and cross-company collaboration initiatives.
Required Qualifications:
- 10+ years of experience in Machine Learning, AI, or applied research, with a strong background in Trust & Safety, abuse prevention, detection, or content integrity.
- 5+ years of experience leading multi-disciplinary ML teams (applied science, engineering, analytics) in a high-growth or high-impact environment.
- Proven track record of shipping ML systems at scale in production, ideally including transformer-based models and LLM fine-tuning.
- Deep expertise in NLP, content understanding, detection systems, and supervised and weakly supervised learning techniques.
- Strong cross-functional leadership skills, with ability to influence executives and foster alignment across Safety, Product, and Engineering.
- Thought leadership in responsible AI, safety ML research, or safety measurement frameworks.
- Entrepreneurial mindset — experience founding or scaling a product or ML org.
Bonus points if you have:
- Experience building or operating real-time abuse detection and automated moderation systems in a complex user-generated content ecosystem.
- Prior work in consumer-facing tech, social platforms, or large-scale community-driven products.
#LI-SP1