About the Team

The Interpretability team studies the internal representations of deep learning models. We are interested in using these representations to understand model behavior, and in engineering models to have more understandable representations. We are particularly interested in applying our understanding to ensure the safety of powerful AI systems. Our working style is collaborative and curiosit...