Preethi Lahoti
Research Scientist, Google Research
Machine Learning | ML Fairness | Responsible AI
About Me
I am a Research Scientist at Google Research, based in Zurich, Switzerland. My research spans Responsible AI, Safety in Large Language Models (LLMs), and Fairness in Machine Learning. Most recently, I have focused on developing new techniques to improve fairness and safety in LLMs. I have worked extensively on AI safety modeling for Bard and Gemini, aligning models to improve their safety in downstream tasks.
I earned my PhD in Computer Science (on Operationalizing Fairness in Machine Learning) at the Max Planck Institute for Informatics, Germany, where I was very fortunate to be jointly advised by Prof. Gerhard Weikum and Prof. Krishna P. Gummadi. I earned my M.Sc. (honors) in Machine Learning at Aalto University, Finland, and my B.Tech. (distinction) in Computer Science from Osmania University, India. During my studies, I had the opportunity to do research internships at Google Brain (Mountain View, U.S.A.), People + AI Research (PAIR) at Google Research (Zurich, Switzerland), and Bell Labs (Dublin, Ireland). Previously, I was a software engineer at Microsoft (India). During three exciting years at Microsoft, I worked in various data-centric roles, spanning data mining, data analytics, business intelligence, and search engine technology, and learned a great deal about building large, scalable systems.
Research Vision
My vision is a future where ML systems are reliable, robust, and equitable, and work for everyone. My research focuses on enabling Responsible AI by developing new models and methods that detect, prevent, and alleviate undesirable behaviors of ML systems by accounting for normative goals including fairness, robustness, trust, and safety. I have been very fortunate to have collaborated with fantastic mentors in this space, including Gerhard Weikum, Krishna Gummadi, Alex Beutel, Jilin Chen, Fernando Diaz, Asia Biega, and Aris Gionis.
Recent News
December 2023 - Gemini is public! Get the gist of our safety instruction tuning efforts in the Gemini technical report.
December 2023 - I will be a panelist at the WiNLP workshop at EMNLP 2023! Drop by if you are around to hear us discuss "AI Safety and Misinformation in LLMs".
October 2023 - Our paper "Improving Diversity of Demographic Representation in Large Language Models via Collective-Critiques and Self-Voting" is accepted at EMNLP 2023.
October 2023 - Our paper "AART: AI-Assisted Red-Teaming with Diverse Data Generation for New LLM-powered Applications" is accepted at EMNLP 2023 (industry track).
February 2023 - Bard is public! Stoked to have been a core contributor to AI safety modeling in Bard. One of the most exciting and challenging projects I have ever worked on, alongside a team one can only dream of!
May 2022 - I successfully defended my doctoral thesis! Say hello to Dr.-Ing. Preethi Lahoti! :)
March 2022 - I will be joining Google Research as a Research Scientist starting in April! Looking forward to the next chapter in life!
Old News
November 2021 - I submitted my doctoral thesis on "Operationalizing Fairness for Responsible Machine Learning"!
September 2021 - Our paper "Detecting and Mitigating Test-time Failure Risks via Model-agnostic Uncertainty Learning" is accepted at ICDM 2021.
August 2021 - Our paper "Accounting for Model Uncertainty in Algorithmic Discrimination" is accepted at AIES 2021.
June 2021 - I will be spending my summer as a research intern at People + AI Research (PAIR) at Google Research.