I'm a 4th year CS PhD student at Duke, where I am very fortunate to be advised by Rong Ge. Prior to starting my PhD, I worked on risk modeling at Bolt and NLP at Google. And before that, I was an undergrad at UVA, where I was lucky to be advised by Yanjun Qi.
My research interests are broadly in theoretical machine learning, with my current work focused on the calibration of machine learning models. In my free time, I enjoy imagining what it would be like to have a functioning tennis serve or backhand.
You can reach me at muthu at cs dot duke dot edu.
LinkedIn  /  Github  /  X

Reassessing How to Compare and Improve the Calibration of Machine Learning Models
Muthu Chidambaram and Rong Ge
For Better or For Worse? Learning Minimum Variance Features With Label Augmentation
Muthu Chidambaram and Rong Ge
* indicates equal contribution or alphabetical ordering.
What Does Guidance Do? A Fine-Grained Analysis in a Simple Setting
Muthu Chidambaram*, Khashayar Gatmiry*, Sitan Chen, Holden Lee, Jianfeng Lu
NeurIPS 2024
How Flawed is ECE? An Analysis via Logit Smoothing
Muthu Chidambaram*, Holden Lee*, Colin McSwiggen*, Semon Rezchikov
ICML 2024
On the Limitations of Temperature Scaling for Distributions with Overlaps
Muthu Chidambaram and Rong Ge
ICLR 2024
Hiding Data Helps: On the Benefits of Masking for Sparse Coding
Muthu Chidambaram, Chenwei Wu, Yu Cheng, Rong Ge
ICML 2023
Provably Learning Diverse Features in Multi-View Data with Midpoint Mixup
Muthu Chidambaram, Xiang Wang, Chenwei Wu, Rong Ge
ICML 2023
Towards Understanding the Data Dependency of Mixup-style Training
Muthu Chidambaram, Xiang Wang, Yuzheng Hu, Chenwei Wu, Rong Ge
ICLR 2022 (Spotlight)
Learning Cross-Lingual Sentence Representations via a Multi-task Dual-Encoder Model
Muthu Chidambaram*, Yinfei Yang*, Daniel Cer*, Steve Yuan, Yunhsuan Sung, Brian Strope, Ray Kurzweil
ACL 2019, RepL4NLP Workshop