I am currently a quantitative researcher at Headlands Technologies. I received my PhD in computer science from Duke (2020-2024), where I was very fortunate to be advised by Rong Ge. Prior to starting my PhD, I worked on risk modeling at Bolt and on NLP at Google. Before that, I was an undergrad at UVA, where I was lucky to be advised by Yanjun Qi.
LinkedIn / Github / X

* indicates equal contribution or alphabetical ordering.
Reassessing How to Compare and Improve the Calibration of Machine Learning Models
Muthu Chidambaram and Rong Ge
ICLR 2025
For Better or For Worse? Learning Minimum Variance Features With Label Augmentation
Muthu Chidambaram and Rong Ge
ICLR 2025
What Does Guidance Do? A Fine-Grained Analysis in a Simple Setting
Muthu Chidambaram*, Khashayar Gatmiry*, Sitan Chen, Holden Lee, Jianfeng Lu
NeurIPS 2024
How Flawed is ECE? An Analysis via Logit Smoothing
Muthu Chidambaram*, Holden Lee*, Colin McSwiggen*, Semon Rezchikov
ICML 2024
On the Limitations of Temperature Scaling for Distributions with Overlaps
Muthu Chidambaram and Rong Ge
ICLR 2024
Hiding Data Helps: On the Benefits of Masking for Sparse Coding
Muthu Chidambaram, Chenwei Wu, Yu Cheng, Rong Ge
ICML 2023
Provably Learning Diverse Features in Multi-View Data with Midpoint Mixup
Muthu Chidambaram, Xiang Wang, Chenwei Wu, Rong Ge
ICML 2023
Towards Understanding the Data Dependency of Mixup-style Training
Muthu Chidambaram, Xiang Wang, Yuzheng Hu, Chenwei Wu, Rong Ge
ICLR 2022 (Spotlight)
Learning Cross-Lingual Sentence Representations via a Multi-task Dual-Encoder Model
Muthu Chidambaram*, Yinfei Yang*, Daniel Cer*, Steve Yuan, Yunhsuan Sung, Brian Strope, Ray Kurzweil
ACL 2019, RepL4NLP Workshop