
Bhagyesh Kumar
@invi-bhagyesh · AI safety
Sophomore at MIT Manipal (maths + computing). I do AI safety research; I'm not very good at it yet. I consider myself an Iterator: someone who runs experiments, iterates on ideas, and figures out what works. But I hope to eventually become a Connector: someone who can look at empirical results and know what they mean at a fundamental level.
I have previously worked on adversarial attacks [AAAI'26], which led to my broader interest in AI safety and trustworthy ML. Recently, I have been studying formal verification for autoresearch. See my research and what I’m up to now.
Other things I enjoy include competitive programming, reading blogs, and making new friends. Feel free to reach out and say hi at invi.bhagyesh@gmail.com, though I usually respond faster on social apps.
Selected research
Side Effects of Character Training: Quantifying Cross-Constitution Drift in LLMs
ICML 2026 Workshop on Pluralistic Alignment (under review)
TopoReformer: Mitigating Adversarial Attacks Using Topological Purification in OCR Models
AAAI 2026 Workshop on AI for Cyber Security (Oral)
What I'm up to now
- May 2026: Spending this summer at USTC as part of a summer exchange program, working on Ricci curvature
- Feb 2026: Researching emergent misalignment at Cornell under SPAR
- Jan 2026: Working on evaluation awareness at Algoverse
- Nov 2025: Received a $1000 scholarship to present TopoReformer at AAAI in Singapore
- Nov 2025: TopoReformer (v2) accepted at the AAAI 2026 Workshop
How I fail
- Nov 2025: AAAI Undergraduate Consortium proposal not accepted
- Oct 2025: TopoReformer (v1) rejected from a NeurIPS 2025 workshop
