
Hi, I'm Mrinal
Welcome to my corner of the internet where I dump my thoughts on code, AI, and things that keep me up at 3 AM. I document solutions so future me doesn't have to ChatGPT them at midnight.
About Me
I'm a Machine Learning Engineer passionate about building elegant solutions to complex problems. My interests span foundation models and their safety alignment.
When I'm not coding, you'll find me reading research papers, experimenting with new technologies, or contributing to open source projects. This blog is my public notebook: a place to document what I'm learning, share hard-won insights, and occasionally rant about transformer architectures.
I occasionally mentor students navigating their first steps in ML or early career decisions in tech; it's my way of giving back to the communities I come from. If you're part of the UC or IIT system and want to chat about machine learning, research, or breaking into the industry, feel free to reach out at mrinal.anand07@gmail.com.
A Contrarian Take on the Alignment Problem
I spend a lot of time thinking about AI in 2050 and what super-alignment actually means when we're trying to align superintelligent systems with "human values." The problem is that human history isn't exactly a moral success story. It's a few thousand years of survival instincts, resource competition, and self-preservation dressed up as civilization. If we're aligning AI to human behavior patterns, we might be encoding the wrong things entirely.
I don't think alignment in the traditional sense scales to superintelligence. Teaching models to mimic human preferences through RLHF or constitutional AI feels like a band-aid on systems that are already learning from fundamentally misaligned data. We might need to stop treating AI as something that should think like us and accept it as a different form of intelligence altogether. The key might be embedding core values before the system ever touches knowledge. Think of it as building the ethical foundation first, then letting the intelligence form around it, rather than trying to course-correct after training.
This is speculative, and I'm not claiming to have answers. But if you're working on safety and alignment research, have strong counterarguments to this framing, or just want to debate whether any of this is even possible, I'd genuinely love to hear from you.
AI Safety Series
The Shoggoth meme about AI safety is real. I'm writing this series on safety and misalignment to make the community aware of it.
Recent Posts
The Alignment Problem and Shoggoth Meme
Jan 30, 2026
First in the series on understanding AI misalignment. The memes are getting real.
Self-Evolving Search Agents: How LLMs Learn Without Training Data
Jan 18, 2026
Exploring Dr. Zero's framework where LLM agents bootstrap their own training data through self-play, enabling continuous improvement without human annotation.
Learning to Adapt in Test-Time (Titans/MIRAS)
Dec 20, 2025
A deep dive into the Titans and MIRAS architectures that enable LLMs to memorize and adapt at inference time using neural memory modules.
Towards Infinite Context: How LLMs Are Breaking the Context Limit
Dec 1, 2025
A comprehensive guide to extending LLM context windows through position encodings, efficient attention, and memory-augmented architectures.
Attention That You Probably Didn't Know Existed!!
Nov 17, 2025
From sparse patterns to linear attention and state space models: exploring the zoo of efficient attention mechanisms that go beyond vanilla transformers.
Get in Touch
Feel free to reach out if you want to discuss ideas, collaborate on projects, or just say hello. You can find me on: