Youssef Ait Alama

About

"I study how language models represent information internally, and whether better architecture can make those representations cleaner."

I'm a second-year CS PhD student at UNC Charlotte, advised by Dr. Razvan Bunescu. My research sits at the intersection of LLM architecture and mechanistic interpretability. Right now I'm working on how a model's architecture affects its interpretability: the idea is that standard transformers force each token's hidden state to do two jobs at once (represent the current token and predict the next one), and separating those roles produces cleaner internal representations without hurting performance. Before this, I spent time exploring algorithm discovery and, earlier still, published a paper on fault-tolerant neural network accelerator design that won the Best Student Paper Award at DFTS 2024. The through-line across all of it is a fixation on how systems organize computation internally, whether that's a hardware accelerator routing around damaged circuits or a language model deciding what to store in a hidden state.

Research

In a standard causal language model, each token's hidden state is pulled in two directions at once. It needs to represent the current token faithfully (what word is this, what role does it play in the sentence), but it also needs to encode the prediction for what comes next. These two responsibilities compete for the same vector, and you can measure the resulting "representational drift" using tools like the logit lens: intermediate layers gradually shift away from representing the current token and toward anticipating the next one.
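The logit lens mentioned above amounts to reading an intermediate hidden state as if it were the final one: pass it through the model's final layer norm and unembedding matrix, and see which token it decodes to. A minimal toy sketch of that operation (random tensors stand in for a real model's weights and hidden states; names like W_U and ln_f are illustrative, not from my codebase):

```python
import torch

# Toy "logit lens": project each layer's hidden state through the
# final layer norm and the unembedding to read it as token logits.
# All tensors here are random stand-ins for a real model's values.
torch.manual_seed(0)
d_model, vocab = 16, 50

W_U = torch.randn(d_model, vocab)   # unembedding matrix
ln_f = torch.nn.LayerNorm(d_model)  # final layer norm

# One hidden state per layer for a single token position.
hidden_states = [torch.randn(d_model) for _ in range(4)]

for layer, h in enumerate(hidden_states):
    logits = ln_f(h) @ W_U          # decode layer h as a next-token distribution
    top = logits.argmax().item()
    print(f"layer {layer}: top token id = {top}")
```

On a real model, tracking how the top decoded token changes across layers is one way to see the drift from "current token" toward "predicted next token."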

My work explores whether this tension is a fixable design choice rather than an inherent property of autoregressive models. The approach is to introduce a minimal architectural change that gives the model a separate place to do its predictive work, so the main token representations can stay focused on encoding meaning. The key constraint is that the modification should be nearly parameter-neutral. If cleaner representations emerge, they should come from the structural separation itself, not from added capacity.

Preliminary results are encouraging. The architectural change preserves prediction quality (measured by perplexity) while significantly improving how well intermediate representations encode properties of the current token, with the effect concentrating in deeper layers where representational drift is most severe. One interesting exception: representations of syntactic structure actually benefit from the standard setup, suggesting that some amount of cross-token information mixing is useful for encoding hierarchical properties like parse depth. That tradeoff is itself informative about what transformers use their hidden states for.
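Measuring "how well intermediate representations encode properties of the current token" is typically done with linear probes: train a small classifier on frozen hidden states to predict a token property, and use its accuracy as the measurement. A generic sketch of that setup, with synthetic data standing in for real hidden states and labels:

```python
import torch

# Illustrative linear probe: a classifier trained on frozen hidden states
# to predict a property of the current token (e.g. part of speech).
# Data here is synthetic; in practice the states come from the model.
torch.manual_seed(0)
d_model, n_classes, n_examples = 16, 5, 200

states = torch.randn(n_examples, d_model)            # frozen hidden states
labels = torch.randint(0, n_classes, (n_examples,))  # current-token property

probe = torch.nn.Linear(d_model, n_classes)
opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
loss_fn = torch.nn.CrossEntropyLoss()

for step in range(200):
    opt.zero_grad()
    loss = loss_fn(probe(states), labels)
    loss.backward()
    opt.step()

acc = (probe(states).argmax(-1) == labels).float().mean().item()
print(f"probe accuracy on these states: {acc:.2f}")
```

Comparing probe accuracy layer by layer between the standard and modified architectures is what localizes the effect to deeper layers.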

I'm also involved in a funded research collaboration I can't discuss publicly yet.

Causal Language Models · Mechanistic Interpretability · Transformer Architecture · Representation Learning

Publications

Algorithmic Strategies for Sustainable Reuse of Neural Network Accelerators with Permanent Faults

Best Student Paper Award

Youssef A. Ait Alama, Sampada Sakpal, Ke Wang, Razvan Bunescu, Avinash Karanth, and Ahmed Louri

38th IEEE International Symposium on Defect and Fault Tolerance in Integrated Circuits and Systems (DFTS), 2024

"When neural network chips develop permanent hardware faults, they usually get thrown out. This paper shows how you can instead adapt the network's computations to work around the damage -- extending the useful life of the hardware and reducing waste."

"This paper is from my previous research direction in hardware reliability -- before I pivoted to algorithm discovery. I include it because it's the work I've shipped, and the Best Student Paper award means something to me."

CV


Last updated February 2026.