Youssef Ait Alama


About


"I study how machines can learn to invent algorithms, not just use them."

I'm a first-year CS PhD student at UNC Charlotte, advised by Dr. Razvan Bunescu. My research focuses on algorithm discovery: I want to understand whether giving a learning system a richer formal description of a task (not just examples of it) helps it find better algorithms faster. Before starting my PhD, I published a paper on fault-tolerant neural network accelerator design that won a Best Student Paper Award at DFTS 2024. I'm still figuring out exactly where algorithm discovery will take me, but the question itself is what I care about.

Research

The core question I'm working on is whether the way you describe a task to a learning system affects the quality of the algorithms it discovers. Most algorithm learning setups give the system examples: here are some inputs, here are the correct outputs, now figure out a procedure that generalizes. That works to a point. But I think there's something important that gets left on the table when you strip the task down to input-output pairs.

What I'm exploring is whether richer formal specifications -- what I call "level 3" definitions, which include hierarchical concept structure rather than just examples -- can guide a learning system toward better algorithms faster. The intuition is close to how humans learn: knowing why a sort is stable, not just that it produces sorted output, might help you invent a new sorting algorithm rather than just reproduce a known one.
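To make the contrast concrete, here is a small illustrative sketch (my own, not code from this research; every name in it is hypothetical). The same stable-sort task is described two ways: once as bare input-output examples, and once as a set of checkable properties. A specification like the second can judge any candidate algorithm on any input, not just the enumerated cases:

```python
# Illustrative sketch only: two ways of describing the same task
# (stable sort by key) to a learning system. All names are hypothetical.
from collections import Counter

# (1) Example-based description: bare input-output pairs.
examples = [
    ([("b", 2), ("a", 1), ("b", 1)], [("a", 1), ("b", 2), ("b", 1)]),
]

# (2) Richer specification: properties the output must satisfy,
# stated as checkable predicates rather than enumerated cases.
def is_sorted_by_key(pairs):
    """Keys appear in non-decreasing order."""
    keys = [k for k, _ in pairs]
    return all(a <= b for a, b in zip(keys, keys[1:]))

def is_stable(inp, out):
    """Items with equal keys keep their original relative order."""
    for key in {k for k, _ in inp}:
        if [v for k, v in inp if k == key] != [v for k, v in out if k == key]:
            return False
    return True

def satisfies_spec(inp, out):
    """Output is a permutation of the input, sorted, and stable."""
    return Counter(inp) == Counter(out) and is_sorted_by_key(out) and is_stable(inp, out)

# A candidate algorithm can be checked against the spec on ANY input,
# whereas the example list only covers the cases it enumerates.
candidate = lambda pairs: sorted(pairs, key=lambda p: p[0])  # Python's sort is stable
inp, expected = examples[0]
assert candidate(inp) == expected          # passes the example
assert satisfies_spec(inp, candidate(inp)) # and the specification
```

The point of the sketch: the predicates carry the "why" (stability, permutation) that a list of examples only implies, which is the kind of extra structure a richer task definition makes explicit.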

The broader gap this connects to: recent benchmarks like AlgoTune (NeurIPS 2025) suggest that current LLMs fail to produce genuine algorithmic innovations -- they can recombine what they've seen, but they don't discover. That's the problem I want to work on.

I'm also involved in a funded research collaboration I can't discuss publicly yet.

Algorithm Discovery · Formal Specifications · Hierarchical Learning · Reinforcement Learning

Publications

Algorithmic Strategies for Sustainable Reuse of Neural Network Accelerators with Permanent Faults

Best Student Paper Award

Youssef A. Ait Alama, Sampada Sakpal, Ke Wang, Razvan Bunescu, Avinash Karanth, and Ahmed Louri

38th IEEE International Symposium on Defect and Fault Tolerance in Integrated Circuits and Systems (DFTS), 2024

"When neural network chips develop permanent hardware faults, they usually get thrown out. This paper shows how you can instead adapt the network's computations to work around the damage -- extending the useful life of the hardware and reducing waste."

"This paper is from my previous research direction in hardware reliability -- before I pivoted to algorithm discovery. I include it because it's the work I've shipped, and the Best Student Paper award means something to me."

CV


Last updated February 2026.