Some People Worth Listening to in Machine Learning
If you want to get me talking about machine learning, these four are a good place to start.
They’ve helped shape how I think — and if you’re new to the field, they’re also great entry points.
Plenty of talks, podcasts, and videos out there.
If I run out of “idols,” I’ll return to these and go deeper.
But for now — here you go:
🧠 GEOFFREY HINTON
Geoffrey Hinton helped make backpropagation famous — and in 2024, he was awarded the Nobel Prize in Physics for his contributions to neural networks.
He was also Ilya Sutskever’s teacher, and a central figure behind the early breakthroughs in deep learning.
What makes Hinton especially interesting to me is that he’s not just celebrating AI — he’s also deeply concerned about its direction.
He’s one of the rare voices with both credentials and a conscience, raising concrete risks:
that powerful systems might act in ways we don’t intend — or can’t control.
I’ve always felt that ethics and discipline go hand in hand.
But what happens when technologies become too accessible?
When just one badly shaped reward function in a reinforcement learning system could spiral out of control?
In the rush to benefit from AI, we ignore what could go wrong.
That’s why voices like Hinton’s matter.
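To make that reward-function worry concrete, here’s a toy sketch of my own (a hypothetical example, not anything Hinton has published): a tabular Q-learning agent whose reward pays a small bonus for a proxy behavior, and which learns to farm the bonus instead of finishing the task.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS, GAMMA, ALPHA, EPS = 4, 2, 0.99, 0.1, 0.1
Q = np.zeros((N_STATES, N_ACTIONS))

def step(s, a):
    # Intended task: walk right from state 0 to the goal at state 3 (+10, done).
    # Badly shaped proxy: every step that ends in state 1 also pays +1.
    s2 = min(s + 1, 3) if a == 1 else s   # action 1 = move right, action 0 = stay
    r = 10.0 if s2 == 3 else (1.0 if s2 == 1 else 0.0)
    return s2, r, s2 == 3

for episode in range(500):
    s, done, t = 0, False, 0
    while not done and t < 50:
        a = rng.integers(N_ACTIONS) if rng.random() < EPS else int(Q[s].argmax())
        s2, r, done = step(s, a)
        target = r + (0.0 if done else GAMMA * Q[s2].max())
        Q[s, a] += ALPHA * (target - Q[s, a])
        s, t = s2, t + 1

print(Q.argmax(axis=1))  # learned policy parks at state 1 and farms the +1 forever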
🧩 ILYA SUTSKEVER
The breakthrough came in 2012.
Sutskever, Alex Krizhevsky, and Hinton scaled up a convolutional neural network — and it worked.
AlexNet changed everything in computer vision and gave real weight to the idea that scale matters —
and that neural networks trained with backpropagation and gradient descent actually work.
For Sutskever, it wasn’t just a breakthrough — it was proof that ended a long-running debate.
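As a reminder of how ordinary that recipe looks today, here is a minimal sketch of the same ingredients: a small convolutional network trained with backpropagation and gradient descent. The toy dimensions and random data are my own stand-ins, nothing close to AlexNet’s actual scale.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local image filters
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 10),                 # classify into 10 classes
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 32, 32)               # stand-in for a real dataset
labels = torch.randint(0, 10, (8,))

for step in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                              # backpropagation
    optimizer.step()                             # gradient descent
```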
The idea of self-organizing systems wasn’t new — for example, Kohonen’s SOM had pointed in that direction decades earlier.
Sutskever has also said that one of his deeper motivations is the question of consciousness.
What is it? Where does it come from?
And — maybe — could a machine have it?
In that context, he has even speculated whether backpropagation and gradient descent might be happening in the brain itself.
Trying to answer that feels a bit like a Zen kōan — not something you solve, but something that keeps reshaping how you think.
I’ve felt that too, trying to make sense of visual data without a fixed point.
It just kept looping — and the questions got stranger, and better.
⚙️ YANN LECUN
While the hype around LLMs keeps growing, LeCun stands firm:
he argues that the autoregressive nature of current language models will never lead to real intelligence.
He consistently pushes back against the idea that any current paradigm — even human intelligence — qualifies as truly “general.”
Instead, he’s promoting something else entirely: JEPA — Joint Embedding Predictive Architecture — as a potential way forward beyond those limitations.
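As a loose sketch of the joint-embedding predictive idea (my own simplification, not LeCun’s actual architecture, and every module here is a toy stand-in): predict the embedding of a hidden part of the input from the visible context, and compute the loss in embedding space rather than in pixel space.

```python
import torch
import torch.nn as nn

embed_dim = 64
context_encoder = nn.Linear(128, embed_dim)   # encodes the visible context
target_encoder = nn.Linear(128, embed_dim)    # encodes the hidden target part
predictor = nn.Linear(embed_dim, embed_dim)   # predicts the target embedding

# Two views of the same underlying input (random stand-ins here).
context = torch.randn(8, 128)
target = torch.randn(8, 128)

pred = predictor(context_encoder(context))
with torch.no_grad():                         # target encoder provides a fixed goal
    goal = target_encoder(target)

loss = nn.functional.mse_loss(pred, goal)     # loss lives in embedding space
loss.backward()                               # no pixels are ever reconstructed
```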
Let’s see how that turns out.
If LeCun is the counterculture to LLM hype, then Abdelaziz Testas is the counterculture to that counterculture.
He keeps challenging LeCun — sometimes sharply, often insightfully.
Unfortunately, it seems to be a one-way debate — but watching it is still surprisingly thought-provoking.
Yann LeCun is the one I’ve followed the most.
I’m not even sure anymore how much field-specific vocabulary you need to keep up —
but like my wife said after watching one of his talks:
“I didn’t catch a single word, but I still found it fascinating.”
There are a lot of long, insightful conversations with him — especially on Lex Fridman’s podcast, but many others as well.
👨‍💻 ANDREJ KARPATHY
Andrej Karpathy goes deep — and stays practical — with the core topics of machine learning.
His videos guide you through the process — including writing a full backpropagation implementation from scratch.
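In that spirit, here is a minimal scalar autograd sketch, loosely inspired by Karpathy’s micrograd but written from memory as a toy, not his actual code: each value remembers how it was computed, and backward() walks the graph applying the chain rule.

```python
import math

class Value:
    """A scalar that remembers how it was computed, so gradients can flow back."""
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad          # d(a+b)/da = 1
            other.grad += out.grad         # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad   # d(a*b)/da = b
            other.grad += self.data * out.grad   # d(a*b)/db = a
        out._backward = _backward
        return out

    def tanh(self):
        t = math.tanh(self.data)
        out = Value(t, (self,))
        def _backward():
            self.grad += (1 - t * t) * out.grad  # d(tanh x)/dx = 1 - tanh^2 x
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for node in reversed(topo):
            node._backward()

# One neuron, one backward pass: y = tanh(x * w + b)
x, w, b = Value(2.0), Value(-0.5), Value(0.1)
y = (x * w + b).tanh()
y.backward()
print(x.grad, w.grad)  # dy/dx and dy/dw
```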
Why should this be done?
For example, Risto Siilasmaa has written core machine learning code himself — not for production, but to deepen his understanding.
He also encourages every technology leader to do the same.
Sometimes it’s worth reinventing the wheel.
Just to see how it rolls.
Because learning means a change in long-term memory.
But for that to happen, the information has to first be digested in short-term memory.
That said, I might still bring up someone like Sam Altman or Mark Worrell in the list of interesting keynote voices.
But for now, this is enough. Hit like if you got this far.
Something else later.