Fei-Fei Li
Fei-Fei Li’s research centers on giving machines the ability to perceive and understand the visual world, which became a cornerstone of modern AI. She is best known for leading the creation of ImageNet, a massive labeled image dataset that transformed computer vision by enabling deep learning models to learn from millions of examples. This work directly accelerated breakthroughs in image recognition and helped spark the deep learning boom of the 2010s. More broadly, her research spans computer vision, machine learning, and cognitive neuroscience, often drawing inspiration from how humans process visual information.
In more recent years, she has expanded her focus toward human-centered AI, emphasizing systems that work collaboratively with people and align with human values. Her work explores areas like embodied intelligence (how AI interacts with the physical world), healthcare applications, and ethical AI development. Alongside technical contributions, she has been a major advocate for making AI more inclusive, transparent, and socially beneficial—shaping not just what AI can do, but how it should be built and deployed responsibly.
Jensen Huang
Jensen Huang hasn’t worked on AGI algorithms directly, but his impact on the field is hard to overstate because he enabled the hardware foundation everything else runs on. As the co-founder and CEO of NVIDIA, he led the shift of GPUs from graphics tools into the primary engines for AI training and inference. Modern systems—from large language models to multimodal AI—depend on massive parallel computation, and NVIDIA’s platforms (like CUDA and its AI chips) made it practical to train models at the scale required for anything approaching AGI.
More recently, Huang has pushed NVIDIA to become a full-stack AI infrastructure company, building not just chips but entire ecosystems for training, deploying, and networking large models (data centers, AI supercomputers, simulation platforms). This has effectively made NVIDIA the backbone of the AGI race, supplying the compute that labs like OpenAI, DeepMind, and others rely on. So while he isn’t shaping AGI theory, his contribution is decisive: he is one of the key figures making large-scale AI—and therefore AGI progress—technically and economically possible.
Geoffrey Hinton
Geoffrey Hinton is one of the foundational figures behind modern AI, often called a “godfather of deep learning.” His core contribution was developing and championing neural networks at a time when most of the field had abandoned them. He helped invent and refine key techniques like backpropagation, which allows neural networks to learn from data, and later advanced ideas around deep belief networks and representation learning. His work laid the mathematical and conceptual groundwork that made today’s large-scale AI systems—central to AGI efforts—possible.
Beyond early theory, Hinton played a direct role in the deep learning breakthrough era, including work that helped ignite the ImageNet revolution and prove that neural networks could outperform traditional approaches when scaled. More recently, he has been vocal about both the promise and risks of advanced AI, influencing how researchers and policymakers think about AGI safety. In short, while he isn’t building today’s frontier systems himself, his contributions form the intellectual backbone of nearly all current AGI approaches.
Demis Hassabis
Demis Hassabis is one of the central architects of modern AGI efforts, combining neuroscience, reinforcement learning, and large-scale AI systems into a unified vision of general intelligence. As co-founder and leader of Google DeepMind, he has driven breakthroughs that show AI can learn complex tasks from first principles—most famously with AlphaGo and its successors, which mastered games like Go and chess using reinforcement learning and self-play. These systems demonstrated that AI can achieve superhuman performance in domains that require planning, intuition, and long-term strategy—key ingredients for AGI.
More broadly, Hassabis has pushed toward building AI systems that can learn how to learn, generalize across domains, and contribute to scientific discovery. Under his leadership, DeepMind has expanded into areas like protein folding (e.g., AlphaFold), multimodal models, and agent-based systems that interact with complex environments. His approach treats AGI as a long-term scientific challenge, drawing inspiration from human cognition and aiming to build systems that can reason, plan, and discover knowledge in a flexible, general way—making him one of the most influential figures shaping the direction of AGI research today.