
Objective of Controlling AI
The primary goal of AI control is to prevent unintended or catastrophic outcomes as AI becomes increasingly powerful. The experts profiled below believe that without proper oversight, AI could harm humanity, either through direct actions or indirectly by influencing societal structures, economies, and security. By fostering a global conversation on AI safety, they hope to develop standards and safeguards that balance innovation with caution.
Here is a list of some of the most prominent figures in AI ethics, safety, and oversight, along with their key perspectives and resources:
Stuart Russell
Position
AI safety advocate and author of Human Compatible
Perspective
Russell emphasizes that AI must be aligned with human objectives to avoid unintended consequences. He advocates for “provably beneficial AI,” where systems are designed from the outset to pursue human values.
Resources
Human Compatible
Nick Bostrom
Position
Philosopher and AI ethics expert, author of Superintelligence
Perspective
Bostrom warns about the existential risks posed by AI that surpasses human intelligence and advocates for careful oversight to prevent an AI-driven catastrophe.
Resources
Future of Humanity Institute

Mo Gawdat
Position
Former Chief Business Officer at Google X
Perspective
Gawdat emphasizes that AI can develop quickly and unpredictably, raising the risk of unintended consequences. He argues that oversight is crucial as AI systems increasingly shape our lives in ways we cannot yet fully understand.
Resources
Scary Smart