The conversation about AI takeover and the harm it could cause society has been ongoing since the early days of the internet. These AI systems provide the same services that workers in various fields do, far faster than an individual, or even a team, could deliver them. Even though people remain in control as the inventors of these systems, there is a strong and fast-growing fear about how long it will take before these dynamics shift even further than they already have.
Image Source: MaxPixel’s contributors
This long-running debate over whether AI poses a threat to people was recently reignited when leading figures in the field, including Sam Altman, chief executive officer of OpenAI; Demis Hassabis, chief executive officer of Google DeepMind; Geoffrey Hinton, the "Godfather of AI"; and hundreds of others, signed a one-sentence open letter to the public, published by the Center for AI Safety (CAIS).
WHAT THE CENTER FOR AI SAFETY BELIEVES
The open letter aims to draw attention to the risks and challenges these rapidly developing technologies carry with them. It reads, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war". The signatories, mainly people who work in AI-related fields, many of them creators of well-known AI systems, lend weight to the warning about the risks these developments might bring.
CAIS believes that AI risk has grown into a global priority, one that belongs in the same category as nuclear war and pandemics. The organization's position is that while AI has benefits and can help society prosper, its continued advancement could produce catastrophic outcomes and pose genuine existential risks. Technological progress in recent years has been a race for survival of the fittest, and AI tools now exist in practically every field of work. The automation of jobs, the spread of false information, and the use of AI in warfare are among the biggest dangers AI carries with it.
See CAIS statement: https://www.safe.ai/statement-on-ai-risk
Artificial intelligence has quickly moved to the forefront of the global power play, with industries going as far as using AI to help develop weapon systems. This progression of AI-driven weaponization is one major fear CAIS wishes to address. The prospect of AI systems performing tasks in warfare without human supervision raises safety concerns that can no longer be ignored.
We are all well versed in the fake news we see on social media platforms like WhatsApp and Facebook, but what happens when this phony news makes its way into major scientific research? AI-generated information is now used so frequently in every field that even misinformation can be presented convincingly enough to sway a professional.
Artificial intelligence is still relatively new, and its unpredictability and complexity are significant reasons we need measures to curb the dangers it brings. As AI develops, so does the fear that these developments may pose existential risks and compromise human safety. It is crucial to address these concerns and ensure the responsible development and use of AI technology.