Will our Sci-Fi Fantasy become an AI Apocalypse?

Author: Kavya Patel

What do Alexa, Siri, and Cortana have in common? They’re all artificial intelligence (AI) products that spring into action at the mere sound of their name. They were first introduced as simple digital assistants, but almost overnight, every tech company began building its own smarter, more complex version. It makes us wonder: could Alexa one day outsmart us? Could Siri outthink us and make our decisions for us? Could Cortana, and its successors, eventually take over our jobs?

In recent years, as we have developed ever more complex algorithms to make our lives easier, the possibility of AI taking over the world has become a fiercely debated topic. While many argue that AI is not going to “take over the world” anytime soon, most agree that the possibility is worth taking seriously!

Currently, the AI we interact with is known as Narrow AI (e.g., ChatGPT). It operates only within what it is programmed to do; it cannot think for itself. This type of AI needs hundreds of thousands of examples to carry out its tasks: data is fed into it through input mechanisms and programming, then analyzed to generate a response for the end user (i.e., whoever asked the question). Humans, on the other hand, learn from first-hand experience and need only small amounts of knowledge to perform the same tasks.1 For example, a child can tell the difference between a dog and a cat after a few belly rubs and playful interactions, but Narrow AI needs exposure to hundreds, if not thousands, of examples before it understands the difference.

In contrast, Artificial General Intelligence is predicted to match the capabilities of a human: it would be able to learn and understand the way we do. To reach that point, however, researchers need a breakthrough in AI software capable of supporting and exhibiting human-like cognition.2 Even ChatGPT, which appears new to the market, is the product of many years that its creator, OpenAI, spent developing, enhancing, and testing large language models (LLMs) before launching it for public use.3

And so, experts have predicted that while AI will continue to improve human effectiveness, it can also threaten our autonomy, agency, environment, and other capabilities.4 That threat could take the form of growing dependence on AI to make our decisions, abuse of the data AI collects (deepfakes, for example), and job losses. McKinsey, an international consulting firm, estimates that as many as 800 million jobs could be lost by 2030! That future may seem far away, but AI can already play games such as chess, perform surgeries, drive cars, and even fly planes!5

So, what can we do to stop this from happening? Experts and researchers have proposed different strategies, but they agree on two things: first, the risks are global, so people must collaborate internationally to address them; and second, any strategy must be grounded in the five principles of AI ethics: transparency, justice and fairness, non-maleficence, responsibility, and privacy.6

At the end of the day, even though the threat of AI taking over the world looms over us, scientists, researchers, government officials, and others are working alongside these advancements to ensure that safety and ethical guidelines are in place. Europe has even introduced the first set of AI laws, with the goal of ensuring appropriate development and use.7 So don’t disable Alexa, Siri, and Cortana just yet. Remember, they still lack the human touch, so rest assured for now!

  1. Blais, Carolyn. “When Will AI Be Smart Enough to Outsmart People?” MIT Engineering (blog). Accessed October 19, 2024.

  2. Marr, Bernard. “Why AI Won’t Take Over The World Anytime Soon.” Forbes, May 13, 2024.

  3. Marr, Bernard. “A Short History Of ChatGPT: How We Got To Where We Are Today.” Forbes. Accessed November 11, 2024. https://www.forbes.com/sites/bernardmarr/2023/05/19/a-short-history-of-chatgpt-how-we-got-to-where-we-are-today/; IBM. “What Are Large Language Models (LLMs)?” November 2, 2023. https://www.ibm.com/topics/large-language-models.

  4. Dargham, Jamal Ahmad, Ervin Gubin Moung, Renee Ka Yin Chin, Mazlina Mamat, and Tze Hock Wong. “Artificial Intelligence (AI) and the Future of Mankind.” In Internet of Things and Artificial Intelligence for Smart Environments, edited by Hoe Tung Yew, Mazlina Mamat, Jamal Ahmad Dargham, Chung Seng Kheau, and Ervin Gubin Moung, 67–82. Singapore: Springer Nature, 2024. https://doi.org/10.1007/978-981-97-1432-2_5. 

  5. Dargham et al., “Artificial Intelligence (AI) and the Future of Mankind.”

  6. Dargham et al., “Artificial Intelligence (AI) and the Future of Mankind.”

  7. European Parliament. “EU AI Act: First Regulation on Artificial Intelligence,” August 6, 2023. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence.
