HackerDose
© MRS Media Company. Hackerdose LLC. All Rights Reserved.


Editor's Picks | Technology

Why Superintelligent AGI Won't Dominate Humanity

If humans don't maintain strict control over AGI, there's a risk that AGI could eventually control humans instead.

By Marco Rizal, Editor and Journalist
Last updated: July 7, 2024, 9:39 am | 8 min read

The fear that Artificial General Intelligence (AGI) will take over the world is a common theme in discussions about the future of technology.

That fear has only grown now that big tech giants are pushing AI into their products left and right.

Many people have been intrigued by sci-fi depictions of superintelligent machines overthrowing humanity, but how realistic are these scenarios?

AGI is a hypothetical form of AI with the ability to understand, acquire, and apply knowledge across a wide variety of tasks at a level equivalent to human intelligence.

Unlike today's AI, which is built for specific tasks such as language translation or image recognition, AGI would have the capacity to perform any intellectual task that a human can.

The concept, however, remains largely speculative and far beyond current technological capabilities.

AI and the current state of the web

Like current AI systems, AGI would rely on extensive data for training. However, training is not carried out on the live internet because of its unpredictable nature.

The data is usually scraped, processed, and stored on local servers, similar to how websites and applications work.

Now here's the problem: the current internet is chaotic and disorganized, filled with irrelevant information and fake news.

A study led by Janna Anderson found growing concern among respondents that the manipulation of truth and the spread of online disinformation would become widespread by 2025.

This matters because AI training relies heavily on user-generated content from social media platforms.

One example worth mentioning is Google's Gemini, which was trained on Reddit data and drew criticism for producing racially biased outputs.

Training AGI on such data without thorough preprocessing would be difficult and could produce risky or unreliable outputs.

Companies like OpenAI most likely apply filters and carefully curated datasets to ensure that the information fed into their models is relevant, accurate, and manageable.
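
To make the idea concrete, here is a minimal sketch of the kind of quality filtering a lab might run over scraped text before training. The rules and thresholds are illustrative assumptions, not any company's actual pipeline.

```python
import re

def looks_clean(text, min_words=5, max_symbol_ratio=0.3):
    """Reject snippets that are too short or mostly non-alphabetic noise."""
    if len(text.split()) < min_words:
        return False
    alpha = sum(c.isalpha() or c.isspace() for c in text)
    return alpha / max(len(text), 1) >= 1 - max_symbol_ratio

def deduplicate(snippets):
    """Drop duplicates (case- and whitespace-insensitive), keeping order."""
    seen, kept = set(), []
    for s in snippets:
        key = re.sub(r"\s+", " ", s.strip().lower())
        if key not in seen:
            seen.add(key)
            kept.append(s)
    return kept

raw = [
    "AGI research requires carefully curated training data.",
    "AGI research requires  carefully curated training data.",  # duplicate
    "$$$ >>> !!! ###",                                          # symbol noise
    "ok",                                                       # too short
]
corpus = [s for s in deduplicate(raw) if looks_clean(s)]
print(corpus)  # only the first sentence survives
```

Real pipelines add many more stages (toxicity filtering, PII scrubbing, near-duplicate detection), but the principle is the same: curate before you train.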

AGI will be strictly controlled

The potential danger of AGI behaving like a virus and rapidly spreading worldwide is a valid concern.

To minimize this risk, AI companies would most likely run AGI on isolated servers that only a specific group of researchers could interact with.

This isolation would ensure that it cannot escape and cause significant harm, much like the way strict containment measures are used to prevent the transmission of harmful pathogens.

Similar to how people supervise children around dangerous machinery, AGI would be limited to a controlled environment.

Interactions with AGI would be closely monitored and limited to researchers with high-level access, ensuring it has no unrestricted access to the internet or other infrastructure that could pose a risk.
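
One way such containment could work in practice is an outbound gateway that checks every request from the sandboxed system against an allowlist and logs it. The host names and rules below are hypothetical, purely to illustrate the pattern.

```python
# Hypothetical containment gateway: outbound requests from an isolated
# model environment pass only if the destination is allowlisted, and
# every attempt is recorded for auditors.

ALLOWED_HOSTS = {"internal-datasets.local", "audit-log.local"}

class ContainmentError(Exception):
    pass

def gateway_request(host, requester, audit_log):
    """Permit a request only to allowlisted internal hosts; log everything."""
    permitted = host in ALLOWED_HOSTS
    audit_log.append((requester, host, "allowed" if permitted else "blocked"))
    if not permitted:
        raise ContainmentError(f"blocked outbound request to {host}")
    return True

log = []
gateway_request("internal-datasets.local", "agi-worker-1", log)
try:
    gateway_request("example.com", "agi-worker-1", log)  # open internet: denied
except ContainmentError:
    pass
print(log)
```

The key design choice is default-deny: anything not explicitly permitted is blocked and logged, mirroring how pathogen containment assumes leaks until proven otherwise.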

Although AGI has the potential to offer valuable data and solve problems with ease, it will likely not be given control over important government systems and infrastructure.

The idea of machines with human-like decision-making controlling traffic or military operations carries risks that officials are unlikely to approve.

What AGI can do is offer suggestions, leaving ultimate control in human hands. Let's take the example of an AGI providing advice on traffic management.

Although it may provide suggestions for the best routes and strategies, the ultimate decisions would still be made by human operators.
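
The advise-but-never-decide pattern described above can be sketched in a few lines. The traffic-routing scenario and function names are illustrative assumptions, not a real system.

```python
# Human-in-the-loop sketch: the model proposes, the operator disposes.

def agi_suggest_route(congestion):
    """Stand-in for a model recommendation: pick the least congested route."""
    return min(congestion, key=congestion.get)

def human_decides(suggestion, approve):
    """The operator may accept the suggestion or fall back to a default."""
    return suggestion if approve(suggestion) else "default-route"

congestion = {"route-a": 0.9, "route-b": 0.2, "route-c": 0.5}
suggestion = agi_suggest_route(congestion)
decision = human_decides(suggestion, approve=lambda r: r != "route-a")
print(decision)  # the operator accepted the suggestion
```

Structurally, the model's output is just one input to a human decision function; it never acts on the world directly.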

In addition, AGI systems would come with kill switches to shut them down promptly if they displayed any undesirable behavior.
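
A minimal software version of a kill switch is a shared flag that every worker must check before doing any work; setting it halts the system. Real deployments would be far more involved (hardware cutoffs, independent monitors), so treat this purely as an illustration of the idea.

```python
import threading
import time

kill_switch = threading.Event()  # shared halt flag
iterations = []

def worker():
    # The worker checks the kill switch on every iteration and
    # stops immediately once it is set.
    while not kill_switch.is_set():
        iterations.append(time.monotonic())  # stand-in for model work
        time.sleep(0.01)

t = threading.Thread(target=worker)
t.start()
time.sleep(0.05)
kill_switch.set()   # undesirable behavior detected: halt
t.join(timeout=1)
print("worker stopped:", not t.is_alive())
```

The essential property is that the off signal lives outside the worker's control; the worker can only obey it, not unset it from within its loop.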

Bigger threats than AGI

Many people worry that an AGI takeover would be unstoppable, since its vastly advanced intellect could outsmart any human countermeasure.

However, the non-sentient AI systems that already exist pose big threats of their own, including killer robots, engineered pathogens, and deepfake propaganda.

AI is already being used in active conflicts, such as by Israel in Gaza, where AI systems trained in war strategies help manage the battlefield.

These particular AI systems with narrow focuses have the potential to cause significant harm even without the presence of AGI.

Therefore, the potential danger of AGI does not necessarily outweigh the combined risks posed by these current technologies.

Take, for example, autonomous drones, which can pose a significant threat if used improperly.

According to a report by CEPA, China is heavily investing in AI technology and bolstering its military capabilities with a range of missiles, jets, and ships that are integrated with artificial intelligence.

In the same way, deepfake technology can be used to spread false information and cause social unrest.

As AI-generated deepfakes continue to spread misinformation during elections worldwide, non-technical policymakers, government leaders, and even tech companies are still trying to catch up.

These events show that AI can be dangerous even without reaching the level of AGI.

So, while AGI can be seen as a potential threat in the near future, it is not the only AI-related risk humans must address.

Human decision-making is here to stay

We are still a long way from developing AGI, let alone one that could pose a threat to humanity.

Assuming an AGI actually exists, it would still face certain limitations that would restrict its capabilities.

In addition, the development of AGI would require collaboration between multiple labs and countries, making it highly unlikely for any single entity to gain exclusive control over it.

This decentralized approach reduces the likelihood of a single AGI attaining global dominance. In the end, the impact of AGI on society would be determined by the choices made by humans.

If AGI proves to be more effective than humans in certain areas, it would still need to be accepted and integrated into decision-making processes by humans.

Imagine AGI providing medical guidance: the system could offer precise diagnoses and treatment suggestions, but doctors and patients would still have the final say.

In the end, the thought of AGI dominating the world seems more like a concept from science fiction than a plausible reality.

The real challenge is not about preventing AGI from taking over the world, but about making sure that all AI technologies are used in an ethical and responsible manner for the betterment of humanity as a whole.
