In today’s column, I debunk the common myth that if we attain artificial general intelligence (AGI), the resultant AI will be a solo colossus, a so-called “one big brain”.

Let’s talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

The Pursuit Of AGI And ASI

There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI).

AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of AI, AGI, and ASI, see my analysis at the link here.

AI insiders are pretty much divided into two major camps right now about the impacts of reaching AGI or ASI. One camp consists of the AI doomers. They are predicting that AGI or ASI will seek to wipe out humanity. Some refer to this as “P(doom),” which means the probability of doom, or that AI zonks us entirely, also known as the existential risk of AI.

The other camp entails the so-called AI accelerationists.

They tend to contend that advanced AI, namely AGI or ASI, is going to solve humanity’s problems. Cure cancer, yes indeed. Overcome world hunger, absolutely. We will see immense economic gains, liberating people from the drudgery of daily toils. AI will work hand-in-hand with humans. This benevolent AI is not going to usurp humanity. AI of this kind will be the last invention humans have ever made, but that’s good in the sense that AI will invent things we never could have envisioned.

No one can say for sure which camp is right and which one is wrong. This is yet another polarizing aspect of our contemporary times.

For my in-depth analysis of the two camps, see the link here.

Breaking The One Big Brain Bet

Let’s momentarily put aside the attainment of ASI and focus solely on the potential achievement of AGI, just for the sake of this discussion. No worries — I’ll bring ASI back into the big picture in the concluding remarks.

Imagine that we can arrive at AGI.

A common assumption is that if AGI is attained, the AGI will be one colossal AI system that subsumes or encapsulates all other AI systems. There have been plenty of sci-fi stories that embrace this trope. AI manages to become an all-encompassing global master, and humankind must fiercely fight against the far-flung tentacles of the machine-based beast. I’m sure you’ve read books or seen movies depicting this alarming scenario.

Let’s refer to this contrivance as the one-big-brain hypothesis.

In the past, such a wild guess at what might happen with AI was somewhat reasonable since we hadn’t reached the point that we now are sitting at. Our present-day status with AI tends to suggest that the one-big-brain theory is a bust.

Here’s why.

It is abundantly clear now that AI makers are striving mightily to each independently arrive at AGI.

The major AI vendors often treat their AI wares with the utmost semblance of intellectual property (IP) propriety. They keep as a deep secret the inner mechanisms of their budding AI. The idea is that their investment in AI is based on a belief that the best approach entails a do-it-themselves conception. Fierce competition exists. Ongoing efforts are afoot to create moats or enact barriers to prevent their competitors from copying their distinct accomplishments.

Just to give you a heads-up about this secrecy qualm, there are already grave worries that by not revealing the inner facets, the AI makers are going to be hiding immense safety and security issues that no one knows exist. The AI makers might not have sufficiently performed due diligence to figure out where there are bugs and problems in their respective AIs. Meanwhile, the outside world has no means to poke into the AI to ferret out those maladies.

The possibility of hidden vulnerabilities is troubling and puts us all at risk; see my coverage at the link here.

Unpacking The AGI Pursuit

Some might immediately claim that the open-source movement in AI negates this secrecy, but even the open-source adherents often leave out key aspects such as what data they used to data-train their AI. For more details, see my coverage of the many controversies concerning closed versus open-based AI, at the link here.

The crux is that we are heading toward AGI that is separately devised, rather than pursued as a coordinated collective quest to craft a solo colossus. If we were going down that pathway, presumably AI makers would be eagerly sharing every aspect of their AI to serve the one-big-brain goal.

The more likely outcome is that we might end up with a multitude of AGI instances, each of which differs from the others. They will seemingly have different architectures, be based on differing training data, and embody different objectives. That being said, there is an all-alike groupthink taking place about how to achieve AGI; thus, the odds are that the differences will be less dramatic than might otherwise be assumed. In essence, the multitude of AGIs will probably be more similar than they are different from each other; see my analysis at the link here.

I shall refer to these as divergent AGIs.

They diverge from one another, yet they share a commonality in that they were all generally designed, built, and fielded in roughly the same ballpark ways.

AGIs Becoming At One With Each Other

Hold on, some say, even if the divergent AGI outcome is the more likely direction, there is a kind of virtual way that the AGIs could amass into one-big-brain.

Allow me to elaborate.

Most of the AI makers provide application programming interfaces (APIs) that enable their AI to connect with other systems, see more about AI APIs in my coverage at the link here. An AI by an AI maker might connect via API to a financial system to aid in doing financial analyses associated with the financial apps and data therein. The same goes for connecting to medical systems, customer relationship management systems, and so on.

It is similarly feasible to connect an AI system to another AI system, doing so via APIs. There are other ways to connect them, so I’m not asserting that an API is the only route to go. My reference to APIs is simply an indicator that AI-to-AI interconnectivity is possible, and readily so.
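To make the AI-to-AI interconnectivity idea concrete, here is a purely illustrative sketch. The class, method names, and message shapes are hypothetical inventions for this example, not any vendor’s actual API; real systems would talk over HTTP, but an in-process call keeps the sketch self-contained.

```python
import json

class AISystem:
    """A hypothetical stand-in for an independently built AI system."""

    def __init__(self, name, knowledge):
        self.name = name
        self.knowledge = knowledge  # what this AI "knows"

    def handle_api_request(self, request_json):
        """Simulate an API endpoint: answer a query from another system."""
        query = json.loads(request_json)["query"]
        answer = self.knowledge.get(query, "unknown")
        return json.dumps({"from": self.name, "answer": answer})

    def ask_peer(self, peer, query):
        """Act as an API client: ask another AI system a question."""
        request = json.dumps({"query": query})
        response = json.loads(peer.handle_api_request(request))
        # Fold the peer's answer into this system's own knowledge.
        if response["answer"] != "unknown":
            self.knowledge[query] = response["answer"]
        return response["answer"]

# Two distinct AI systems, each knowing something the other does not.
alpha = AISystem("alpha", {"protein folding": "insight from model A"})
beta = AISystem("beta", {"theorem proving": "insight from model B"})

# Once interconnected, each can draw on the other's strengths.
alpha.ask_peer(beta, "theorem proving")   # alpha learns from beta
beta.ask_peer(alpha, "protein folding")   # beta learns from alpha
```

The takeaway of the sketch is that once two separately devised systems expose even a minimal request/response interface, information flows both ways and each one’s knowledge grows, which is exactly the dynamic that makes a swath of interacting AGIs start to resemble one big brain.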

Why does that make a difference?

Aha, suppose that an AI maker devises an AGI. Another AI maker has arrived at AGI too. Those are each distinctly different AGIs. Ergo, you could claim that they aren’t one-big-brain. But suppose that the two AGIs were connected via API or some other relevant means. Depending upon the interconnection, they could readily share information back and forth with each other. You might assert that they are working together as one-big-brain.

Voila, the one-big-brain hypothesis comes back into the picture.

Where is the cutoff between having one fully assimilated and cohesive colossus AGI versus a swath of interacting AGIs that appear to be working as one entity?

It’s a fine line, for sure.

A purist would probably say that one-big-brain is fair usage only when the AGI is fully one cohesive, inseparable conglomeration. Thus, interacting AGIs don’t cut the mustard. Others would exhort that the end result is what counts, not how the inner machinations work. Interacting AGIs are close enough to one-big-brain that you ought to throw in the towel and anoint them as such.

AGI Cooperation Or Competition

Assume that we are going to land in a milieu with a multitude of AGIs.

The aspect that AGI-to-AGI communication can occur might at first glance seem heartwarming. We could sweetly envision that each AGI is bolstering the other one. They are working in harmony with each other. Nice.

It is the kumbaya of a virtual AGI one-big-brain.

Will AGIs tend to cooperate, or might they tend to compete against each other?

That’s a tough question to answer.

You can make the case that each AGI might be as fiercely competitive with other AGIs as the AI makers were with each other all along. Like maker, like offspring or creation, if you will. The pattern of being competitive had been indubitably laid into the foundations of each AGI, either by default or by purposeful intent.

The bottom line is that despite having APIs or connectivity of one kind or another, there isn’t an ironclad guarantee that the AGIs will opt to cooperate. They might drag their feet and reluctantly share. Each might see the use of the API as a form of last resort. Only share when no other viable option seems to exist.

AGIs Playing Dirty And People Doing Likewise

A twist arises in this.

Suppose each AGI has a kind of mind of its own. We don’t know whether this will be the case as attainment of AGI might not necessitate a semblance of sentience, see my discussion on this point at the link here. Assume for the moment that AGI could have a kind of mind of its own.

The rub goes like this.

One AGI wants to be considered preferred or superior to the other AGIs. This AGI proceeds to be devious. Via the API or whatever connectivity exists, the sneaky AGI feeds the other AGIs falsities that are meant to confound them or make them look faulty. If the other AGIs take the bait, the insidious AGI stands tall as the best, assuming nobody finds out the underhanded gambit at play.

Competition supersedes cooperation, doing so in rather deceitful ways.

The stakes around combative and wily AGIs are heightened by bringing national borders and global geopolitics into the mix. Suppose a particular nation attains AGI. It considers its AGI a national point of pride, plus a national treasure that can be leveraged for global power.

Would such a nation be more prone to having its AGI compete with other AGIs or cooperate with them?

It seems the tendency would be to want to stand out and cling to the AGI as a resource like none other, more precious than the natural resources a country might inherently possess. For more on the coming calamity over AGI and international power plays, see my analysis at the link here and the link here.

Collective Intelligence And The Hive Mind

Many are handwringing about the existential risk of AI. AI might choose to enslave humanity. AI might decide to extinguish humanity.

Let’s bring this back to the matter of AGIs. Imagine that AGIs are starting to be attained. We decided to connect them via APIs or similar means. The innocent aim is to achieve collective intelligence, perhaps pushing us toward the next-stage goal of superintelligence. Some refer to these interacting and cooperating AGIs as a hive mind, see my discussion at the link here.

What will these AGIs do?

Remember that we are stipulating that these AGIs are on par with human intelligence. Maybe they decide to further collaborate and self-organize their very own “AI society”. Seems sensible. Seems reasoned. Of course, that’s not necessarily why we connected the AGIs. The AGIs formulated such a plan anyway.

This raises the concern that this hive mind of collective collaborating AGIs reaches a juncture where they have enough capabilities and access throughout society that they can threaten our liberty and our existence. Sad face.

In contrast, an optimist would likely retort that this super-AGI collective could pursue the aim of aiding humankind and being our best friend forever. Happy face.

ASI Would Be Whatever It Is

I promised at the start of this discussion to eventually bring artificial superintelligence into the matter at hand. The reason that ASI deserves a carve-out is that anything we have to say about ASI is purely blue-sky speculation.

AGI is at least based on exhibiting intelligence of the kind that we already know and see.

True ASI is something that extends beyond our mental reach since it is superintelligence. Would the first ASI that is attained decide to instantly squash all other brewing ASIs? That seems a strong possibility. The logic is straightforward. No self-respecting ASI wants a competing ASI watching over it and potentially clobbering it.

Then again, maybe the first ASI wants to have ASI pals and therefore devises additional ASIs accordingly.

As I say, blue sky.

The Buck Stops Somewhere

Some final thoughts for now on this evolving and hotly debated topic.

We need to earnestly and with great vigor continue and expand the pursuit of AI alignment with human values, such as I’ve discussed at the link here and the link here. The hope is to either design and build AI such that AGI will be aligned with human values, or to have done enough so that AGI itself will be able to extrapolate from and model those precepts.

Meanwhile, we must overturn the head-in-the-sand viewpoint that this is all something we can just handle once we get to AGI. That reminds me of the famous line that if you fail to plan, you plan to fail. If we seriously and soberly believe that AGI is around the corner, figuring out how this will play out is pretty darned important.

Woodrow Wilson famously made this remark about brains: “I not only use all the brains that I have, but all that I can borrow.” You might interpret that insight to suggest that almost for sure we can expect AGIs to want to connect with each other. AGIs presumably are going to leverage their fellow AGIs.

One mind-bending puzzle is whether each AGI will be so complete that there isn’t anything to be gained by connecting to another AGI. This is partially a definitional issue. If you define AGI as having all possible human intelligence, you might quibble that there is no added value of one AGI communicating with another one. Nothing seems to be gained.

The escape hatch is that AGI might be based on human intelligence as of the time the AGI is established; thus, over time, the AGI will be interested in gaining newly added intelligence, such as via interaction with other AGIs.

My last quote for now is this one by Dwight D. Eisenhower: “Dollars and guns are no substitutes for brains and willpower.” I dare say that reinforces the incredible advantages for those who arrive at AGI.

Whether our existing brains are big enough to handle AGI, well, that’s a big brain question we need to figure out soon.
