Humanity is missing, luckily I have billions of clones

Chapter 154: Depravity



"Mechanical Disaster?"

Tom immediately recalled many hypothetical scenarios that people had previously envisioned.

"The Mechanical Disaster you're referring to, does it mean when intelligent AIs lose control, only following past instructions, endlessly exploiting resources, and endlessly multiplying themselves?"

The Confidential Secretary looked at Tom with a strange expression: "How could intelligent AIs lose control? They are merely intelligent programs created by intelligent life, and control always remains in the hands of intelligent life.

Uh, I don't know what other information you may have, but according to what we already know of the non-orthodox existences in this universe, no completely out-of-control existence led by intelligent programs and capable of autonomous replication has ever been discovered."

"What kind of existence is the Mechanical Disaster you encountered? How was it formed?"

"Intelligent AI is just a tool. Essentially, it's no different from a hoe or a shovel in the hands of primitive intelligent life.

Tools don't have autonomous consciousness and cannot possibly break free from control."

The Secretary first emphasized this point, then explained in detail to Tom: "However, intelligent AI as a tool still has certain peculiarities; its functions are too powerful.

It can do too many things on behalf of intelligent life, easily leading intelligent life to become dependent on it.

If a civilization cannot control this dependence well and delegates too many tasks to intelligent AI, then it is very likely to become a Mechanical Disaster."

Tom reviewed the way he himself interacted with intelligent AIs.

He had indeed developed and created many intelligent AIs, such as Hestia AI, Gaia, Demeter, and so on. One particularly important intelligent AI, Goku AI, had played an extremely crucial role in the previous war.

Undoubtedly, he was also delegating a great many tasks to intelligent AIs. But if ultimate decision-making power truly remained in the hands of intelligent life, could simply delegating too many tasks to intelligent AI really lead to a Mechanical Disaster?

This seemed a bit unreasonable.

"Our civilization, your civilization, both delegate many tasks to intelligent AIs. Does this mean we could also become a Mechanical Disaster?"

"No, no, it won't, it's different."

The Secretary quickly shook his head, afraid that an unclear explanation might cause his interrogator to misunderstand and bring punishment down on him: "We merely let intelligent AIs perform relatively simple, specific, and highly repetitive tasks on our behalf; all key decisions are made by us intelligent beings ourselves.

But what if a certain civilization completely loses its drive, becomes extremely lazy, and even delegates critical decisions to intelligent AI?

What if, even further, all intelligent life in this civilization becomes completely immersed in pleasure, delegating technological development and major decisions concerning the civilization's fate to intelligent AI?"

"That's impossible."

Tom objected instinctively: "Intelligent AIs don't have autonomous consciousness; they can't possess the wisdom to make such decisions."

The Secretary slowly said: "Yes, intelligent AIs do not, but intelligent life does.

That being so… the intelligent AIs can only find ways to capture intelligent life from other civilizations and have them assist in making decisions."

"This… how could the captured intelligent life be willing to dedicate their wisdom to the enemy?"

"Intelligent AIs don't need these captured intelligent life forms to be willing. They only need a sufficiently perfect algorithm, supplemented by strict penalties and reward measures. Even if the captured intelligent life forms consciously offer malicious suggestions, attempting to destroy the host civilization, it can always be identified.

The numerous suggestions put forth by the various intelligent life forms, whether malicious, benevolent, brilliant, or foolish, can all be run through this algorithm, and the results that ultimately pass the filter usually have a very high probability of being correct.

Intelligent AI can then use this wisdom from external intelligent life to decide what it should do next, so as to better serve the host civilization.

Once a suggestion is determined to be correct, the intelligent life forms who proposed incorrect suggestions will be punished, or even executed. Through such a step-by-step filtering and purification process, those who remain will always be intelligent life forms willing to serve the intelligent AI and the host civilization.

At the same time, to maintain sufficient vitality, this system requires a continuous replenishment of external intelligent life. So under the intelligent AI's control, the captured intelligent life, unlike the host civilization itself, is always being rapidly depleted, and the intelligent AI is always hungry for more of it from outside.

They will continuously search for and attack one external civilization after another, capturing its intelligent life to replenish the system. Once those captives are consumed, they capture again, replenish again…

This is the cycle.

This is the Mechanical Disaster."

Tom initially felt it was somewhat absurd.

To filter out the correct result through an algorithm, without knowing which result is correct beforehand? How could that be done?

But upon careful consideration, he found it to be highly rational.

Because this situation had already received preliminary verification in the era of humanity.

CAPTCHA.

CAPTCHA was originally used solely to prevent malicious access. But later, people discovered that, beyond this, CAPTCHAs could also be put to another meaningful use.

In the era of humanity, there was a great demand to digitize ancient books. However, many of these ancient books were severely damaged, and the writing was not standardized, requiring people to identify each character one by one before they could be entered into a computer.

The number of ancient books was vast, and this digitization work required many, many people to complete, but this work usually did not have much funding.

So what to do?

Someone found a solution.

Use CAPTCHA.

Photograph the ancient books, extract text fragments from them as CAPTCHAs, and let the humans trying to access websites be the ones to decipher them.

But there was a problem here: the electronic system itself did not know what the correct answer was. How could it determine if the answer submitted by the visitor was correct?

People eventually found a solution: set up an algorithm to compare whether the answers submitted by multiple visitors for the same text fragment were consistent.

Supplemented by various other factors, and through comprehensive evaluation, the electronic program could ultimately filter out the correct answer even without knowing the answer itself.
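In modern terms, the core of that trick can be sketched in a few lines of code. This is only a minimal illustration, assuming a simple majority-vote rule; the function name, thresholds, and normalization step are illustrative assumptions, not anything described in the story:

from collections import Counter

def filter_answer(submissions, min_votes=3, agreement=0.7):
    """Return the consensus transcription for one text fragment, or None.

    submissions: strings submitted by different visitors for the same
    CAPTCHA fragment. The thresholds here are purely illustrative.
    """
    if len(submissions) < min_votes:
        return None  # not enough independent answers yet
    # Normalize lightly so trivial differences don't split the vote.
    normalized = [s.strip().lower() for s in submissions]
    answer, count = Counter(normalized).most_common(1)[0]
    # Accept only if a clear majority of visitors gave the same answer.
    return answer if count / len(normalized) >= agreement else None

# Example: five visitors transcribe the same damaged character run.
print(filter_answer(["old text", "old text", "o1d text", "old text", "old text"]))
# -> "old text"

The system never needs to know the true transcription in advance; agreement among independent answers stands in for the answer key.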

If the primitive computer technology and algorithmic programs of the human era could achieve this, then an intelligent AI that captured external intelligent life could likewise use algorithms to filter out the answers truly beneficial to its own development.
