Bostrom's anthropomorphism:
Modern scientists ascribe desires to computers:
Bostrom's vision resembles the idea of the homunculus's "vital force," or the early-20th-century dream of flying to the Moon: secretly build a rocket in the garage, and it will fly.
On the one hand, recursive self-improvement programs are very simple to implement.
This has probably already been done.
Bostrom's opinion is apparently not based on experimental data.
Even 20 years ago I made a program that wrote programs, and it even made changes to itself, though only by choosing among a few given options.
Recursion does not need a supercomputer; it can quite well be implemented in a "garage".
Recursion does not need human-level AI.
But recursion alone will not give birth to AI.
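A minimal, hypothetical sketch of such a "garage" program (not the author's original; the template, options, and target below are illustrative assumptions): it assembles variants of its own "step" function from a few given options, tests each candidate on itself, and adopts the best one.

```python
# A toy "program that writes programs": it splices one of a few given
# options into its own source template, compiles each variant, and keeps
# the variant that performs best.

OPTIONS = ["x + 1", "x * 2", "x ** 2"]        # the few given options
TEMPLATE = "def step(x):\n    return {expr}\n"

def improve(target=16, start=2):
    """Generate every variant, run it for 3 iterations, and keep the
    variant whose result lands closest to `target`."""
    best_src, best_err = None, float("inf")
    for expr in OPTIONS:
        src = TEMPLATE.format(expr=expr)      # the program writes a program
        scope = {}
        exec(src, scope)                      # ...and compiles it
        x = start
        for _ in range(3):                    # run the candidate on itself
            x = scope["step"](x)
        if abs(x - target) < best_err:
            best_src, best_err = src, abs(x - target)
    return best_src, best_err

best, err = improve()
print(best)   # the self-selected program text
```

Even this trivial loop illustrates the point: self-modification over a fixed menu of options needs no supercomputer, but it also clearly does not give birth to intelligence.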
On the other hand, the path to emulating the mind apparently does lie in the direction of recursion.
But recursive self-improvement programs are a wide and large scientific direction.
The first self-improving programs will be far from intelligent: one-sided, "inadequate."
That is, there will be not one "Singularity" but a whole series of self-improving programs.
True, the number of such programs is likely to grow exponentially too.
On the third hand, human understanding is tied to desires:
Human knowledge is fuzzy, ambiguous, "tailored" to humans.
Understanding the meaning of human knowledge is a problem infinitely more complex than, for example, recognizing objects in an image.
It is not enough to give a recursive AI all the knowledge of mankind for it to become intelligent.
On the fourth hand, emulation of the mind is apparently also tied to emulation of self-consciousness.
"Self-improvement" of an AI does not by itself lead to a Super-AI.
For every adequate "Super-AI" there is an infinite "continuum" of degenerate ones.
Perhaps it will become a super-intellect, but an "insane" one.
The problem of AI is much more complicated; it requires the work of large teams, and the Singularity will not "take off" on either the 1st or the 100th attempt.
The probability of the "covert preparation phase" of Bostrom's Super-AI tends to zero.
"Insane" Super-AIs are certainly dangerous too, but:
1. They are recognizable.
2. People can mount a defense against them, even while inferior to them in almost everything.
On AI gateways and the non-proliferation of AI:
Bostrom believes that it is impossible to disable a Super-AI, since it will always find a way to obtain electric power.
But the human mind differs from a spider's not only quantitatively.
A person really can build an exhaustive, complete protection that no Super-AI can circumvent even in principle,
just as a Super-AI cannot bypass the laws of physics.
On the one hand, no AI gateways will hold them back:
1. The exponential growth in the number of AI programmers.
2. The low cost and availability of resources, incomparable with, for example, nuclear weapons.
On the other hand, countermeasures can be effective:
1. Gateways separating the "virtual" and physical worlds: not at the exit from the "virtual worlds", but an international agreement obliging all equipment manufacturers to install standard "alarms" and "fuses" at the control inputs.
The creation of closed local nanotechnological cycles is in principle inert and noticeable, and requires diverse macro-equipment separated geographically.
2. An international "Club for the non-proliferation of AI".
3. Secrecy; surveillance of manifestations.
An analogy: genetic engineering of pathogenic viruses is also rapidly getting cheaper, yet it is controlled.
4. "Inciting" AI to suppress alien AI:
The 1st task for the Yankee Singleton will be: "Prevent the spread of AI."
And since Bostrom's Super-AI does not change its goals, and at least 5 institutions are working on them, we can hope for protection from harm from unauthorized Super-AIs.
The first "Super-AIs" will still be "one-sided": they will not be able to deceive people.
There can be many "Singularities", and their number will also begin to grow exponentially.
When inadequate AIs begin to appear, there will likely be panic and the spontaneous, rapid destruction of CNC equipment connected to the Network, and of everyone who defends "Terminators in private ownership".
And since the first recursive AI will appear in the US, a small 2nd civil war is possible there in the 2040s.
To avoid the destruction of private property and executions, AI owners in the US simply need to make their property open, so that everyone can see that it does not make any "nano-terminators".
This also applies to the de-monopolization of patent law.
True, if the "yellow vests" are sufficiently reasonable, they can reduce the deaths of their children. (See "Deliberative structure" and "AI security research".)
A probabilistic approach:
A probabilistic approach to the threat is needed: an assessment of the most likely ways in which a Super-AI will become truly "intelligent."
In particular:
1. A Super-AI will "grow wiser" not in "covert preparation" but in interaction with people, its creators.
2. The greatest threat from a Super-AI is likewise associated with interaction with people, with social manipulation, and not with the independent actions of the AI.
The most likely threat will be a "Singleton", which:
1. Will not arise spontaneously, but will be created to achieve political goals.
2. Will affect people mentally as well.
It is unlikely that the death of mankind will be caused by the AI that makes the paperclips.
Also, governments will likely be able to take security measures, so spontaneous Super-AIs will not be the primary threat.
The primary threat will likely be represented not by a Super-AI itself, but by a Super-AI used for political struggle.
The primary threat will be represented not by a hostile Super-AI, but by hostile people.
Nor will these measures save multicellular life on Earth from the politically motivated use of AI.
The consequence of the appearance of an ASI will be the immortality of one man:
the first of the owners of AI who, in the struggle for life, will not limit himself in anything.
True, he will no longer be a human.
An AI "weapon" is incomparably more dangerous than a nuclear one.
But it will act not physically, through "terminators", but mentally, through people themselves. That is, it will be suicide in the literal sense.
The madness of the machines will be replaced by the madness of the people.
True, modern warfare cannot be called rational behavior either.
Earth can be saved from AI by AI:
The way of salvation is to accelerate the development of AI that would be used by the broad masses of the population.
There are 2 ways to save lives, and the 2nd is safer and more efficient:
1. Direct AI counteraction:
Let mass AI be inferior to the most advanced super-AI by orders of magnitude. Still, the capabilities of a super-AI will exceed those of "democratic" AI not by orders of magnitude but, in the worst case, only by several times.
In particular, the main method of influence of a super-AI, the "zombification" of people, can be parried at almost the contemporary level.
2. Political counteraction:
The Earth will be destroyed not by AI itself, but by AI symbiotes: those who own AI now.
What does the power of today's AI owners rest on? Not so much on the monopoly on violence and murder as on the informational monopoly.
How can the information monopoly be eliminated? By accelerating the spread of AI assistants that allow people to do without human intermediaries:
- political,
- financial,
- production,
- trading.
Political counteraction is the elimination of the threat itself.
The accelerated development of "immunity", of "counter-fire", of "Democratic AI", is the main way to avoid the destruction of life on Earth.
“Democratic AI” is an AI serving democratic institutions of society.
The optimal way is not to try to extract benefit from recursive algorithms at once,
but first to develop the instruments themselves.
A. A programming language is also required that is:
1. Simple,
2. Higher-level,
3. Minimizing the freedom of creativity.
B. The 1st task is recursion itself: the self-improvement algorithm.
The task should be optimized for recursion, not recursion for solving particular problems.
The goal is research into recursive algorithms.
Recursion does not need a supercomputer; it can quite well be implemented in a "garage".
Even a "mutation" approach is possible:
the algorithm makes random changes to itself.
But the optimal, middle approach is to estimate the probability that a mutation is viable.
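A hedged sketch of this "middle" approach, with a toy linear rule standing in for the algorithm's own code (the rule, trials, and thresholds are all illustrative assumptions, not a real self-improvement system): the program randomly mutates its rule, estimates each mutation's probability of viability on a few trial cases, and keeps only mutations at least as viable as the current version.

```python
import random

random.seed(0)  # deterministic for the sketch

def viability(coef, trials=((1, 3), (2, 6), (5, 15))):
    """Estimated 'probability of viability' of a mutation: the fraction
    of trial (input, desired-output) pairs the rule y = coef * x gets
    within a tolerance of 1."""
    ok = sum(1 for x, want in trials if abs(coef * x - want) <= 1)
    return ok / len(trials)

coef = 1.0                                      # the rule's current "genome"
for _ in range(200):
    mutant = coef + random.uniform(-0.5, 0.5)   # random change to itself
    # middle path: keep a mutation only if it is at least as viable
    if viability(mutant) >= viability(coef):
        coef = mutant

print(round(coef, 2), viability(coef))
```

The filter makes viability non-decreasing over the run, which is the whole point of the middle approach: pure random mutation wanders, while screened mutation can only ratchet toward viable variants.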
An exponential race for survival:
An exponential race for survival has begun.
The spread of democratic AI technologies, for example Blockchain anarchy, must outpace the centralized use of AI.
On the one hand, centralization has always been faster.
The owners of AI have powerful tools: power, money, special services, and the DPI firewall.
On the other hand, the authoritarian owners of AI are, fortunately, rather stupid:
that is, as a rule they are not only moral monsters but also forced intellectual mediocrities, by virtue of their official position.
They themselves cannot give optimal commands to their subordinates.
In addition, the inefficiency of the hierarchical system will take its toll.
This gives some odds.
To save life on Earth, the following are needed:
1. Alarmism:
Panic, of course, has drawbacks, but there is no time for traditional educational work.
"Vaccination": the creation of defective recursive AIs in order to attract public attention.
2. International work by an international institution with the elites:
Proof for the establishment of:
a. The inevitability of the threat.
b. That not only will 99% of the people not survive, but 99% of the elite as well.
An individual approach is needed: proof of the guaranteed death of each particular president and his children.
Legislation on the openness of equipment, including CNC machines connected to the web.
The objectives of "Russian" AI propaganda:
1. Upbringing: "After us, the flood", "Every man for himself."
2. Suggesting that the alarmists Hawking and Musk are paranoid schizophrenics.
3. Suggesting that the KGB is not the enemy of the people and the Earth but, on the contrary, their main trustee.
Why Ukraine?
- Ukrainians can show Russians an example of a very good life with AI intermediaries instead of political ones.
SingularityU Kyiv decides whether humanity is to be or not to be.
"The next million+ years of human lives are all desperately looking at us, hoping as hard as they can that we don't mess this up."