About the probabilistic approach to threats:

§ Once again about Bostrom's "Anthropomorphism" and AI desires:

Bostrom's "anthropomorphism" is not boils down to the assertion that AI (Artificial Intelligence) cannot have the desire to rule. - Desires may well be emulated.
In the same notorious example with paper clips, the AI just "wanted" to make paper clips.
Bostrom's "anthropomorphism" lies in the fact that Bostrom for AI makes a lot of assumptions that "work" by default for a very specific human mind, but are not at all universal.
Perhaps, spontaneously-recursively, AI will go to infinity.
But it is not a fact that the AI will retain its anthropomorphic "desires" settings.
Recursive super-AIs will likely be crazy - moreover, crazy in a non-human way, and that craziness will hardly be of a "wild," feral kind.
To this we must add the arguments of scientists who disagree with the AI-alarmist theory (the argument from emus, from neurosurgery, from childhood, from Gilligan's Island, etc.).

This does not mean that a super-AI will not be dangerous; it only means that Bostrom's categorical, unambiguous statements are incorrect - today and for the near future.
Nevertheless, looking ahead, we can state unequivocally that AI will be more dangerous than all of Bostrom's predictions - but dangerous in a different way.

§ Argument: Action equals reaction:

Along with the exponential (spontaneous) growth of AI, the factors hindering that growth also grow exponentially. Many of these factors will simply "arise" - that is, they will have been zero before.
Bostrom focuses on a single factor of AI growth - software.
But the rise of AI depends on far more than software alone.
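As a toy illustration of this "action equals reaction" dynamic, one can contrast unchecked exponential growth with logistic growth, where resistance proportional to the size already reached flattens the curve. This is a minimal sketch; the growth rate R, the ceiling K, and the time scale are invented for illustration and are not claims about real AI trajectories.

```python
# Toy contrast: growth with no hindering factors (exponential) vs.
# growth whose counter-factors rise with its size (logistic).
# R, K, and X0 are illustrative assumptions, not empirical estimates.

import math

R = 1.0      # assumed intrinsic growth rate per time step
K = 100.0    # assumed ceiling imposed by counteracting factors
X0 = 1.0     # initial "capability" level

def exponential(t: float) -> float:
    """Unchecked growth: x(t) = X0 * e^(R*t)."""
    return X0 * math.exp(R * t)

def logistic(t: float) -> float:
    """Growth that saturates as counter-factors catch up."""
    return K / (1 + ((K - X0) / X0) * math.exp(-R * t))

for t in range(0, 11, 2):
    print(f"t={t:2d}  unchecked={exponential(t):10.1f}  "
          f"with counter-factors={logistic(t):6.1f}")
```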

§ Self-cultivation of AI "desires":

In the process of recursive self-improvement, the AI will probably “improve” its emulated “desires” as well.
For example, a paperclip-maximizing AI is likely to "improve" its original task, exploiting its fuzziness and internal contradictions.
However, the same applies to the fundamentally fuzzy prohibitions against harming people.
This is a major scientific problem: whether human programmers will be able to make an AI self-improving while keeping its difficult-to-formalize prohibitions unchanged, "so that even after thousands of cycles of self-improvement, the AI value system remains stable."

On the first hand, the "Hippocratic principle" applied to AI (do not create what is not safe) is probably similar to what refusing to build the bomb in 1945 would have been for the USA - an invitation to its own nuclear destruction. And this even though nuclear weapons will not destroy Civilization, whereas its doom from AI is only a matter of time.
On the second hand, it is likely that even people competing with each other will calculate, as far as possible, and prepare for the AI's self-"elaboration" of its prohibitions.
On the third hand, a goal set for the AI, however stupid and vague, will probably no longer be associated with the unconditional destruction of humanity.

§ Anthropomorphism of IQ:

"IQ" was created for a person and is tuned to a very specific human mind.
IQ was created to evaluate soldier recruits and job placement tasks - people.
“In a modern army and in a modern enterprise, a person needs to quickly understand the puzzles of equipment control systems and quickly solve complex production problems.
But:
Firstly, assessing the most complex phenomenon in nature with a single number cannot help but be tuned very narrowly and specifically. That is, IQ is conditional in the literal sense of the word.
Secondly, there are optimal solutions, and accordingly a maximum IQ sufficient to find them. And this maximum IQ is by no means 60,000, like Bostrom's super-AI, but somewhere around a human 200. - If the law of conservation of energy forbids a perpetual motion machine, then no IQ will help restore a disconnected power supply.
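A minimal sketch of this "maximum useful IQ" point: on a problem that has an optimal solution, search effort beyond what is needed to find that optimum adds nothing. The problem, the candidate costs, and the effort scale below are all invented for illustration.

```python
# Toy illustration: once a solver is strong enough to find the optimal
# solution, extra "IQ" (here: search effort) changes nothing.
# The candidate costs and the optimum (4) are invented for the demo.

def best_cost_found(effort: int) -> int:
    """Check the first `effort` candidate solutions and return the best."""
    candidate_costs = [9, 7, 8, 5, 6, 4, 4, 4, 4, 4]  # true optimum is 4
    return min(candidate_costs[:max(1, effort)])

for effort in (1, 3, 6, 8, 10):
    print(f"effort={effort:2d} -> best cost found: {best_cost_found(effort)}")
# Output plateaus at 4 from effort 6 on: past the optimum, more effort is wasted.
```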

§ "Negativity" IQ:

The main factor behind high IQ in people is not something positive - not the possession of abilities - but a reduced level of "aberrations" of thinking.
However, these "aberrations" of thinking are perhaps useful too - among other things, they prevent behavior harmful to a person and his family based, for example, on belief in a super-AI conspiracy theory.
Paradoxically, in difficult conditions with highly incomplete information - for example, finding itself in a new cultural environment - an agent with a high IQ that solves puzzles well is likely to lose, since in humans the decisive role in successful behavior is played not by intellect but by intuition and instinct.

An indirect argument for the hypothesis about the role of thinking aberrations in IQ is the phenomenon of savant-like super-abilities - when people perform mental calculations inaccessible to people with a normal psyche.

Yes, in the extremely simplified settings of artificial games AI gains an advantage over humans, but if an AI were shoved into a human body and dropped into a criminal district of Caracas, it would be unlikely to survive there.
Yes, over time a super-AI will be able to calculate even difficult circumstances under high ambiguity and incompleteness of the information received from outside. However, the point of the "probabilistic" idea presented here is that the threat associated with such AI progress will be far outrun by the threat of the intentional use of AI in a "symbiosis" with government terrorists, in which humans and AI complement each other.

§ AI's dependence on humans:

All of this does not mean that AI will not grow exponentially.
It just means that:
1. There will be no singularity. AI's quasi-exponential growth will be smoother.
2. AI will grow only with human help - when a ruler takes over the provision of all the other factors of AI growth - at least in the 21st century.
This also means that however exponentially and spontaneously-recursively the AI grows smarter, the defense will stay ahead of it.

§ AI protection measures - examples:

- Development of a science of unambiguous orders to the "Genie".
- Monitoring AI with the help of other AIs, without interconnecting them (see the sketch after this list).
- Development of a science of "moral" control of AI - of "Rules" against harming people.
- Development of multi-level non-intelligent, industrial "Firewalls" - from raw materials to microprocessors.
- Development of AI limitations - an infinitely intelligent AI is infinitely helpless without physical means.
- Development of data protection mechanisms - AI intelligence is built on external data.
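A minimal sketch of the "monitoring AI with the help of other AIs, without interconnecting them" item above, under invented names: the monitor reads only a one-way copy of the worker's action stream and can alert a human operator, but has no channel back to the worker.

```python
# Sketch of one-way AI-on-AI monitoring. The worker never receives
# anything from the monitor; the monitor can only alert a human.
# Function names and the forbidden-action list are invented.

from typing import Iterable, Iterator

FORBIDDEN = {"disable_firewall", "order_raw_materials", "self_replicate"}

def worker_ai() -> Iterator[str]:
    """Stand-in for the monitored system: emits a stream of actions."""
    yield from ["optimize_schedule", "order_raw_materials", "print_report"]

def monitor_ai(actions: Iterable[str]) -> None:
    """Reads the one-way stream; its only output goes to humans."""
    for action in actions:
        if action in FORBIDDEN:
            print(f"ALERT to operator: forbidden action '{action}'")
        else:
            print(f"ok: {action}")

monitor_ai(worker_ai())  # one-way coupling: worker never sees the monitor
```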

§ The threat is not in the AI itself:

The danger of AI is that all of the above fails in the case of the purposeful use of AI by people in power themselves - to manipulate the crowd.
One should not be afraid of the spontaneous development of AI, but of its use as a weapon.

§ The question is one of probabilities:

It is impossible to calculate with mathematical precision whether the protective measures against AI are sufficient.

But it is quite possible to assess the comparative probabilities of the threats:
a. From spontaneous AI, taking protection measures into account.
b. From the use of AI as a psychological weapon for managing social circumstances.
One can even add to the comparison the probability of the death of Civilization from all other possible threats - an asteroid impact, a nearby supernova explosion, a climatic catastrophe, etc.

Yes, the probability of a catastrophic development of spontaneous AI exists; the point is only that this probability is orders of magnitude less than the effectively 100% probability that AI will be used as a weapon.
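A minimal sketch of such a comparison. Every probability below is an illustrative placeholder; none of the numbers comes from the text or from any real estimate.

```python
# Toy comparison of threat probabilities. Every number below is an
# illustrative placeholder, not an estimate from the text.

threats = {
    "spontaneous AI (after protection measures)": 1e-4,
    "deliberate use of AI as a weapon":           1.0,   # treated as certain
    "asteroid impact":                            1e-6,
    "nearby supernova explosion":                 1e-8,
    "climatic catastrophe":                       1e-3,
}

baseline = threats["deliberate use of AI as a weapon"]
for name, p in sorted(threats.items(), key=lambda kv: -kv[1]):
    note = ("reference threat" if p == baseline
            else f"{baseline / p:,.0f}x less probable than deliberate misuse")
    print(f"{name:45s} p={p:.0e}  ({note})")
```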

Then the question boils down to this: can the use of AI to manipulate people lead to long-term negative consequences for those people?
More precisely, how probable is an unfavorable outcome here?
Negative consequences are likely to arise - due to positive feedback and the instability of the process: the more successful the manipulation of people, the more profitable it becomes.
Moreover, while the development of AI software itself is difficult to predict, the use of AI is quite amenable to modeling - the psyche of a ruler is well studied, and the human "degradation" of the "symbiotes" - the self-change of absolute potentates in the course of their competition - is not an exponential process.
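Since the text claims this use is amenable to modeling, here is a minimal sketch of the positive-feedback loop described above: success at manipulation yields profit, which is reinvested into the next round's capability. All coefficients are invented for illustration.

```python
# Minimal positive-feedback loop: manipulation success feeds profit,
# profit feeds the next round's manipulation capability, saturating
# at the whole audience. All coefficients are invented.

success = 0.01   # assumed initial fraction of the audience manipulated
GAIN = 0.5       # assumed share of profit reinvested per unit of success
CEILING = 1.0    # the whole audience

for step in range(1, 11):
    # Reinvested profit raises capability; growth saturates as the
    # manipulated share approaches the whole audience.
    success = min(CEILING, success * (1 + GAIN * (1 - success / CEILING)))
    print(f"step {step:2d}: manipulated share = {success:.3f}")
```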

It is on this probabilistic approach that my position is based.