
xAI

There is a principle which I formulated back in the old days on HN.

There is no way, in principle, to determine the wiring of a processor from the level of the code (which runs on it).
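A toy illustration of this point (an invented example, not a claim about real hardware): two interpreters for the same tiny expression language have completely different internal “wiring”, yet every program produces identical results on both, so nothing observable at the code level distinguishes them.

```python
# Two "processors" with different internal wiring for the same tiny
# expression language: one walks the tree directly, the other compiles
# to a stack machine.  Every program observes identical results, so no
# program can determine, from its own level, which wiring runs it.

def run_tree(expr):
    """Wiring A: direct recursive tree-walking."""
    if isinstance(expr, int):
        return expr
    op, a, b = expr
    x, y = run_tree(a), run_tree(b)
    return x + y if op == "+" else x * y

def run_stack(expr):
    """Wiring B: flatten to postfix, then execute on a stack machine."""
    code = []
    def emit(e):
        if isinstance(e, int):
            code.append(("push", e))
        else:
            op, a, b = e
            emit(a)
            emit(b)
            code.append((op, None))
    emit(expr)
    stack = []
    for op, arg in code:
        if op == "push":
            stack.append(arg)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if op == "+" else a * b)
    return stack[0]

program = ("+", 2, ("*", 3, 4))  # the "code": 2 + 3 * 4
print(run_tree(program), run_stack(program))  # identical on both wirings
```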

The insight behind the principle is based on the universal notion of abstraction barriers (as what atoms are to Life Itself).

It is related to the fact that merely counting observations and collecting statistics in general will never lead to a true understanding of a phenomenon.

Another example is that there is no way to discover the structure of the brain by mere introspection and speculation. A completely different, “external” social process of systematic “scientific inquiry” is required.

Similarly (based on the same universal principle), shuffling around (and recombining) words (or parts of them) will never, in principle, produce true knowledge. A different kind of process is required.

Knowing the actual causality (what causes what, and when) is the only way to know and understand. This is not available to any current form of AI, which cannot uncover (extract) the underlying causality.

Humanity has evolved the Eastern philosophy of the Mind (how it confuses and pollutes itself with its own bullshit) and the scientific method as its greatest achievements.

The scientific method, among other things, is the only valid methodology of discovering the actual Causality, which “runs” everything in the Universe.

Every non-bullshit scientist knows this, and also that causality is vastly complex, so complex that we prefer to abstract it away as “randomness”. There is nothing “random” in this Universe, which is a whole single process (in which everything is a sub-process caused by something else).

The current incarnations of AI, which juggle (re-arrange) n-grams from various texts, will not (and, in principle, never will) achieve anything of what we call “understanding” and “knowing”.
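A toy sketch (invented corpus) of what juggling n-grams amounts to: a bigram model only ever recombines fragments it has already seen, picking successors by observed frequency, with no access to what, if anything, the words are about.

```python
import random
from collections import defaultdict

random.seed(7)

corpus = ("the cause precedes the effect and the effect follows "
          "the cause because the process runs the universe").split()

# Build a bigram table: for each word, which words follow it and how often.
bigrams = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a].append(b)

# "Generate" by re-shuffling: pick the next word among observed
# successors (falling back to the whole corpus at a dead end).
word, out = "the", ["the"]
for _ in range(10):
    word = random.choice(bigrams.get(word, corpus))
    out.append(word)

print(" ".join(out))  # locally fluent, but nothing is *meant* by it
```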

There is simply no knowledge at this level, only information, which does not encode the underlying causality. The same holds for mere counting of observations, for any other form of statistics, and for abstract speculation based on anything but true knowledge.

This is a universal principle, and there is nothing to be done about it, no matter whom one hires and how much money one spends. Information, which is a “result” of social processes at another level, does not contain the underlying causality.

To put it another way: monkeys with typewriters will never produce the Upanishads, except in an imaginary world of pure mathematical abstractions as seen by some humanities dropouts.

One more time. Information (word fragments) does not contain or encode knowledge. Knowledge (which is only about causality) lies a few levels of abstraction behind information.

Mathematical theorems and their proofs, which are a very special kind of text, encode knowledge, but when one starts to shuffle their n-grams one gets bullshit.
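A small illustrative experiment (the “theorem” is an invented arithmetic identity): shuffling the tokens of a true statement almost never yields another true statement; most shuffles do not even parse.

```python
import random

random.seed(1)

# A true statement whose validity depends entirely on token order.
tokens = ["(", "1", "+", "2", ")", "*", "3", "==", "9"]

samples, valid = 200, 0
for _ in range(samples):
    shuffled = tokens[:]
    random.shuffle(shuffled)
    try:
        ok = eval(" ".join(shuffled))  # most shuffles do not even parse
    except Exception:
        ok = False
    if ok is True:
        valid += 1

print(f"{valid} of {samples} shuffles are still true statements")
```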

There is no knowledge at any level other than the accurate verbalized description of relations between properly captured abstractions, which can always be traced back to the actual recurring patterns from which they have been generalized. Traced back to What Is.

Just as mathematical assertions require proofs, non-bullshit statements about the underlying reality require something similar: a trace back to What Is (which, BTW, is what all proofs are).

Information-processing AI is, in principle, incapable of producing valid “proofs” (verifiable connections back to the underlying actual reality), except through brute force and pure chance (monkeys with typewriters).

It requires a different kind of social process, similar to the social dynamics which gave rise to mathematics and the exact sciences as social phenomena.

OK, teach AIs to argue with each other and make whole competing communities of them, but this, again, requires something which is not at the level of mere information processing (re-shuffling).

Yes, an AI could play board games with itself or with others, but there is a fundamental gap between this ability and being able to argue about the nature of reality with itself and others. This is not some backtracking search algorithm or a matching structure created by back-propagation.

This gap has something to do with the inability to see the wiring of a processor from the level of the code – the inability to see What actually Is from mere speculation based on statistics and false assumptions.

Even if they were able to argue, it would again be mere speculation based on introspection, on observation of effects (but not of causes), and on statistics, which would always be wrong. A different kind of knowledge-extraction process (an applied scientific methodology) is required.

So, good luck with that, xAI. Send me a check for saving you money from over-spending on some in-principle impossible tasks. The current incarnations of AI are nothing but highly specialized information processing, and this is not nearly enough.

Author: <schiptsov@gmail.com>

Email: lngnmn2@yahoo.com

Created: 2023-08-08 Tue 18:37

Emacs 29.1.50 (Org mode 9.7-pre)