on AI
There are some universal principles about any kind of what we call “intelligence”. These principles show how far away we are from creating anything similar to General Artificial Intelligence (as opposed to something which merely mimics and cosplays it).
There is a fundamental, principal difference between something you know (either from the experience of actually doing it, or from going through a formal proof, or at least a correct outline of one) and something you merely remember.
Merely remembering something you have heard from others, or even observed others doing, does not constitute (is not) valid knowledge.
All current AI systems are vastly complex structures trained on mere words (the model “heard” something others are saying) without constructing and validating (by means of the scientific method and formal logic) verified conceptual hierarchies, which could serve as a valid representation of knowledge and be used for decision making (just like any agent with an inner knowledge representation does).
What we call “language-based shared culture” is not a synonym for knowledge and not a valid “storage” of it (although it carries some along). Memes of various kinds are what a shared culture carries along.
- Every brain builds and maintains an inner representation of reality and consults and uses it (and only it) for decision making and actions.
- The “machinery” which builds and maintains this inner representation has evolved to match the environment in which it evolved.
- All “animal intelligence” is of this kind - the brain structure (and the inner representation it maintains) is a good-enough “match” for the environment the animal lives in.
- There is a well-known pattern in software (and in machine translation), where some format (or encoding) is first converted into a more general intermediate representation (similar to an AST), and from that representation another format or encoding can be produced (see the sketch after this list).
- Human language-based communication works exactly the same way (the early NLP guys of the 80s got it right) - it verbalizes the brain’s inner representation - encodes “knowledge” for communication.
- To be “intelligent” is to have the most adequate, least distorted inner representation of reality - one at the smallest possible distance from What Is - and to use it.
- The only way to build such a representation… well, it is complicated, but the main principle is not to have any distortion in perception and experience.
- This automatically rules out almost everything written and spoken, especially on social media and internet forums, except, perhaps, a very few definitive textbooks of the exact (and verifiable) sciences.
- Once one has an adequate inner representation of certain aspects of reality, one can express or communicate “knowledge” in informal, formal, or programming languages.
- Programming languages require a very complex process: decomposition into concepts, distilling proper abstractions (both minimal and optimal), and the ability to create, in a bottom-up process, several layers of DSLs, producing a layered conceptual hierarchy which mimics the real ones (see the second sketch below).
Then there is testing, verification and continuous refactoring.
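To make the intermediate-representation pattern concrete, here is a minimal sketch (the toy grammar and all the names are mine, not from any particular tool): one surface encoding is parsed into a small AST-like representation, and a different surface encoding is produced from that same tree.

```python
# Sketch of the "intermediate representation" pattern: surface form A -> AST -> surface form B.
# Entirely hypothetical toy example; only handles flat "a + b + c" expressions.

from dataclasses import dataclass
from typing import Union

@dataclass
class Num:
    value: int

@dataclass
class Add:
    left: "Expr"
    right: "Expr"

Expr = Union[Num, Add]

def parse_infix(text: str) -> Expr:
    """Parse a flat 'a + b + c' string into an AST (left-folded)."""
    parts = [p.strip() for p in text.split("+")]
    tree: Expr = Num(int(parts[0]))
    for p in parts[1:]:
        tree = Add(tree, Num(int(p)))
    return tree

def emit_sexpr(node: Expr) -> str:
    """Produce a different surface encoding (s-expressions) from the same AST."""
    if isinstance(node, Num):
        return str(node.value)
    return f"(+ {emit_sexpr(node.left)} {emit_sexpr(node.right)})"

print(emit_sexpr(parse_infix("1 + 2 + 3")))   # prints: (+ (+ 1 2) 3)
```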
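And a similarly toy-sized sketch of the bottom-up layering of DSLs mentioned above: each layer is defined only in terms of the layer below it, so the result is a small conceptual hierarchy rather than one flat pile of code (again, the domain and names are made up purely for illustration).

```python
# Layer 0: primitive operations on the raw representation (plain strings).
def emphasize(text: str) -> str:
    return f"*{text}*"

def line(text: str) -> str:
    return text + "\n"

# Layer 1: document-level concepts, expressed only via layer-0 words.
def heading(text: str) -> str:
    return line(emphasize(text.upper()))

def bullet(text: str) -> str:
    return line("- " + text)

# Layer 2: the top-level "DSL" of the problem domain, expressed only via layer 1.
def checklist(title: str, items: list[str]) -> str:
    return heading(title) + "".join(bullet(item) for item in items)

print(checklist("Before release", ["tests pass", "docs updated"]))
```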
Now here is the catch. No adequate representation can be produced by unsupervised learning.
Period. No adequate representation can be produced by supervised learning on appearances or ready-made results.
Only mimicking can be produced by mere observation of complex behavior.
GPT and ChatGPT mimic sophisticated talking. They cannot produce an irrefutable bottom-up argument from first principles - only search for something similar, based on the “weights”.
And never, in principle, can such a model understand the whole conceptual hierarchy which leads to the result it selects. This requires a different kind of process to be evolved.
By merely looking at some notation or behavior it is impossible, again in principle, to recover the underlying reasoning and the conceptual hierarchies which produced these observable results.
Exact sciences and concrete math guys know this.