
Deep Learning

The fundamental principle behind deep learning (supervised and reinforcement learning alike) is that the “right” connections get “reinforced” while redundant or “wrong” ones get pruned.

The neuroscience meme “neurons that fire together wire together” rests on the same underlying principle.
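To make this concrete, here is a minimal sketch of the Hebbian idea in plain NumPy (my own illustration under simplified assumptions, not a faithful model of any real neural circuit): a connection is strengthened in proportion to the correlated activity of the neurons on its two ends.

#+begin_src python
# Minimal Hebbian-rule sketch: "neurons that fire together wire together".
# The connection W[i, j] grows in proportion to how often post-neuron i
# and pre-neuron j are active at the same time.
import numpy as np

rng = np.random.default_rng(0)

n_pre, n_post = 8, 4
true_coupling = rng.random((n_post, n_pre))   # hidden correlation structure
W = np.zeros((n_post, n_pre))                 # learned connection strengths
learning_rate = 0.01

for _ in range(1000):
    pre = rng.random(n_pre)                   # presynaptic activity
    post = true_coupling @ pre                # postsynaptic activity it drives
    W += learning_rate * np.outer(post, pre)  # Hebbian update: dW ~ post * pre^T

# Pairs that are consistently co-active accumulate the largest weights.
print(W.round(2))
#+end_src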

Thus we learn a correct (undistorted) representation of some aspects (recurrent patterns) of reality, which serves as an oversimplified but in principle correct “map” of a vastly complex and dynamic “territory”.

This learned representation is then used to generate other kinds of structured information, just like any good general converter between different formats does.

Learning a representation of a “function” or a transformation is a special case, and there is nothing magical about it, just as a muscle “learns” from repeated and regular actions. It is, by the way, a correct metaphor to say that the brain is also a set of “muscles”.

What we call “learning” is, in general, a trial-and-error process with a feedback loop, which, at least in theory, eventually reaches its “fixed point”.

Feedback, implemented as the backpropagation algorithm, is the key to everything: it guarantees at least some convergence, a close enough “match” (though not necessarily a global optimum).
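Reduced to a toy example, the whole feedback loop looks roughly like this (a sketch only, assuming nothing more than NumPy and a deliberately trivial one-layer linear “network”):

#+begin_src python
# Trial-and-error with a feedback loop: forward pass, measure the error,
# propagate it back as a gradient, nudge the weights, repeat until the
# process settles near a fixed point (a local optimum).
import numpy as np

rng = np.random.default_rng(1)

# A "stable environment": a fixed mapping the network is exposed to.
X = rng.random((200, 3))
true_W = np.array([[2.0], [-1.0], [0.5]])
y = X @ true_W

W = rng.normal(size=(3, 1))          # weights start at random
learning_rate = 0.5

for step in range(1000):
    pred = X @ W                     # trial: forward pass
    error = pred - y                 # feedback: how wrong we are
    grad = X.T @ error / len(X)      # gradient of the squared error
    W -= learning_rate * grad        # reinforce or weaken the connections

print(W.round(3))                    # ends up close to true_W
#+end_src

For a single linear layer the “backpropagation” collapses into one gradient expression; in a deep net the same error signal is propagated backwards through every layer by the chain rule.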

Just as in biological systems, learning occurs continuously through repeated and regular exposure to a consistent (non-self-contradictory), “stable” environment.

Biological systems, in theory, require no supervision (although parents and games are essential) because the environmental constraints act as “filters” of possible experience, so the representation ends up perfectly matching the environment in which it evolves.

The crucial part is that biological systems accumulate (via random favourable mutations) and transmit “knowledge” by encoding better “structures” of brain regions in the DNA. This is how vision and hearing came to be. Language too. Everything.

You won’t believe it, but that is it. Almost every actual application of AI, including AI-generated hentai and whatnot, is just these principles applied and implemented in code.

A neural network’s weights start at random, while nature uses an evolved structure as a “template”. A language area does not start as a random mesh of neurons; it is pre-arranged by evolution, just like any other specialized brain area.

Deep learning mimics an evolved way of doing simple pattern recognition in stable environments (where these patterns are recurrent and persistent).

This view of deep learning is less wrong, closer to actual reality. Any references to “intelligence” are just bullshit. Pattern recognition is an important first step, but intelligence is far beyond it.

“Animal” intelligence requires awareness, a consciousness capable of introspection (of its emotional states). Higher (human) intelligence requires self-consciousness and language-based abstract thought.

Again, deep learning in unstable environments is modern-day alchemy and tantra.

On my old, now defunct site I had a definitive example of how this works.

Butterflies have evolved eye-mimicking spots on their wings (without being aware of it, of course) because, in the stable shared environment, other creatures do have eyes.

This is not just something selected by evolution; it “works” because other creatures recognize these patterns as eyes and stay away. Notice that this is nothing but pattern recognition at work.

Since the Upanishads we know that any knowledge is based on experience, and our deductive or inductive reasoning also begins with something “real”, grounded in reality. At least this is how it should be.

But the brain is just layers upon layers of vastly complex, specialized neural networks. The major areas (cortices) have evolved to have a particular structure, but this structure begins as a template, which gets refined through learning by experience (by doing).

This implies that we learn what we have been exposed to and, necessarily, the constraints of the shared environment, which are, by the way, reflected in the structures of our brains (the visual system is not arbitrary; it evolved to match this particular planet).

The point is to show that ancient philosophy of mind is still valid and even applicable to artificial neural networks, however crude they may be.

Any set of algorithms, supervised and/or reinforced, will learn to match the patterns to which it has been exposed (without making any inductive or deductive steps, which require abstract thinking and language).

The big idea is that the principles of learning and of using neural nets to refine the current representation are the same for all brains.

This also implies that being exposed to (or supervised with) inconsistent, “random” bullshit will inevitably yield bullshit, so neural nets can “recognize” what isn’t there, just like sick people. This explains the repeated utter failures of applying deep learning to “abstract” domains.
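Here is a sketch of that claim (again my own toy illustration, not a benchmark): the same tiny model is trained once on labels produced by a consistent rule and once on random labels; only the consistent case generalizes to unseen data.

#+begin_src python
# "Garbage in, garbage out": fit the same linear model to consistent
# labels and to random labels, then measure the error on held-out data.
import numpy as np

rng = np.random.default_rng(2)

def train_and_test(y_train):
    """Fit by gradient descent on the training set, return test error."""
    W = np.zeros((3, 1))
    for _ in range(2000):
        grad = X_train.T @ (X_train @ W - y_train) / len(X_train)
        W -= 0.1 * grad
    return float(np.mean((X_test @ W - y_test) ** 2))

X_train, X_test = rng.random((100, 3)), rng.random((100, 3))
rule = np.array([[1.0], [-2.0], [3.0]])        # the consistent "environment"
y_train, y_test = X_train @ rule, X_test @ rule

consistent_err = train_and_test(y_train)
random_err = train_and_test(rng.normal(size=y_train.shape))  # "bullshit" labels

print(f"test error with consistent labels: {consistent_err:.4f}")
print(f"test error with random labels:     {random_err:.4f}")
#+end_src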

In such unstable domains, validation tests on new data will always fail, because the environment has evolved since the training data was collected.

The justification for this claim is that the knowledge is in the weights (in the current internal states of the neurons).

Author: <schiptsov@gmail.com>

Email: lngnmn2@yahoo.com

Created: 2023-08-08 Tue 18:31

Emacs 29.1.50 (Org mode 9.7-pre)