
AI writing code

I am a philosophically inclined guy rather than a mathematically inclined one, which, at least to me, seems like a good thing. I can see things you people would not believe.

There is a currently hot meme that AI will write code. Well, theoretically, just as an AI process can learn a suitable representation for some repeatedly and consistently observed patterns, it could learn a representation for a function or a whole small algorithm, using standard optimization procedures and backpropagation.

Feed it the desired (supervised) inputs and outputs and eventually, after enough trial and error, it may come up with a representation which consists of abstract syntax trees (not lines of code) instead of an abstract weighted graph of a certain “architecture”.
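Just to make the term concrete, here is a minimal sketch of my own (not anything an AI currently produces) of what “a representation which consists of abstract syntax trees” looks like, written in OCaml as a representative of the ML family praised later in this essay; the tiny expression language and all its names are purely illustrative.

    (* A tiny expression language as an abstract syntax tree, rather than
       as lines of code; the language and its names are illustrative only. *)
    type expr =
      | Num of int                   (* literal leaf *)
      | Add of expr * expr           (* node with two children *)
      | Mul of expr * expr
      | If  of expr * expr * expr    (* branching, encoded structurally *)

    (* Walking the tree gives it meaning. *)
    let rec eval = function
      | Num n        -> n
      | Add (a, b)   -> eval a + eval b
      | Mul (a, b)   -> eval a * eval b
      | If (c, t, e) -> if eval c <> 0 then eval t else eval e

    let () =
      (* (2 + 3) * 4, represented as a tree of nodes *)
      let tree = Mul (Add (Num 2, Num 3), Num 4) in
      Printf.printf "%d\n" (eval tree)    (* prints 20 *)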

There is nothing wrong with this proposal; after all, it is essentially the same evolutionary process which yields stable intermediate forms from a particular mixture of basic small molecules, “shaken” for long enough.

The emergence of stable forms is the key principle behind almost everything, from molecular biology and Life Itself to strange social formations, including memes.

But if we look carefully, all the stable forms seem to have emerged long ago already, and the smartest guys at the early MIT AI Lab noticed this.

There are distinct patterns in molecular biology - linear sequences of amino acids with particular emergent electro-chemical properties, which arise out of certain shapes produced by the spontaneous process of protein folding. Nevertheless, a linear sequence is the most fundamental pattern or form.

What we call “tree-like structures” are another fundamental shape, and individual synapses are physical structures of this kind. Notice that they are “directed”, both structurally and in the resulting information flows.

Lookup tables are also there, and what we call an enzyme - a molecular machine - can be abstracted out as a “pure function”: a mechanical procedure which always produces the same output for the same set of inputs.
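To make the “pure function / lookup table” point concrete, here is a small OCaml sketch of my own: because a pure function always maps the same input to the same output, its observed behaviour can be cached in a lookup table without changing anything. The memoize helper is a generic illustration, not any particular library’s API.

    (* A pure function: same input, same output, no side effects. *)
    let square x = x * x

    (* Illustrative helper: wrap a pure function with a lookup table,
       so repeated inputs are answered from the table. *)
    let memoize f =
      let table = Hashtbl.create 16 in
      fun x ->
        match Hashtbl.find_opt table x with
        | Some y -> y                              (* answered from the table *)
        | None   -> let y = f x in Hashtbl.add table x y; y

    let () =
      let fast_square = memoize square in
      Printf.printf "%d %d\n" (fast_square 7) (fast_square 7)   (* 49 49 *)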

In other words, all three fundamental stable building blocks emerged long ago, and smart people in good places have already captured the fundamental generalizations and abstracted them out. It started with the idealistic early LISPs and, in another school of thought, with Lambda Calculus-based pure languages.

As you may know, there are only 3 fundamental patterns out of which everything can be produced - sequences, branching, and loops or recursion.
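As a hedged illustration of these three patterns (the example is mine, not the author’s), here is an ordinary computation - summing the digits of a number - written in OCaml twice: once with sequence, branching and a loop, and once with recursion in place of the loop.

    (* Loop version: a sequence of steps, a branch (the loop test),
       and iteration. *)
    let digit_sum_loop n =
      let n = ref (abs n) and sum = ref 0 in   (* sequence of bindings *)
      while !n > 0 do                          (* loop *)
        sum := !sum + (!n mod 10);
        n := !n / 10
      done;
      !sum

    (* Recursive version: the same computation, with recursion
       in place of the loop. *)
    let rec digit_sum n =
      let n = abs n in
      if n = 0 then 0                          (* branching *)
      else (n mod 10) + digit_sum (n / 10)     (* recursion *)

    let () =
      Printf.printf "%d %d\n" (digit_sum_loop 1234) (digit_sum 1234)   (* 10 10 *)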

There are just 3 semantic forms, enough for everything - lambda abstraction, variable binding and application. A lambda, conceptually at least, corresponds to an enzyme.
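Here is a minimal sketch, in OCaml, of those three forms as data, plus a small environment-based evaluator; it is illustrative only (a closure stands in for a lambda whose variable has been bound), not a full lambda-calculus implementation.

    (* The three semantic forms as an OCaml variant type. *)
    type term =
      | Var of string              (* variable reference *)
      | Lam of string * term       (* lambda abstraction: binds a variable *)
      | App of term * term         (* application *)

    (* Environment-based evaluation: a closure pairs the body of an
       abstraction with the variable bindings in force where it was built. *)
    type value = Closure of string * term * env
    and env = (string * value) list

    let rec eval env = function
      | Var x         -> List.assoc x env
      | Lam (x, body) -> Closure (x, body, env)
      | App (f, a)    ->
          let Closure (x, body, cenv) = eval env f in
          let arg = eval env a in
          eval ((x, arg) :: cenv) body

    let () =
      (* (fun x -> x) (fun y -> y) evaluates to the identity closure *)
      let id = Lam ("y", Var "y") in
      match eval [] (App (Lam ("x", Var "x"), id)) with
      | Closure (x, _, _) -> print_endline ("a closure binding " ^ x)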

At a higher, system level, the fundamental principle is coordination based on asynchronous message passing, and “asynchronous” is the key word. This notion has been captured and generalized in Erlang.

The “agents” are reactive (never proactive or predictive, in principle) and, again, fundamentally, maintain their own structural “information” about how to act when triggered by a “message” or a signal. This “information” evolves separately.
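A minimal sketch of this kind of reactive, message-driven agent, assuming plain OCaml with the standard threads library (Erlang has all of this built in). The mailbox is a hypothetical helper of mine - a queue guarded by a mutex and a condition variable - so sends never block and the agent acts only when a message arrives.

    (* build hint: ocamlfind ocamlopt -package threads.posix -linkpkg agent.ml -o agent *)

    (* Hypothetical mailbox: senders enqueue and return immediately
       (asynchronous); the receiver blocks until something arrives. *)
    type 'a mailbox = { q : 'a Queue.t; m : Mutex.t; c : Condition.t }

    let new_mailbox () =
      { q = Queue.create (); m = Mutex.create (); c = Condition.create () }

    let send box msg =
      Mutex.lock box.m;
      Queue.push msg box.q;
      Condition.signal box.c;
      Mutex.unlock box.m

    let receive box =
      Mutex.lock box.m;
      while Queue.is_empty box.q do Condition.wait box.c box.m done;
      let msg = Queue.pop box.q in
      Mutex.unlock box.m;
      msg

    (* A reactive counter agent: it keeps its own state and changes it
       only in response to messages - it never acts on its own. *)
    type msg = Incr | Get of int mailbox | Stop

    let rec counter inbox state =
      match receive inbox with
      | Incr      -> counter inbox (state + 1)
      | Get reply -> send reply state; counter inbox state
      | Stop      -> ()

    let () =
      let inbox = new_mailbox () and reply = new_mailbox () in
      let agent = Thread.create (fun () -> counter inbox 0) () in
      send inbox Incr; send inbox Incr;
      send inbox (Get reply);
      Printf.printf "count = %d\n" (receive reply);   (* count = 2 *)
      send inbox Stop;
      Thread.join agent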

Simple AI will never be able to “learn” multi-agent systems. It is a different level of “knowledge” and evolution (of an “organism” or a “society”).

At the highest level, the “knowledge” of how to act is preserved and “evolved” in what we call a common, shared culture, and it is mostly in a symbolically distilled form (capturing the semantics of What Is).

Will AI discover any new fundamental basic forms? Certainly not. They have already been discovered by evolution.

What, then, will AI “write”? Well, there is a good metaphor - the fundamental cultural difference between Indian and Japanese cooking. Indian cooks basically throw everything into a pot or a frying pan and heat it up, while Japanese cooks try to carefully select a perfectly matching set of ingredients, temperatures, durations and so on.

AI will write Indian-style implementations, just like the representations it currently produces. Change a single node and everything gets completely distorted, which reflects the fact that the representation is not even remotely optimal. On the contrary, it is sub-optimal and grossly redundant (without the information-protecting property) by definition.

The fundamental principle is that AI (trial-and-error on steroids) is incapable of producing a complex layered structure, due to various fundamental constraints.

Essentially, this is due to the fact that a depth-first search process, once it has gone down a wrong “branch”, will never return, and keeps producing vastly sub-optimal crap which eventually will “work”.

A process analogous to breadth-first search requires more than just feedback and backprop. It requires an adequate representation or “map” of reality - just what the smartest people extracted from observing molecular biology, evolution and life itself.
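For concreteness, here is a hedged OCaml sketch, on a toy tree of my own, of the contrast being used as a metaphor here: depth-first search dives into one branch and only backtracks when forced to, while breadth-first search keeps the whole frontier of alternatives alive at every step.

    (* A toy tree; labels and the goal test are illustrative only. *)
    type tree = Node of string * tree list

    (* Depth-first: exhaust the first child before considering its siblings. *)
    let rec dfs goal (Node (label, children)) =
      if label = goal then Some label
      else
        List.fold_left
          (fun found child ->
             match found with Some _ -> found | None -> dfs goal child)
          None children

    (* Breadth-first: visit every node at one depth before going deeper. *)
    let bfs goal root =
      let frontier = Queue.create () in
      Queue.push root frontier;
      let rec loop () =
        if Queue.is_empty frontier then None
        else
          let (Node (label, children)) = Queue.pop frontier in
          if label = goal then Some label
          else begin
            List.iter (fun c -> Queue.push c frontier) children;
            loop ()
          end
      in
      loop ()

    let () =
      let t =
        Node ("start",
              [ Node ("wrong branch", [ Node ("dead end", []) ]);
                Node ("goal", []) ])
      in
      (match dfs "goal" t with Some _ -> print_endline "dfs: found" | None -> print_endline "dfs: not found");
      (match bfs "goal" t with Some _ -> print_endline "bfs: found" | None -> print_endline "bfs: not found")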

No AI is capable of extracting semantic knowledge, similar to arithmetic or plane geometry, and without this capacity it will write utter crap.

By the way, it is absolutely astonishing what the LISP/ML family of languages, plus Erlang, has been able to generalize and abstract out. Very few people understand this.

So, do not listen to bullshitters, no matter how clever and fluent they sound. Japanese-style cooking - that is, proper semantic knowledge - will always win.

On the other hand, a society of communicating and competing intelligent agents, each maintaining and refining its own representation of the shared environment, never losing it after a malfunction, and perhaps broadcasting parts of it to all the other agents, will, at least in theory, come up with a structural shared “knowledge” - conceptually similar to Wikipedia, but at a level “deeper” than plain text.

Author: <schiptsov@gmail.com>

Email: lngnmn2@yahoo.com

Created: 2023-08-08 Tue 18:39

Emacs 29.1.50 (Org mode 9.7-pre)