
Vedanta

People who have studied logic, the mathematics of sets, and function composition have noticed certain commonalities, generalized them, and later discovered that their generalized abstractions are instances of the same “patterns”.

The “things” that come up again and again (noticed by the mind of an external observer) are steps (“arrows”), environments (“being present in the same locality”) and contexts (local frames, “packets” of the environment).

They have also noticed that there are “forks” and “joins” everywhere, and even though one looks like the other rotated, they are fundamentally different things. Forks exist only as “hypothetical possibilities”, “alternative paths” or “possible choices”, while a process (the actual path being taken) is always “linear” (without any forks, in principle).

Forks are tricky. When you ask yourself what determines which path will be taken in a purely logical (or mathematical) declarative description of a process, the answer is always the same: it depends on the current context, which is being passed along as an implicit parameter (within a bigger “closure”).
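As an illustration, a minimal Haskell sketch (all names here are made up for the example): the declarative description below mentions both alternatives of a fork, yet the context, shown as an explicit stand-in for the implicit parameter, selects exactly one, so the process actually taken is linear.

```haskell
-- A "fork" exists only in the description; the context selects the path.
-- Context, growFast and growSlow are hypothetical names.
data Context = Context { temperature :: Int }

-- Two alternative "arrows" out of the same point.
growFast, growSlow :: Int -> Int
growFast = (* 2)
growSlow = (+ 1)

-- The description holds both alternatives, but the context picks one,
-- so the resulting process is always linear.
step :: Context -> Int -> Int
step ctx = if temperature ctx > 20 then growFast else growSlow
```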

What is a closure? Well, it is a generalization of a snapshot of you and your locality (the relevant part of the global environment). The old-timers who designed Scheme and the early Lisps used the right terminology. What you can or cannot do depends on your locality (your local environment) and your internal state (your inner environment, or your current context).
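A small Haskell sketch of this idea (hypothetical names; Scheme would express the same with a lambda over a local binding): the returned action closes over its locality, the mutable cell `ref`, which becomes its private inner environment.

```haskell
import Data.IORef

-- makeCounter's local binding `ref` is the "locality"; the returned
-- action is a closure that snapshots (captures) it, so the state
-- survives between calls yet is invisible from the outside.
makeCounter :: IO (IO Int)
makeCounter = do
  ref <- newIORef 0
  pure $ do
    modifyIORef ref (+ 1)
    readIORef ref
```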

It somehow “naturally follows” that environments (bindings) and the required local contexts (frames), just like closures, have to be maintained by the runtime, to provide a stable “planet” for processes. The Scheme and Erlang guys got it right and, efficiency aside, nested lists of pairs were enough for everything.
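A sketch of such an environment in Haskell, assuming the classic SICP-style representation (toy types, made up for the example): each frame is an association list of bindings, an environment is a list of frames, innermost first, and lookup walks outward through the enclosing frames.

```haskell
-- Nested lists of pairs, Scheme-style.
type Frame = [(String, Int)]   -- one "packet" of local bindings
type Env   = [Frame]           -- innermost frame first

-- Lookup searches the local frame, then the enclosing ones.
lookupVar :: String -> Env -> Maybe Int
lookupVar _ [] = Nothing
lookupVar name (frame : outer) =
  case lookup name frame of
    Just v  -> Just v
    Nothing -> lookupVar name outer

-- Entering a new scope pushes a frame; inner bindings shadow outer ones.
extend :: Frame -> Env -> Env
extend = (:)
```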

These universal notions connect logic, math and functional programming back to reality, to What Is, from which they arose and came to be.

Never try to understand things backwards - Reality comes First, not the Mind. The mind observes common patterns, abstracts them out and generalizes, not the other way around, as the Platonist idiots would be shouting at you. Knowing just that will take your programming (and your math and logic) to the next level.

There is another principle to understand. Mother Nature has evolved (by the process of trial and error) some intermediate stable forms and patterns upon which everything that Is rests, including Life Itself.

One such pattern is that information is encoded in some supposedly (and hopefully) immutable storage (actually immutable from Life’s point of view), and this information is interpreted and run by a “runtime environment”, which is what a single cell is.

To be precise, the cell is the “language runtime”, and the universe is the “computer”.

What connects these universal notions - classic logic and a living cell - is that the underlying patterns are not just similar, they are exactly the same. There are environments, contexts and sequential processes.

The “forks” are encoded in the DNA, but only one alternative is selected (when this goes wrong, the result is what we call cancer).

To put it simply, locality is a physical notion, while alternatives (forks) are information. Yes, information (about the environment) is “real”, and the fundamental principle is that it must match the actual environment “perfectly”.

Information is also a blueprint (or a template) for an actual process, and this is, again, the right terminology from the famous MIT guys. This brings us to the conclusion that an implementation of a pure, declarative language (such as Haskell) by a graph-reduction runtime (which is the evaluation of a pure, declarative, well-typed abstract state-machine by an impure, imperative electrical machine) is the right thing to do. This is how a cell evaluates its DNA (well, mRNA).
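The blueprint/runtime split can be sketched in a few lines of Haskell (a toy interpreter, not a real graph reducer): an immutable data structure describes the process, and a separate evaluator, playing the role of the “cell”, runs it.

```haskell
-- The blueprint: an immutable, declarative description of a computation.
data Expr = Lit Int | Add Expr Expr | Mul Expr Expr

-- The "runtime environment" interprets the blueprint; the description
-- itself is never changed by being run, just as DNA is not changed
-- by being expressed.
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b
```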

Notice that unnecessary, redundant abstractions (which are the root of all evil) do not “survive” in an evolutionary setting, being eventually replaced by “better” ones (closer to What Is). Also notice that the so-called “technical debt” is very real: it is literally building on top of “crappy structures” (which eventually cannot be undone or replaced by the process of Evolution).

Fortunately, good programmers can change the core types early and quickly, as long as we maintain proper stable interfaces (abstraction barriers). This is also a universal pattern from biology, and this particular one makes evolution (of stable intermediate forms) faster.

Of course, this subject is vast. There are the fundamental, universal notions of nesting and of abstraction barriers (they are real, as real as the atoms from which DNA is built), and there are enzymes, some of which are operationally pure functions (context-independent) parameterized by ions, while others are procedures (which depend upon the current context - the presence or absence of certain “things”).

Some enzymes are “blocking” (until a required ion is present), some are “asynchronous” (they just emit structures), but the universal principle is that an environment, local contexts and sequential processes are all there.

So, when we look at things like Haskell or Scala 3, we should try to zoom through the layers of abstraction and to see Reality through the veil of abstract mental constructions with which the Mind pollutes and deludes itself.

Zoom in and out, see (and break) through. This is the right understanding.

It is, of course, not possible to put everything into a single text file, but it is possible to sketch a big picture and point to some of the right principles, from which everything follows.

Everything that is not bullshit is reducible back to What Is and its universal patterns. In a symbolic form it will be reduced to some shapes of dots and arrows, nested circles, common forms and structures. Your Functors become an abstract generalization of shape-preserving transformations, your Kleisli arrows become abstraction-boundary-crossing transformations, and Kleisli triples become an abstraction of a “fork” (two possibilities), etc.
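In Haskell this is directly visible (a small sketch; `halve` is a made-up Kleisli arrow): `fmap` transforms the contents while preserving the shape of the structure, and `(>=>)` composes boundary-crossing arrows `a -> m b` without ever leaving the abstraction.

```haskell
import Control.Monad ((>=>))

-- A Kleisli arrow: crosses the boundary from plain Int into Maybe,
-- which itself encodes a "fork" (success or failure).
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

-- Kleisli composition stays behind the Maybe abstraction barrier.
quarter :: Int -> Maybe Int
quarter = halve >=> halve
```

For the shape-preserving half of the picture: `fmap (+ 1) [1, 2, 3]` yields a list of the same length and shape; only the contents change.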

When you nest types you nest subsets and “decorate with additional tags”, which corresponds to specializing - moving through the layers of abstraction from general to specific. Generic code captures actual commonalities (a common structure), and duck-typing (Haskell type-classes) is the way to capture common shapes and common behaviors (behaviours in Erlang are just generic modules).
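A sketch of both moves in Haskell (all names hypothetical): a `newtype` nests a subset by decorating it with a tag, and a type-class captures a common shape of behavior across otherwise unrelated types.

```haskell
-- Tagging: the same underlying set of doubles, made more specific.
newtype Meters  = Meters Double
newtype Seconds = Seconds Double

speed :: Meters -> Seconds -> Double
speed (Meters m) (Seconds s) = m / s

-- A type-class captures a common "shape" of behavior (duck-typing,
-- but checked statically), much like an Erlang behaviour for modules.
class Sized a where
  size :: a -> Int

instance Sized [b]  where size = length
instance Sized Bool where size _ = 1
```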

Everything is reducible, with some effort, to just a few basic pictures of dots, arrows and nested circles - the universal basic building blocks for a declarative state-machine which can be executed by a universal runtime. It is not just math or logic (executed using a finite set of rules by the runtime of a brain). It is more general (and fundamental) than that.

About that concurrency stuff: the fundamental pattern is that “an event” has to arrive through a different “arrow”, so there is an additional incoming arrow, and what has a distinctly different shape belongs to a distinctly different type. Not surprisingly, async functions belong to their own type-class and even their own distinct Monad. These things are structurally different and “naturally” can be composed only with their own kind, behind an abstraction barrier.
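A minimal Haskell illustration (hypothetical step names; real asynchronous code would use a library such as `async`, but plain `IO` already shows the typing discipline): effectful arrows `a -> IO b` have a different shape from pure arrows `a -> b`, and they compose only with their own kind.

```haskell
import Control.Monad ((>=>))

-- Effectful "arrows" live in IO; here the effects are stubbed out
-- with `pure`, standing in for real input/output.
readNumber :: String -> IO Int
readNumber = pure . length

report :: Int -> IO String
report n = pure ("got " ++ show n)

-- IO composes with IO via Kleisli composition, behind the monadic
-- abstraction barrier; mixing these with plain (.) is a type error.
pipeline :: String -> IO String
pipeline = readNumber >=> report
```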

Last but not least, everything is structurally encoded - DNA, the language centers of the brain, any AI model - but the basic building blocks are the same.

This is how zooming and seeing works in practice.

Author: <schiptsov@gmail.com>

Email: lngnmn2@yahoo.com

Created: 2023-08-08 Tue 18:39

Emacs 29.1.50 (Org mode 9.7-pre)