Stroustrup

Fractals of bullshit.

Recently we have been blessed with a new piece of Holy Gospel from the Creator:

https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p2739r0.pdf

Ignoring fundamental findings (different from merely theoretical ones, which means imaginary only) is a sign of insanity or even idiocy.

We know that all of mathematics “works” as it does only due to the (necessary and sufficient) property of referential transparency: whatever has been defined (and bound) stays the same, forever - including intermediate results, every single one of which has to be proven correct and remain immutable.

Do not have this property - do not have mathematics, with its unambiguity and correctness.
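
A minimal Haskell sketch of the property (the names here are merely illustrative): once x is bound it stays bound, so substituting equals for equals is always valid.

  -- Referential transparency: a binding stays the same forever,
  -- so every occurrence of x can be replaced by its definition.
  square :: Int -> Int
  square n = n * n

  result :: Int
  result =
    let x = square 3   -- bound once, immutable forever
    in  x + x          -- same as: square 3 + square 3 = 9 + 9 = 18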

Joe Armstrong came to the same conclusion at a different level - mutation and concurrency (arbitrary access) do not mix - which, it turns out, is just an implication of the necessity above.

Even biological systems are “built on the assumption” that DNA and mRNA structures are “stable”, which technically means immutable.

Explicit, non-leaking abstraction barriers, “in-wire” values, pure functions and stateless interfaces were the building blocks of Erlang. All this works in hardware (and in biology, which is hardware).

How can we use C++, which tries to formalize arbitrary assignments to arbitrary memory locations, so that lowly-paid people without a classical education and without an understanding of the fundamental principles and idioms can program fast (even agile), without bugs and without the need for constant maintenance? I will leave this as an exercise for the reader.

Both the Erlang and the Haskell guys came up with the principle of separating pure and impure code using sets of interfaces (whole modules). This is, of course, not a coincidence.
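
A minimal sketch of that separation in Haskell (the module and function names are hypothetical): one whole module is the pure interface, and a thin impure shell does the I/O and calls into it.

  -- Pure.hs: a whole module as the pure interface; no IO anywhere.
  module Pure (priceWithTax) where

  priceWithTax :: Double -> Double
  priceWithTax p = p * 1.2

  -- Shell.hs: the impure side, quarantined at the boundary.
  module Shell (main) where

  import Pure (priceWithTax)

  main :: IO ()
  main = do
    line <- getLine                   -- impure: input
    print (priceWithTax (read line))  -- pure core, impure output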

In mathematics we capture (abstract out) recurring patterns, while abstracting away all seemingly irrelevant details of “physics” (locality, distances, relative motion, etc.).

We form closures, which are the means of “capturing” (the early LISP guys, being influenced by both Lambda Calculus and the discovery of the structure of DNA, knew the importance of closures).
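
A small Haskell illustration of such capturing (the names are illustrative): the closure returned by linear captures a and b, and the captured environment is immutable from then on.

  -- A closure captures its environment; what is captured never changes.
  linear :: Double -> Double -> (Double -> Double)
  linear a b = \x -> a * x + b   -- a and b are captured here, for good

  f :: Double -> Double
  f = linear 2 3                 -- f is now the immutable fact "2x + 3"
  -- f 10 == 23, today and forever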

With the set-theoretic formalism (plus some parts of category theory) and logic - both of which are formalizations of different aspects of reasoning by the mind of an External Observer: classification, and the underlying rules or “laws” - we can analyze the captured patterns (closures).

And this is what mathematics is all about - we capture, define and then analyze.

It all “works” because everything captured stays the same (closures and the resulting derivations are immutable). Losing this invariant would destroy mathematics and logic (in logic, previously established true assertions are immutable).

Going all the way down to molecular biology - the building blocks had better be immutable (or at least stable enough to provide all the stable intermediate forms).

Mathematics is based on this very principle - immutability of what has been captured and of the previously proven results.

This is where the second-order principle of immutable data structures comes from. Once whatever you have already captured changes behind your back, you are fucked. Change of what has been captured is the same as its destruction.
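
A sketch with Data.Map from the standard containers package: “updating” a persistent structure yields a new value, and nothing you have already captured ever changes behind your back.

  import qualified Data.Map as Map

  old :: Map.Map String Int
  old = Map.fromList [("a", 1), ("b", 2)]

  new :: Map.Map String Int
  new = Map.insert "a" 42 old   -- a new map; old is untouched

  -- Map.lookup "a" old == Just 1    (still, and forever)
  -- Map.lookup "a" new == Just 42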

Now what are interfaces? Well, an interface, aside from being a generalization of a cell membrane, is the abstraction barrier of a closure, which separates it from the rest of the conceptual and linguistic bullshit.

We literally bring a closure into the realm of pure, rigorous, formal thoughts and understanding - into the environment of an evaluation engine (the MIT Scheme guys again).
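
A minimal sketch of such a barrier in Haskell (the type and the functions are hypothetical): the export list is the membrane - the type is visible, its representation is not.

  module Temperature (Celsius, mkCelsius, degrees) where

  newtype Celsius = MkCelsius Double   -- the representation stays inside

  mkCelsius :: Double -> Maybe Celsius -- the only way in
  mkCelsius d
    | d >= -273.15 = Just (MkCelsius d)
    | otherwise    = Nothing

  degrees :: Celsius -> Double         -- the only way out
  degrees (MkCelsius d) = d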

There is a big problem, however - if we capture bullshit, the outcome will be bullshit.

  • If we miss a single relevant factor, we have captured bullshit.
  • If we use an imaginary entity as if it were real, we are fucked.
  • If a relation does not exist, everything that uses it yields bullshit.

And so on. The early logic guys figured it all out long ago - the results must be “just right”.

The early FP people came up with many fundamental results, including the principle of having at most one mutable reference and passing it along. Such restrictions preserve referential transparency and all the properties that follow from it.
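
A minimal sketch of that principle as explicit state-passing (the names are illustrative): the single “reference” is threaded linearly through each step and never aliased, so every expression stays referentially transparent.

  -- Each step consumes one state and returns the next; no aliasing.
  counterDemo :: (Int, Int, Int)
  counterDemo =
    let (a, s1) = tick 0    -- the initial state is used exactly once
        (b, s2) = tick s1   -- s1 is passed along and never reused
        (c, _ ) = tick s2
    in  (a, b, c)           -- (1, 2, 3)
    where
      tick :: Int -> (Int, Int)
      tick s = (s + 1, s + 1)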

So, the partial answer is to use the right “patterns” and idioms, not just arbitrary imperative destructive assignments of everything to everything. Restrictions and partitioning (and communication) are universal and necessary.

The Haskell guys do it in practice (at the cost of explicitness and over-verbosity). They also enforce (at the type level) the actual abstraction barriers, which keep their code partitioned and as pure as math or logic.

How about the impure computer? Well, the main function of any Haskell program is a pure declarative expression - a declaration of a type-safe abstract state machine to be executed by an impure runtime on a dirty imperative computer (just as our pure math and logic are evaluated by the dirty biological meat of the brain).
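
A small illustration: main below is an ordinary pure value of type IO () - a description of effects, assembled by pure means - and only the runtime ever “runs” it.

  module Main where

  greet :: String -> IO ()   -- builds a description, performs nothing
  greet name = putStrLn ("hello, " ++ name)

  program :: IO ()           -- a pure value; nothing happens here
  program = greet "world" >> greet "again"

  main :: IO ()
  main = program             -- the impure runtime executes this value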

Back to Stroustrup. Can arbitrary assignments (to mutable locations) over time by more than one “thread” be formalized correctly? No. They cannot, in principle (see Joe Armstrong).

Can arbitrary nested arrangements of pointers (which are themselves mutable and live in mutable memory locations) be stable intermediate building blocks? No. Not in principle.

This is what all the copying and move-semantics bullshit is about. How can we formalize arbitrary mutations and movements of numbers relative to an address space? You just cannot.

How does everything work inside a computer, then? Just like in physics - only for as long as nothing has really happened to a temporary arrangement.

So, what are the answers? Well, use a restricted, pure subset of any language until you cannot. Then partition the code, completely isolate the crap (using interfaces), and make the whole thing referentially transparent again - create a new cell, loosely speaking, and communicate with it asynchronously, using pure messages.
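
A minimal sketch of such a “cell” in Haskell, using only base (Control.Concurrent): the impure worker is isolated behind a channel, and we talk to it asynchronously with pure, immutable messages.

  import Control.Concurrent (forkIO)
  import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)

  data Msg = Compute Int (Chan Int)   -- a pure, immutable message

  worker :: Chan Msg -> IO ()         -- the isolated "cell"
  worker inbox = do
    Compute n reply <- readChan inbox
    writeChan reply (n * n)           -- all effects stay inside the cell
    worker inbox

  main :: IO ()
  main = do
    inbox <- newChan
    _ <- forkIO (worker inbox)
    reply <- newChan
    writeChan inbox (Compute 7 reply) -- asynchronous send
    readChan reply >>= print          -- 49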

These are not just Haskell idioms; these are the universal principles (upon which all of the actual “molecular biology” has evolved).

The bullshit around C++ is based on the premise that a “universal general-purpose imperative language” could exist, let alone be formally defined.

No, imperative is a no-go, and this is not just a fancy theoretical finding, this is a universal principle.

Lots of people come to the same conclusions from different directions, because their observations and conclusions just converge on What Is. The intermediate forms must indeed be stable; otherwise we have only illusions and hand-waving.

This is what Stroustrup does - he insists on a fundamentally wrong thesis by waving his hands before idiots.

Author: <schiptsov@gmail.com>

Email: lngnmn2@yahoo.com

Created: 2023-08-08 Tue 18:39

Emacs 29.1.50 (Org mode 9.7-pre)