ZIO
This project https://zio.dev/ turns out to be a representative example of what is going on in modern software development.
The complementary set of videos, “ZIO from Scratch” (can be found at https://www.youtube.com/@Ziverge), is especially nice, and illustrates that clearly articulating the principles (the reasons) behind every single design or implementation decision leads to much better code.
Recently https://www.youtube.com/@AndrejKarpathy produced a similar set of “from first principles” videos, which is a good thing.
So, what is going on there? Well, a lot at different levels simultaneously.
Turtles all the way down
Let's begin from seemingly very far away. A human language evolved to communicate information about things in our shared environment. It naturally (universally) evolved to have different kinds (not classes) of words to refer to different aspects of what happens in that shared environment - “things”, “attributes” and “actions” or “ongoing processes” - which, of course, correspond to nouns, adjectives and verbs. Again, this behavior emerged naturally because the environment has its own constraints and “laws”.
At a higher level of a human language model there are “standard idioms” - distinct patterns people learn to use for recurring observations or in similar contexts.
The fundamental principle is that proper language use stays close to reality and is technically just encoding and decoding. At least it should be this way.
Conceptually, very similar things happen in the world of programming language semantics - generalized notions (for commonly observed patterns) and corresponding standard idioms emerge, and we see the same things over and over again at different levels of abstraction.
When we try to at least enumerate the “core” abstractions, we ultimately end up with “arrows” and “dots” and a few common “shapes” or “forms” made out of these.
At the very essence, we can not only enumerate and count all the possible arrows between the dots in a particular arrangement, but even observe that these are all the possible arrows out there.
These notions lead to further generalized ones, such as that of a “path” (connected arrows) and, at the level of paths, the notion of reaching the same “destination”, etc.
Once classified and then rigorously defined, these notions got their own names, and their common properties got their own distinct names too. This is how we got things like Functor, or commutativity, or the notion that a diagram commutes.
At the level of programming languages, both semantics and implementation, we have discovered similar generalized patterns too.
Arguably, the most fundamental one is what we call “dispatch” or “branching”, to which the various kinds of structured pattern-matching expressions correspond.
Your CPU dispatches on the current machine instruction, the JVM dispatches on JVM byte-codes, and a LISP interpreter dispatches on the structure of particular special forms.
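A minimal sketch of this dispatch pattern in Scala 3 (the names Expr and eval are illustrative, not from ZIO): the eval loop pattern-matches on the shape of the expression, exactly like a CPU dispatching on an opcode.

```scala
// "Dispatch" as an interpreter: one match expression drives everything.
enum Expr:
  case Num(n: Int)
  case Add(l: Expr, r: Expr)
  case Mul(l: Expr, r: Expr)

def eval(e: Expr): Int = e match
  case Expr.Num(n)    => n                  // a leaf "instruction"
  case Expr.Add(l, r) => eval(l) + eval(r)  // dispatch on structure
  case Expr.Mul(l, r) => eval(l) * eval(r)
```

For example, `eval(Add(Num(1), Mul(Num(2), Num(3))))` reduces to 7, purely mechanically.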
The generalized process is, of course, called an interpreter, and this is a Universal Notion (this is why MIT professors wear funny hats in those videos).
Going back to the “core” notions and patterns, it turns out that being too general (like the untyped Lambda Calculus) is “bad” (it leads to paradoxes), and being too strict (like machine code) is also bad - both are extremes to be avoided.
By trial and error we (the programming language community) have discovered that type-systems (which partition the universe of values, check constraints and provide some guarantees) are a must-have, and have even evolved what seems to be a “just right” type system (Scala 3 and Haskell are at the cutting edge of this).
Everything is good and true as long as we deal with pure mathematical expressions that never fail (and abstract syntax trees made out of those). But there are other “things” that might happen within a computation.
So eventually we extend our basic universal interpreters with the notions of “callbacks” and “continuations” (which are basically closures that also capture parts of the context) and “fibers” (which are essentially closures too, but with fundamentally different run-time properties - they run differently). We also have “errors” (for which the right notions are “termination”, “restarts” or “back-tracking”).
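In the spirit of the “ZIO from Scratch” videos, these extensions can be sketched as extra “instructions” in a hypothetical mini-IO (the names IO, Succeed, FlatMap, Async below are illustrative, not ZIO's actual internals, and the interpreter is deliberately naive - recursive and not stack-safe):

```scala
// A toy effect type: pure values, sequencing, and an async "instruction".
enum IO[A]:
  case Succeed(thunk: () => A)
  case FlatMap[A0, B0](io: IO[A0], k: A0 => IO[B0]) extends IO[B0]
  case Async(register: (A => Unit) => Unit) // hands the continuation to a callback

  def flatMap[B](k: A => IO[B]): IO[B] = IO.FlatMap(this, k)

// A naive interpreter: dispatch on the instruction, thread the continuation.
def run[A](io: IO[A])(done: A => Unit): Unit = io match
  case IO.Succeed(thunk)  => done(thunk())
  case IO.FlatMap(fa, k)  => run(fa)(x => run(k(x))(done))
  case IO.Async(register) => register(done) // the callback resumes the program
```

The point is that “call me back later” becomes just another case the interpreter knows how to dispatch on; the user-facing program stays declarative.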
Having all this, we still want to retain the fundamental properties of having pure declarative languages (or at least pure subsets) and the referential transparency property, which is necessary and sufficient to implement interpreters of such pure declarative languages.
If everything is done just right, we get some emergent properties of the whole system, which we also call “guarantees” or “safeties” - “type-safety”, “stack-safety”, “concurrency-safety” and even “resource-safety”. All this just means that everything has been defined declaratively and will be interpreted “mechanically” by a machine - some simple stack-based virtual machine, which is the emergent architecture for a generalized interpreter.
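The “simple stack-based virtual machine” idea can be sketched as follows (illustrative names, and a real runtime like ZIO's adds far more - errors, interruption, fibers): instead of recursing on nested FlatMaps, the run loop keeps an explicit stack of continuations, which is exactly where stack-safety comes from.

```scala
import scala.collection.mutable

enum Task[A]:
  case Succeed(thunk: () => A)
  case FlatMap[A0, B0](task: Task[A0], k: A0 => Task[B0]) extends Task[B0]

  def flatMap[B](k: A => Task[B]): Task[B] = Task.FlatMap(this, k)

// The run loop: an explicit stack of continuations instead of host recursion.
def runLoop[A](task: Task[A]): A =
  var current: Task[Any] = task.asInstanceOf[Task[Any]]
  val stack = mutable.Stack.empty[Any => Task[Any]]
  while true do
    current match
      case Task.Succeed(thunk) =>
        val value = thunk()
        if stack.isEmpty then return value.asInstanceOf[A]
        current = stack.pop()(value) // resume the next continuation
      case Task.FlatMap(inner, k) =>
        stack.push(k.asInstanceOf[Any => Task[Any]]) // defer k, descend into inner
        current = inner.asInstanceOf[Task[Any]]
  throw new IllegalStateException("unreachable")

// Usage: a deeply nested chain that would overflow a naive recursive interpreter.
def sumTo(n: Int): Task[Int] =
  if n <= 0 then Task.Succeed(() => 0)
  else Task.Succeed(() => n).flatMap(x =>
    sumTo(n - 1).flatMap(rest => Task.Succeed(() => x + rest)))
```

The casts are confined to the machine itself; the user of the DSL never sees them.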
All these “nice things” (a pure declarative language and its interpreter) can be packaged as an advanced high-level library for a modern, principle-based language with an advanced type-system (which, of course, contains all these things within itself).
The benefit is that we use this advanced host-language with its most advanced type-system to implement our simple but done-just-right declarative language. The type-system then guarantees the soundness and “safety” of our simple language, in which we can program our stuff.
All these nested interpreters (through the JVM down to the CPU), at least in theory, just reduce the level of detail at each corresponding level of abstraction while - and this is the point - retaining the high-level fundamental properties and the resulting “soundness” and “safety”, at the cost of enormous, catastrophic redundancy.
And this is what the ZIO project aims to achieve. It packages a simple declarative DSL and its async-aware interpreter as an advanced Scala library, delegating the actual implementation of all the really scary shit (low-level concurrency primitives, a less buggy async runtime) to the JVM, in the hope that there will be some other people to blame for the inevitable failures (the JVM itself is imperative OO C++98 crap - subject to all the problems we are trying to avoid with our simple declarative DSLs).
Whether or not ZIO does the theoretically-optimal things (uses proper Monads, does not over-abstract with bullshit, etc.) is the subject of another long rant, but it is crucially important to realize that there ARE such optimal abstractions - the criterion is that they closely and accurately capture (completely and without redundancy) the relevant aspects of the world (sequences, forks and joins are examples of such aspects).
Higher-level abstractions like Events, Streams and Channels are “natural” and essential, so it is good when they have their own corresponding pure, declarative DSLs.
Shall we use ZIO2? Well, it grossly simplifies the thinking, adds some “mathematical rigor” and provides the advertised “guarantees” at the cost of running tons of imperative crap. So, definitely YES.
Do not abstract for the sake of abstraction
Unnecessary, redundant abstractions are the root of all evil. Unlike ZIO2, which packages an interpreter of a pure declarative language (a mini-Haskell, if you wish), things like ScalaZ or Cats just provide sets of common abstractions (by defining and implementing standard type-classes), which by themselves may be unnecessary and redundant.
The point of emergent standard patterns (which, by the way, are always reducible to the fundamental arrangements of “dots and arrows”) is to recognize them (in every sense of the word), not just to make them.
Again, everything is reducible to just a few special forms with corresponding visual and structural patterns, of which dispatch (structural pattern-matching) is the most fundamental.
Partitioning (lambdas), Nesting (scoping), Sequencing (paths of arrows) and Forking/Joining are the other universals. The old Lisp guys figured it all out back then, including “fibers”, “flavors” and metacircular interpreters.
There is nothing in ZIO that was not in SICP (well, except the parameterized type-classes and higher-kinded types, of course).
Look, ma, ZIO!
Where does the capturing of common and even universal patterns end, and where do over-abstraction and unnecessary wrapping begin?
Well, there is a theoretical result that everything can be reduced to (or structured in terms of) Algebraic Data Types (sums, products and functions).
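In Scala 3 the three algebraic building blocks look like this (a toy example, not from any library):

```scala
// Sum type: a Shape is a Circle OR a Rect.
enum Shape:
  case Circle(r: Double)
  case Rect(w: Double, h: Double)

// Product type: a Point is an x AND a y.
final case class Point(x: Double, y: Double)

// Function type: Shape => Double, defined by dispatch on the sum's cases.
val area: Shape => Double = {
  case Shape.Circle(r)  => math.Pi * r * r
  case Shape.Rect(w, h) => w * h
}
```

Everything else in a pure DSL - including the effect types - is built by composing these three.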
Special concurrency types (“instructions”) are added on top of the pure logic via explicit support in the interpreter (a stack-based virtual machine). As long as the referential transparency property is ensured, we do not care how they are implemented - we just use them declaratively: “and then (eventually) do this”.
So, the real question actually is: how much of a declarative pure-functional DSL, extended with async primitives and advanced ADTs like Fibers, Streams, etc., do we need?
Another question is: which Monads exactly? IO, of course, and…?
The point is that we do not want to re-create the crappy parts of Haskell libraries, caused by everything being forced to be implemented as pure functions. In Scala 3 we can use imperative features behind abstraction barriers, ensured by the type system, and this is the Whole Point and the “innovation” of using Scala instead of Haskell - having a slightly nicer host language, so that we do not over-abstract the implementations.
The essence, however, is to establish and use the type-system to guarantee the soundness of the abstraction barriers that separate a pure, declarative DSL from its implementation and from other Scala code, imperative or not.
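A tiny sketch of such a barrier (a hypothetical helper, not a ZIO API): the implementation mutates freely, but callers only ever see a referentially transparent function.

```scala
// A referentially transparent API over an imperative core: callers cannot
// observe the local mutation, so the function behaves like a pure expression.
def tabulate(n: Int)(f: Int => Int): List[Int] =
  val buf = new Array[Int](n) // local, mutable, never escapes
  var i = 0
  while i < n do
    buf(i) = f(i)
    i += 1
  buf.toList // only the immutable result crosses the barrier
```

So `tabulate(4)(_ * 2)` always equals `List(0, 2, 4, 6)` - the mutation is an implementation detail sealed off by the types.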
How much of it do we need? Well, something similar to the Intermediate Representation which GHC uses - just a bunch of typed thunks sequenced into a declarative description of a Finite State Machine, which is what it should be - a pure expression (and which is what the output of Haskell's main function is).
The whole point of a lazy declarative language is to produce such pure expressions (FSMs), which are technically identical to the expressions of math and logic.
Everything is defined in terms of the interpreter (the classic definitional interpreters idea), and the soundness of the interpreter has to be guaranteed by the host language (Scala).