Haskell
We do not have to do a good third of what Dijkstra did by hand with his imperative code - the compiler will guarantee the absence of simple bugs (which no tests could do). Although no compiler could ever prove correctness.
We also do not have to consider the imperative aspects of a hardware-based model. We just do pure calculus in Haskell.
A big step ahead from Landin and Dijkstra.
Why
Because it is - surprise! - the most pragmatic language for high-level domains modeled using mathematics.
The really big idea, which has been popularized by Leslie Lamport, is that one has to model (specify) aspects of one’s domain in pure math, which can then be mechanically checked for correctness (the absence of contradictions).
We need just sets (the notion of “such that”) and first-order logic from math, discarding the assumption that False implies anything.
We also need variable bindings, conditional expressions, and function abstraction, of course. This is, basically, what the TLA+ syntax is.
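As a minimal sketch (my illustration, not from the TLA+ material), the “such that” notion maps directly onto a Haskell list comprehension:

    -- The set {x in 0..20 | x mod 3 == 0}; the guard after the bar
    -- is the "such that" clause.
    multiplesOfThree :: [Int]
    multiplesOfThree = [x | x <- [0 .. 20], x `mod` 3 == 0]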
However, math is untyped or, at best, weakly typed, using sets or abstract structures such as fields, rings, and so on; in other words, it is structurally typed.
One could use a specialized language with a built-in proof-checker, such as Coq or Agda, and sometimes that may be reasonable, but then one has to rewrite the solution in an “implementation language”.
Haskell is a pragmatic choice because once the solution compiles and runs it can be used in production, but this is not the main point. The crucial thing is that the code is pure (as pure mathematics) and can be evaluated with pen and paper using the substitution model, just like math.
Equational reasoning enables one to write simple (but actual, valid) proofs about functions on domain-driven types and even synthesize some code from proofs.
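A minimal sketch of such a proof, using a local copy of the Prelude’s length so every equation is visible:

    len :: [a] -> Int
    len []       = 0
    len (_ : xs) = 1 + len xs

    -- Claim: len (xs ++ ys) == len xs + len ys.
    -- Proof by induction on xs; every step is a plain substitution:
    --   len ([] ++ ys)     = len ys                 -- def of (++)
    --                      = 0 + len ys             -- arithmetic
    --                      = len [] + len ys        -- def of len
    --   len ((x:xs) ++ ys) = len (x : (xs ++ ys))   -- def of (++)
    --                      = 1 + len (xs ++ ys)     -- def of len
    --                      = 1 + (len xs + len ys)  -- induction hypothesis
    --                      = len (x:xs) + len ys    -- def of len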
There is no need for another specialized language, because Haskell technically is a pure logic expressed as a highly syntactically sugared, well-typed polymorphic Lambda Calculus (System F Omega), augmented with standard machine types (such as Int, Char or Bool).
Algebraic data types and universal pattern matching make actual, boring programming fun again.
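A minimal illustration (the Shape type is hypothetical): a sum-of-products domain type with total pattern matches, where the compiler warns about any missed variant.

    data Shape
      = Circle Double        -- radius
      | Rect Double Double   -- width, height

    area :: Shape -> Double
    area (Circle r) = pi * r * r
    area (Rect w h) = w * h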
Math
The worst code, second only to J2EE, is written when amateurs without any mathematical background write Haskell. They try to write the same imperative crap they used to write in Java or PHP, but in Haskell, so it becomes J2EE spaghetti with Monads everywhere. Afterwards no one can read or understand it, of course.
The way of writing decent Haskell is the same way as writing mathematics.
Haskell is declarative by nature, first and foremost, so one always writes what is (types) and what is to be done eventually, just like one damn long conditional expression (actually - a state machine).
Yes, one writes a description of a state machine (the expression which main returns) to be executed by the runtime, in pure math (well, a polymorphic lambda calculus).
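A minimal sketch of that separation: main is just a value of type IO (), a description of actions for the runtime, while the domain function stays pure (greet is a hypothetical example).

    -- main is a pure value describing actions; the runtime executes it.
    main :: IO ()
    main = getLine >>= putStrLn . greet

    -- The declarative part: what a greeting *is*, not how to mutate state.
    greet :: String -> String
    greet name = "Hello, " ++ name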
Another plague is unnecessary, redundant abstractions, which virtue-signaling amateurs write to show off their presumed smartness.
Richard Bird spent decades perfecting his style, which is basically point-free function composition, pattern matching, and algebraic data types (which include variants and records) - the very essence of everything, as you might have realized.
Haskell has to be written in the Bird style, but most current coders have no idea what it is or why it is so special.
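For the flavor, a hedged sketch in that style (an illustration, not Bird’s own code): a word-frequency pipeline as a point-free composition of small, total functions.

    import Data.Char (toLower)
    import Data.List (group, sort, sortBy)
    import Data.Ord (Down (..), comparing)

    -- Word frequencies, most frequent first; read the pipeline bottom-up.
    wordCounts :: String -> [(String, Int)]
    wordCounts =
        sortBy (comparing (Down . snd))
      . map (\g -> (head g, length g))
      . group
      . sort
      . words
      . map toLower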
Most Haskell projects end up as unreadable spaghetti of imperative crap expressed in a pure functional language, with strictness “optimizations” and unnecessary, redundant abstractions everywhere.
Pure functional
The result of any function call is fully determined by its argument. Same input - same output. Always.
Technically, every function has exactly one argument and currying (nesting of closures which capture their arguments) is used to construct multi-argument functions. Partial application naturally follows.
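Made concrete (add and increment are hypothetical names):

    add :: Int -> Int -> Int   -- really Int -> (Int -> Int)
    add x y = x + y

    -- Partial application: a closure that has captured its first argument.
    increment :: Int -> Int
    increment = add 1

    -- add 2 3 parses as (add 2) 3 and evaluates to 5.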
As in mathematics, any function call can be replaced by the result of a previous call with the same arguments. This property is called referential transparency and is at the core of pure mathematics - substituting for an expression the value to which it reduces.
Equational reasoning follows, based on the premise that the two expressions on either side of the equals sign are indeed equal and can be substituted for one another (substitution of lets and lambdas employs its own special rules).
Those principles are used by the compiler, which rearranges and rewrites expressions without breaking their semantics - pure mathematical transformations (which, in theory, can be done with pen and paper).
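One such semantics-preserving rewrite is map fusion, which GHC performs via rewrite rules; a sketch of the equality it relies on (double and square are hypothetical helpers):

    double, square :: Int -> Int
    double x = x * 2
    square x = x * x

    -- For pure f and g:  map f (map g xs) == map (f . g) xs.
    -- Both sides denote the same value, so either may replace the other.
    lhs, rhs :: [Int]
    lhs = map double (map square [1, 2, 3])
    rhs = map (double . square) [1, 2, 3]   -- lhs == rhs, by substitution alone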
These properties are common to the pure-functional subset of any mostly-functional language, such as Standard ML, OCaml, Erlang, or Scala.
Haskell is special in that it uses the call-by-need evaluation strategy (or the Normal Order of evaluation, as intended by God. Or, at least, by Church). The original Lambda Calculus and most systems of logic use the same strategy for applying their reduction rules.
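A minimal illustration of what call-by-need buys: expressions are evaluated only when (and if) demanded, so infinite structures are ordinary values.

    -- An infinite list is a perfectly good value under call-by-need.
    nats :: [Integer]
    nats = [0 ..]

    firstSquares :: [Integer]
    firstSquares = take 5 (map (^ 2) nats)   -- [0,1,4,9,16]
    -- Under call-by-value this would diverge; here only the five
    -- demanded elements are ever computed.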
Remarkably, Haskell uses System F Omega under the hood as the compiler’s intermediate representation (IR), so, technically, it is pure logic until the compilation of main ends and execution by the runtime begins.
This means, among other things, that only the so-called “domain logic” and domain-driven core types should be implemented in Haskell (and it is indeed the best possible choice - to define one’s domain in pure logic), while everything else should be delegated to the system via the FFI or calls to the OS.
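A minimal sketch of such delegation, binding a libc primitive through the FFI while the domain-facing wrapper stays in ordinary, pure Haskell:

    {-# LANGUAGE ForeignFunctionInterface #-}

    import Foreign.C.Types (CDouble)

    -- Delegate the numeric primitive to the system's libm.
    foreign import ccall unsafe "math.h cos" c_cos :: CDouble -> CDouble

    -- The domain-facing wrapper stays ordinary Haskell.
    cosine :: Double -> Double
    cosine = realToFrac . c_cos . realToFrac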
Otherwise one will end up with something like Enterprise Java (J2EE) with Monads and unnecessary, redundant abstractions based on esoteric stuff, which is what we already have.