
Lambda Calculus augmented with an advanced type-system

The results of the Lambda Calculus are the starting point of proper understanding. Just lambdas are enough for everything.

It is unclear whether or not John McCarthy was inspired by the Lambda Calculus (he seems to have been more into structures), but the language Scheme definitely was built on this universal foundation.

The referential transparency property, which follows from the immutability of all bindings (both symbols and data) that the Lambda Calculus implicitly possesses, is what makes a language as pure as mathematics or symbolic logic.

This “purity”, in turn, means purely declarative and timeless (no notion of time). We declare what exactly has to be done eventually (at runtime), just like mathematical expressions define what to compute.

This means “talking about values”, but no actual value can be “seen”. The only way is to declare how to scrutinize it (using pattern-matching).
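
A minimal sketch: the only way to get at the Int inside a Maybe Int is to scrutinize it.

-- the value is "seen" only inside a branch of the case-expression
describe :: Maybe Int -> String
describe mx = case mx of
  Nothing -> "nothing there"
  Just n  -> "got " ++ show n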

There are no commands or statements. A whole program is one single expression (of a well-typed state-machine, to be eventually evaluated by the runtime).

So we define functions and expressions (everything is an expression) using mathematical equations. This is what we do.

Again, the pure expression which main “produces” is a declarative description of a state-machine, which the runtime will evaluate.

Just 3 sequential “patterns” (one input - one output), so everything composes:

Each expression evaluates to a single value and these are passed to curried functions.

Flow-charting and a circuit-based view are crucial to realize the universality of arrows between dots. Category Theory goes deeper (with very limited results).

Nested functions, all the way down to “primitives” (of the compiler runtime and the standard library).

Nesting defines an implicit order in a “lazy” language. Otherwise the particular order of evaluation does not matter.

Since every expression evaluates to a particular value (same input - same output for lambdas) we can always substitute a (not yet evaluated) expression for its (eventual) value, and vice versa.

Not just functions, but functions defined by ordinary equations (just like in the pure subset of SML or OCaml).

Local bindings (which are expressions) with let and the where clauses (back from Landin, 1966) and proper shadowing.

Both are just syntactic sugar for ((lambda (x) (* x x)) 2) – applications of in-place lambdas.
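
The same sugar in Haskell (a sketch): all three denote the application of an in-place lambda.

square1 = let x = 2 in x * x     -- let-expression
square2 = (\x -> x * x) 2        -- the underlying lambda application
square3 = x * x  where x = 2     -- where clause, attached to the definition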

Nested scopes (frames of the environment). A “free variable” is defined somewhere in an outer scope.

Lexical Closures capture all the values of free variables (not to be confused with mathematical closures).
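
A sketch of a lexical closure: the inner lambda captures the value of its free variable n from the enclosing frame.

addN :: Int -> (Int -> Int)
addN n = \x -> x + n     -- n is free in the lambda; the closure captures it

add5 :: Int -> Int
add5 = addN 5            -- a closure holding n = 5; add5 7 evaluates to 12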

Compound data in the form of structures, including recursive ones, is at the higher level of abstraction. “Primitives” are being used.

ADTs, type-classes and modules (with export restrictions). No ML modules, unfortunately.

Cardinality of a type

Associate a type with a cardinality (just like that of a Set): how many inhabitants (members) a type has. Ignore bottoms (they are relevant only in type-constraint solving).

data Void -- 0, no elements, just like The Empty Set

data () = () -- exactly 1, exactly this - no new information

data Bool = True | False -- just these 2. notice the |

data (,) a b = (,) a b -- a type-constructor, parametrized by a and b

Shapes or forms

Notice that these types have distinct, unique shapes:

  • | stands for an alternative or a possibility - OR ELSE (a “fork” - this or that).
  • , stands for AND ALSO (presence in the same locality) (a “join” - this and that).

Notice that

  • only one “branch” of a “fork” can be taken (observed).
  • an aggregate (a compound) is fundamentally different (has its own unique properties) from its individual parts.
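
The difference in shape is directly visible in pattern-matching (a sketch): a fork yields exactly one branch, a join yields both parts at once.

-- a "fork": only one alternative can ever be observed
whichOne :: Either a b -> String
whichOne (Left _)  = "this"
whichOne (Right _) = "that"

-- a "join": both parts are present in the same locality
swap' :: (a, b) -> (b, a)
swap' (x, y) = (y, x)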

Total

Knowing the cardinalities of any two types (sets), we know how many “arrows” (individual relations or distinct mappings) there are in total.

When we try to visualize (or formally define as a directed graph) the totality of all the possible relations, we notice that the resulting shapes (abstract structures) are the same (same number, same direction).

And this is it. This does not imply anything more than that - just similarity in its totality - a purely abstract notion.

The notion of duality based on cardinality alone is wrong and misleading.

Isomorphism

Any two types of the same cardinality could be viewed as isomorphic. This assumption is naive and too abstract to be useful.

The shape (the form or structure) of the type is its distinct feature, out of which its properties arise (just as with molecules).

A fork is not isomorphic to a join in principle, but it is viewed by sectarians as such.

Definition

The formal definition is this:

  • there is a pair of total functions between two types (sets)
  • composition of these two functions is an identity (a round trip)
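
A minimal sketch of this definition in Haskell (the names Iso, to and from are illustrative, not from any particular library):

-- a pair of total functions between two types (sets)
data Iso a b = Iso { to :: a -> b, from :: b -> a }

-- the laws live outside the type system:
--   from . to == id   -- a round trip on a
--   to . from == id   -- a round trip on b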

When two types (sets) have the same cardinality, a one-to-one correspondence (a mapping) could be defined in principle as a pair of total functions.

As long as the cardinality is greater than 1, a particular ordering has to be defined (becoming a part of an abstract mathematical structure).

An ordered set of arrows

Individual “arrows” (relations) between values have to have a particular order, and each “arrow” has to have a unique inverse “arrow”.

The meaning and validity of such functions do not concern mathematicians.

Symmetries (rotations) could be formalized with such functions.

n!

For any two types (sets) of the same cardinality n there are n! unique isomorphisms between them.
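
For example, for Bool (n = 2) there are 2! = 2 isomorphisms of Bool with itself:

iso1 :: Bool -> Bool
iso1 = id     -- the identity mapping

iso2 :: Bool -> Bool
iso2 = not    -- the "rotation"; not . not == id, so it is its own inverse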

Same cardinality does not imply the same shape

Knowing that an isomorphism exists is enough. An isomorphism between types s and t is a proof that for all intents and purposes, s and t are the same thing.

This, of course, is bullshit.

The cardinality of a sum-type is the sum of the cardinalities of its constructors.

The cardinality of a product-type is the product of cardinalities.
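
A worked example of the arithmetic (the types are illustrative):

data Sum  = A Bool | B ()    -- cardinality 2 + 1 = 3
data Prod = P Bool Bool      -- cardinality 2 * 2 = 4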

It is said that these are duals, which is, again, bullshit.

The sum- and product- types have different shapes.

Monoidal

() is a Monoidal Identity element for product-types (no information). Void is an identity element for sum-types (Void has no members).
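
A sketch of both identities, written as pairs of total functions (assuming Data.Void from base):

import Data.Void (Void, absurd)

-- (a, ()) adds no information to a
toPair :: a -> (a, ())
toPair a = (a, ())

fromPair :: (a, ()) -> a
fromPair (a, ()) = a

-- Either Void a collapses to a: the Left branch has no inhabitants
fromSum :: Either Void a -> a
fromSum (Left v)  = absurd v   -- can never happen
fromSum (Right a) = a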

Exponentials

The type \(A \rightarrow B\) has cardinality \(B^{A}\) .

For every value \(a\) in the domain \(A\), there is a \(b\) in \(B\). We can choose any of the \(|B|\) values of \(B\) for every value in \(A\), so it is \(|B| \times |B| \times \dots \times |B|\), exactly \(|A|\) times.
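
For example, Bool -> Bool has \(2^{2} = 4\) inhabitants, all of which can be enumerated:

f1, f2, f3, f4 :: Bool -> Bool
f1 = const False   -- both inputs to False
f2 = const True    -- both inputs to True
f3 = id            -- True to True, False to False
f4 = not           -- True to False, False to True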

Stuff

We describe a system with equations that determine how its state evolves with time, where the state consists of the values of variables (a context).

We can pretend that its state changes in discrete steps. This is like sampling of a continuous signal.

We can think of a state as representing a potential state of the entire universe. This is similar to \(R^{3}\) which defines every point in an abstract 3D space.

So, we represent the execution of a system as a sequence of states (discrete steps).

Formally, we define a behavior to be a sequence of states, where a state is an assignment of values to variables.

A behavior could be viewed as a log of what just (already) happened. It is a good idea to log each new state.

A step is an ordered pair of adjacent states, where the second one is the tip of an arrow.

Such a sequence of steps (or arrows) is usually called a path.

An invariant \(P\) is True (unchanged) in every state: \(\forall s \in S,\; P(s)\). For a directed sequence we could use induction.
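
A sketch of these definitions in Haskell (all names are illustrative):

type Var      = String
type St       = [(Var, Int)]   -- a state: an assignment of values to variables
type Behavior = [St]           -- a behavior: a sequence of states (a log)

-- the steps: ordered pairs of adjacent states
steps :: Behavior -> [(St, St)]
steps b = zip b (drop 1 b)

-- an invariant holds in every state of a behavior
invariant :: (St -> Bool) -> Behavior -> Bool
invariant p = all p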

Mangle function definitions with lots of boilerplate purely to thread the state through (using an explicit parameter or implicitly - inside of a Monad).
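
A sketch of both styles (the State monad here is the one from the mtl package):

import Control.Monad.State

-- explicit: the state is an ordinary parameter, threaded by hand
tickExplicit :: Int -> (Int, Int)
tickExplicit n = (n, n + 1)    -- (result, new state)

-- implicit: the same computation, with the threading hidden in the Monad
tick :: State Int Int
tick = do
  n <- get
  put (n + 1)
  pure n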

The ability to separate what is computed from how much of it is computed is a powerful aid to writing modular programs.

What is in a function signature and its formal specification stated in the comments. How is in the function’s body (an expression).

Haskell constitutionally separates (at the type-level) pure and impure (IO) functions without encumbering the syntax of either.

Other “side-effects” are separated each into its own Monad (the aspect of an abstraction barrier and the aspect of implicit sequencing).

The type-class of Monads is defined as an ADT, so every particular instance of a Monad is an ADT in principle.

The compiler simply refuses to build a program which incorrectly mixes pure and impure functions (trying to apply to a lifted value, etc).
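
A sketch of such a rejection: a pure function cannot be applied to a lifted (IO) value directly.

import Data.Char (toUpper)

shout :: String -> String    -- pure: no IO in the type
shout = map toUpper

main :: IO ()
main = do
  line <- getLine             -- binds the String out of IO String
  putStrLn (shout line)       -- fine
  -- putStrLn (shout getLine) -- rejected: getLine :: IO String, not String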

Monads are as general and important as parentheses. They are “membranes” - type-level abstraction barriers. The Monadic Laws are those of a Monoid (associativity, identity).

And a Monoid captures operations with the properties of composition and folding.

As an abstraction in context of FP a Monad generalizes an abstraction barrier (by being an ADT).

In the context of Haskell, it is just a type-class (and implicitly an ADT).

To be (an instance of) a Monad is to implement an interface which consists of a couple of function signatures which, in turn, conform to certain algebraic laws.

Several definitions exist - in terms of <=<, of join and return, and of bind (>>=) and return.
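
A sketch of how these presentations interdefine (written out with primed names to avoid clashing with the Prelude):

-- join and return, given bind:
join' :: Monad m => m (m a) -> m a
join' mma = mma >>= id

-- Kleisli composition (the fish), given bind:
fish :: Monad m => (b -> m c) -> (a -> m b) -> (a -> m c)
fish g f = \a -> f a >>= g     -- this is <=<

-- and bind, given join and fmap:
bind :: Monad m => m a -> (a -> m b) -> m b
bind ma f = join' (fmap f ma)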

The fish <=< is a composition operator, like (.) - \(f \circ g\). The fish facing in the opposite direction, >=>, is like |> (the pipe operator).

join is a “hack” (flatten - for lists of lists it is concat).

fmap of f over g is a composition - \(f \circ g\) - f after g - an arrow.

Before, the puts function took a world and a string, then returned a changed world. Now, it takes a string and returns a world-changing function, which must eventually be run. (this!)

It’s like Unix pipes, but instead of transforming text, we chain together functions that transform the world.

In Haskell, there is a special kind of value called an IO action.

When a Haskell system evaluates an expression that yields an IO action, it knows not to try to “run” it, but rather to ask the runtime system to “take the appropriate action” eventually, when it is performed.

Everything impure happens after main returns.

Still declarative - please, do this “action”, eventually. The value which main returns is still pure logic.

It delegates the actual running of actions to the runtime.
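
A sketch: an IO action is an ordinary value; main merely composes descriptions, and the runtime performs the result.

greet :: IO ()
greet = putStrLn "hello"   -- a description of an effect, not the effect itself

main :: IO ()
main = greet >> greet      -- composes two descriptions; prints "hello" twice when performed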

The DataKinds extension ({-# LANGUAGE DataKinds #-}) allows us to promote data constructors into type constructors, which also promotes their type constructors into kind constructors.

To promote something up a level, we prefix the name with an apostrophe, or tick.
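
A sketch of the promotion (assuming GHC with the DataKinds and KindSignatures extensions):

{-# LANGUAGE DataKinds, KindSignatures #-}

data DoorState = Opened | Closed   -- 'Opened and 'Closed become types of kind DoorState

data Door (s :: DoorState) = Door  -- s is a type-level DoorState

openDoor :: Door 'Closed -> Door 'Opened
openDoor Door = Door               -- the transition is checked at compile time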

type creates a type-synonym - no new data-constructors.
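
For example:

type Name = String   -- Name is the same type as String, just another name
-- compare: data/newtype introduce new constructors, type does not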

[http://h2.jaguarpaw.co.uk/]

The normal form is a simplified version of Haskell with fewer language constructs. It has only

  • literals
  • variables
  • constructors
  • let
  • lambda
  • function application
  • case
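
For instance, if-then-else is sugar that disappears in this normal form (a sketch):

-- with sugar:
f :: Bool -> Int
f x = if x then 1 else 2

-- in the normal form: only a lambda, a case, literals and variables remain
f' :: Bool -> Int
f' = \x -> case x of
             True  -> 1
             False -> 2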

That world doesn’t speak in product and sum types, but in streams of bytes, so there’s no getting around the need to do some parsing.

maxims

  • make as much of your logic pure as possible
  • avoid naked primitive types (String255, etc.)
  • make invalid states unrepresentable (NonEmpty, NonZero)
  • make flat hierarchies with composition
  • parse, don’t validate
  • keep your types small

try to figure out what its domain-specific types mean.

start figuring out a library by figuring out its types.

a rich type system lets you model the domain precisely and in a complete way.

  • no nulls.
  • errors and the possibility of returning nothing are represented explicitly.
  • algebraic data types model alternatives easily.
  • well-implemented generics (polymorphism) and typeclasses means more reusable code.
  • side effecting code is clearly separated from pure code at the type level.

it’s possible to write functions that need no validation done on their inputs.

you are able to use your function parameters immediately, without having to do any null checking at all.

it’s easier for the same developers to return months or a year from now, or new developers to arrive, and gain a good grasp of what’s happening in the system.

parses lists into non-empty lists – returns a meaningful value to be composed with:

import Data.List.NonEmpty (NonEmpty (..))
import Control.Exception (throwIO)

parseNonEmpty :: [a] -> IO (NonEmpty a)
parseNonEmpty (x:xs) = pure (x :| xs)
parseNonEmpty []     = throwIO $ userError "list cannot be empty"

a parser is just a function that consumes less-structured input and produces more-structured output.

By its very nature, a parser is a partial function—some values in the domain do not correspond to any value in the range—so all parsers must have some notion of failure.

The common theme between all these libraries is that they sit on the boundary between your Haskell application and the external world.

do { …; fmap f e }

can be rewritten to

fmap f $ do { …; e }

and

case x of
  Case1 -> f $ body1
  Case2 -> f $ body2

to

f $ case x of
  Case1 -> body1
  Case2 -> body2

Author: <schiptsov@gmail.com>

Email: lngnmn2@yahoo.com

Created: 2023-08-08 Tue 18:31

Emacs 29.1.50 (Org mode 9.7-pre)