Models

All models are wrong (over-simplification, over-generalization), but some are useful.

A model is an oversimplified, seemingly “logical” conceptual framework superimposed by the mind of an observer onto vastly complex aspects of reality, which are too overwhelming in their details and dynamic multiple causation for the mind to deal with.

A model is, at best, a useful (usable) approximation to make imperfect judgments.

Linear models (a straight line), averages, weights (coefficients) and weighted sums are the most obvious examples.
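
As a minimal sketch (in Python, with made-up numbers and coefficients), a plain average and a weighted sum differ only in which weights are superimposed on the data:

    # Illustrative only: the measurements and weights below are hypothetical.
    def weighted_sum(values, weights):
        """Collapse several measurements into one number -- losing information by design."""
        return sum(v * w for v, w in zip(values, weights))

    measurements = [20.5, 12.0, 3.2]   # hypothetical observed values
    weights = [0.5, 0.3, 0.2]          # hypothetical coefficients, summing to 1

    average = sum(measurements) / len(measurements)   # plain average: equal weights
    score = weighted_sum(measurements, weights)       # weighted sum: chosen weights

    print(average, score)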

It is crucial to realize that even the simplest models lose information and therefore increase the “distance” from what actually is; the more complex and abstract the model, the further it is “removed” from actual reality.

Yes, “zooming out” and taking a “bird's-eye view” is a very useful tactic, but one must realize that this “bigger picture” is a result of “aggregated effects” and is different from what is.

The universal examples are clouds, galaxies and other processes observed from a distance. When one zooms in inside a cloud (or a galaxy) it disappears. Individual water droplets have no idea (so to speak) that they are inside a cloud or related to the others.

In other words, an observer might have deduced (inferred) relations that are not there.

However, most simple models, when applied to the right environments and processes and capturing real (actual, non-imaginary, non-abstract) metrics, provide useful insights.

The most useful examples are averages and “trend lines” of complex weather patterns, which show (visualize) recurring patterns and hint at the causality. Annual temperature and rainfall averages alone describe the approximate conditions of a location.

Recurring patterns re-emerge (never identical, never exactly the same) from apparent chaos due to recurring causal conditions. This is the main lemma of the law of causality.

Notice that fitting lines or simple curves to the data is always, strictly speaking, wrong, because lines and curves do not capture the actual multiple causality, and because the causal factors are in constant flux.
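
As a minimal sketch of what “fitting a line” actually does, here is an ordinary least-squares fit in Python on made-up numbers; the residuals never vanish, which is the point — the line is an approximation, not the process itself:

    # Illustrative only: the data points are hypothetical.
    xs = [0, 1, 2, 3, 4, 5]
    ys = [2.1, 2.9, 4.2, 4.8, 6.1, 6.9]

    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n

    # slope and intercept that minimize the squared vertical distances
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x

    residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
    print(slope, intercept, residuals)   # residuals are small but never all zero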

The old factors lose their “weights”, while new ones emerge, new combinations (arrangements) form, and the whole system constantly evolves, adapts and even “learns” (the simplest learning is a memory of what to avoid).

So, do not take models for reality (you will end up disillusioned and having failed). Models are just useful mental tools, and only the simplest models “work”. Complexity of modeling produces sophisticated bullshit.

Lists

Just listing (writing down) things produces the simplest (but imperfect) model. This is, of course, related to the most general notion of a Set.

One might end up with an incomplete list (or Set), so the model will inevitably be flawed, but in some cases, when what is left out is insignificant or almost irrelevant, it will still be useful (better than having none).

Attaching “weights” to the elements and ordering them (according to the weights or other factors) produces abstractions like priority queues.
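
As a minimal sketch, Python's standard heapq module turns a weighted list into a priority queue; the items and weights below are made up:

    import heapq

    # (weight, item) pairs: lower weight = higher priority (illustrative only)
    tasks = [(1, "fix the roof"), (3, "water the plants"), (5, "read email")]

    heapq.heapify(tasks)          # order the list as a binary heap

    while tasks:
        weight, item = heapq.heappop(tasks)   # always yields the smallest weight first
        print(weight, item)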

Distances

Analyzing “weights” produces the simple statistics (the only ones that work), based on distances from the mean or from an actual measure.
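
As a minimal sketch (Python's standard statistics module, made-up measurements): the mean, each point's signed distance from it, and the standard deviation as the “typical” distance:

    import statistics

    data = [12.0, 14.5, 13.2, 15.1, 11.8]   # hypothetical measurements

    mean = statistics.mean(data)
    deviations = [x - mean for x in data]    # signed distances from the mean
    spread = statistics.pstdev(data)         # the typical distance from the mean

    print(mean, deviations, spread)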

Notice that the notion of a distance is at the very core of physical reality.

Proper statistics just capture patterns, never create any abstract bullshit. At least this is how it ought to be.

Disappearing mountain trails

Just as mountain trails, which begin wide and bold, tend to diminish and eventually disappear at the heights where there is literally nothing but stones and air (and snow), so do the abstract bullshit theories.

The “lines of thought”, which began bold and definitive, tend to disappear at higher and higher levels of abstraction, where there is literally nothing.

This is not just a beautiful metaphor. This is how things are.

Author: <schiptsov@gmail.com>

Email: lngnmn2@yahoo.com

Created: 2023-08-08 Tue 18:42

Emacs 29.1.50 (Org mode 9.7-pre)