UP | HOME

The Man Who Solved the Market and other bullshit.

The main theme was “we use advanced mathematical tools and we don’t care why it moves”.

This is the first piece of bullshit in the row. Ignoring “time in the market” is fine if one merely observes ongoing processes (always after they happened). But understanding what is going on requires the whys.

The Burry legend (when he foresaw the crash, “timed the market”, and timed it too early) was based upon understanding of the whys, and even upon constantly searching for better and newer whys.

In short, the whys (social dynamics) let one properly assess long-term “time in the market” (the “tides”), while mere candles, moving averages and their derivatives sometimes allow “timing the market” successfully.

The analogy to poker players, where one does not know the opponents’ cards but observes their mood swings (sentiments) and thus has imperfect, noisy information, is a good one, but it is definitely not enough to be consistently profitable.

So, the sentiments matter a lot, but they are not enough, and it is unclear how to measure them correctly, short of having access to all the positions on an exchange (FTX, SBF).

But, yes, if one could reliably measure the most relevant variables and observe the changes in the rate of change (second derivatives), then explanations of why are not required in this context.
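
As a minimal sketch (not from the book, and the series below is made up), the “change of the rate of change” is just a second finite difference over any measured variable:

```python
def diffs(series):
    """First and second finite differences of a measured series."""
    first = [b - a for a, b in zip(series, series[1:])]
    second = [b - a for a, b in zip(first, first[1:])]
    return first, second

# A hypothetical series of some measured variable (e.g. open interest).
oi = [100, 102, 105, 109, 110, 109]
velocity, acceleration = diffs(oi)

# Points where the acceleration turns negative: the growth is still there,
# but it is already slowing down -- visible without any "why".
turning = [i + 2 for i, a in enumerate(acceleration) if a < 0]
```

Note that the sign flip in the second difference shows up before the series itself turns down, which is exactly the point of watching second derivatives.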

The same thing applies to “indicators”, such as RSI and MACD. They capture and show the “speed”, “acceleration”, “pressure” and “momentum” of market processes, but they do not (and cannot in principle) predict the next move. Even conservation of momentum does not hold in the market.
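
A toy sketch of what these instruments actually compute (textbook formulas; the RSI here is simplified, without Wilder smoothing, and the price series is invented):

```python
def ema(xs, n):
    """Exponential moving average with the standard smoothing k = 2/(n+1)."""
    k = 2 / (n + 1)
    out = [xs[0]]
    for x in xs[1:]:
        out.append(out[-1] + k * (x - out[-1]))
    return out

def rsi(closes, n=14):
    """Simplified RSI over the last n price changes (no Wilder smoothing)."""
    deltas = [b - a for a, b in zip(closes, closes[1:])][-n:]
    gains = sum(d for d in deltas if d > 0)
    losses = -sum(d for d in deltas if d < 0)
    if losses == 0:
        return 100.0
    return 100 - 100 / (1 + gains / losses)

# MACD line: the gap between a fast and a slow EMA -- a "momentum" reading.
closes = [float(100 + i % 5) for i in range(40)]
macd_line = [f - s for f, s in zip(ema(closes, 12), ema(closes, 26))]
```

Both are pure functions of past closes; nothing in them can, even in principle, contain information about the next candle.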

Another principle is that observed (calculated) correlations between variables are ephemeral: they come and go. What seems to be correlated this moment will not be so the next.
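
This instability is easy to demonstrate with a minimal sketch: the rolling correlation of two *independent* noise series wanders around, even though the true correlation is exactly zero:

```python
import random

def corr(xs, ys):
    """Pearson correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(1)
a = [random.gauss(0, 1) for _ in range(300)]
b = [random.gauss(0, 1) for _ in range(300)]

# Correlation measured over successive 30-sample windows:
window = 30
rolling = [corr(a[i:i + window], b[i:i + window])
           for i in range(0, 300 - window, window)]

# The windowed estimates spread out, despite the true value being 0.
spread = max(rolling) - min(rolling)
```

Any strategy built on “these two assets are correlated” is built on one of these windows.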

The environment is dynamically changing behind your back, and right around you, while you are observing what has already happened.

This fact implies that any model is already old the moment it is created.

Models have to be constantly updated to reflect the constant change, which is an impossible task. One cannot do it consistently.

On bullshit

I do not talk in “abstractions”, only in properly generalized patterns (and the corresponding mathematical abstractions, like Sets or Semigroups).

There is a fundamental difference between an “abstraction” and a “properly generalized pattern”. The latter can always be traced back to the actual underlying pattern - to What Is, while the former is usually just conceptual bullshit, like gods and angels.

It is crucial in any “modeling” endeavour that all your concepts can be traced back to “reality”. Otherwise it is just abstract bullshit, like Marxism or Freudian “psychology”.

We have to really understand statements like “generalizing to multiple dimensions”. This means that a certain methodology or function can be used in novel, more abstract contexts, just like looking at the sign of a derivative to “see” the direction of a slope.

What it yields – what the meaning of the results is – is the most important question. It may be just a multiplication of birds by leaves on the trees and taking a partial derivative with respect to the leaves.

What wasn’t bullshit

  • Bell Labs style “free” teams staffed with talented guys of different specializations. Both the legendary military institutions and the trading firms allegedly succeeded due to this universal social arrangement. We cannot hire top talent, so we have to read the books written by top talent.
  • There is not so much information in the price and volume movements. Moving averages and trend lines, along with support and resistance areas (the edges of a current range), are basically all that is out there. What everyone does is try to apply relevant mathematics to these “emergent curves”. There is not that much non-bullshit mathematics - slopes, second derivatives, partial derivatives, back-propagation, etc.
  • The most “stable” strategies were based on reactions to anomalies (the classic “buy the dip” and “short the blow-off top”). Anomalies frequently emerge as direct results of underlying social dynamics, such as a panic sell-off or FOMO euphoria.

    So there is actual, real causality, and the famous “market cycle” diagram, which maps major psychological states to a generalized pattern, captures and approximates something real.
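
A minimal sketch of such an anomaly *reaction* (the window, the threshold and the numbers are arbitrary illustrative choices, not a recommendation): the rule fires only after the anomaly has already printed, which is the whole point of being reactive rather than predictive.

```python
def zscore(series, window=20):
    """Z-score of the latest value against a trailing window."""
    tail = series[-window:]
    mean = sum(tail) / len(tail)
    var = sum((x - mean) ** 2 for x in tail) / len(tail)
    return 0.0 if var == 0 else (series[-1] - mean) / var ** 0.5

def reaction(series, threshold=2.0):
    """Reactive rule: classify the latest print, after the fact."""
    z = zscore(series)
    if z <= -threshold:
        return "buy-the-dip"
    if z >= threshold:
        return "short-the-blow-off-top"
    return "do-nothing"

calm = [100.0] * 20                                   # nothing anomalous
panic = [100.0, 101.0, 100.0, 101.0] * 5 + [90.0]     # a sudden sell-off
```

Here `reaction(panic)` flags the sell-off while `reaction(calm)` stays idle; no prediction of the next candle is involved.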

The Fundamental principles

  • Nothing, in principle, will yield consistent, accurate predictions.
  • No game-theoretic bullshit is applicable to a stochastic environment.
  • All “estimated” probabilities will be wrong, in principle.

Bad or wrong estimations are worse than no estimations at all (just being reactive instead of predictive).

Data

Having accurate data is, of course, way better than having no data at all. But price and volume alone (candles) are not enough, because they are, in principle, a recording of events which have already happened.

The key is to have data which are measurements of different aspects of the market’s social phenomena, including “open interest”, “long-short ratios” and other sentiment assessments.

Exchanges collect and monitor all this “insider” information about the positions and use it to trade against retail (FTX, SBF). This is a hint about what is relevant.

And, of course, mass hysteria, like shitcoin moves after Elon’s tweets, cannot be predicted but has to be traded (reacted to) on the “regression to the mean” principle.

Measurements

We need instruments - speedometers, pressure meters and other “-meters”. The resulting dashboard should be available to the algos.

The classic “indicators” of the technical analysis are exactly that. They are the “instruments”.

It has been shown innumerable times that they alone are not enough to be consistently profitable, or even to avoid losses (stop-losses are for that).

So, neither data nor “indicators” alone are enough.

The Simplest strategies

Superimposing grossly oversimplified views (“models”) onto a vastly complex reality has worked since the beginning of time.

Back-propagation-based Deep Reinforcement Learning

This too requires a stable environment, so that gradual refinement of the weights can occur and the whole structure (network) can “converge” to something.

When the environment is changing, the inputs become effectively “random”, and the network learns inconsistent nonsense, being trained on noise.

Small individual networks, however, could be trained to catch recurrent patterns over 3, 2 or even 1 candle, where the structure of the network could reflect all the possible variables and the ratios among them.

Then the “consensus” of multiple specialized networks may provide a better measurement, a detailed snapshot of the market.
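
A toy sketch of the idea, under heavy assumptions: each “specialized network” is reduced to a single logistic unit on 1-candle ratio features, the candles and labels are invented, and “consensus” is a plain majority vote.

```python
import math
import random

def candle_features(o, h, l, c):
    """Ratios describing one candle: body, upper wick, lower wick."""
    rng = max(h - l, 1e-9)
    return [(c - o) / rng, (h - max(o, c)) / rng, (min(o, c) - l) / rng]

class TinyNet:
    """A single logistic unit: the smallest possible 'specialized network'."""
    def __init__(self, n_inputs, seed):
        rnd = random.Random(seed)
        self.w = [rnd.uniform(-0.1, 0.1) for _ in range(n_inputs)]
        self.b = 0.0

    def predict(self, x):
        z = self.b + sum(w * xi for w, xi in zip(self.w, x))
        return 1 / (1 + math.exp(-z))

    def train(self, xs, ys, lr=0.5, epochs=200):
        for _ in range(epochs):
            for x, y in zip(xs, ys):
                err = self.predict(x) - y   # logistic-loss gradient
                self.b -= lr * err
                self.w = [w - lr * err * xi for w, xi in zip(self.w, x)]

def consensus(nets, x):
    """Majority vote of independently seeded specialized nets."""
    votes = sum(net.predict(x) > 0.5 for net in nets)
    return votes > len(nets) / 2

# Toy labels: a candle is "bullish" (1) iff it closed above its open.
candles = [(100, 105, 99, 104), (100, 101, 95, 96),
           (50, 51, 49, 50.8), (50, 50.5, 48, 48.2)]
xs = [candle_features(*cd) for cd in candles]
ys = [1, 0, 1, 0]
nets = [TinyNet(3, seed) for seed in range(5)]
for net in nets:
    net.train(xs, ys)
```

The vote here merely reads the current candle; it remains a measurement, a snapshot, not a prediction of the next one.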

Again, not understanding why a model does this or that, what it “sees”, is an ultimate dead-end, because models mostly “hallucinate”, just as LLMs do.

Author: <schiptsov@gmail.com>

Email: lngnmn2@yahoo.com

Created: 2023-08-08 Tue 18:41

Emacs 29.1.50 (Org mode 9.7-pre)