sklansky's theorem: fundamentally flawed (iv)
Though it is entirely proper to test, and to attempt to disprove, any theorem, perhaps the most pertinent inquiry to make of Sklansky’s is: is it useful?
The theorem is generally used to validate decisions by replaying hands with the opponents’ cards face up: if you would have played the hand the same way, the boy done good.
I’ve personally found the process to placate rather than inform. After bad beats, I’ve consoled myself with: ‘well, I’d have played the same if I’d known what he had’. Such thinking is quixotic: although the strategy might be perfect face-up, face-down it could be dire. As poker players, we seldom deal with certainties.
On occasions where a decision is marked retrospectively correct in this contrived manner, but was, on the (judgement-based) balance of probabilities at the time, wrong, hindsight analysis will likely mislead, and so hinder learning.
A conflict, of sorts, arises when we witness a respected, better-informed player deviate from our selected path. We are, instinctively, compelled to re-evaluate, coaxed, perhaps, into presuming we were in error and tempted to mimic (or migrate towards) the play [1]. However, this can resemble the above trap: his balance of probabilities, worldview (not to mention image) contrasts with our own. A superior player is, typically, better informed in the same sense, though to a lesser degree, as someone privy to hole cards: if our tools and skills are different, then so must be our models, and our answers [2].
Consider an amateur weatherman using an historical 3-day forecast algorithm [3]; light showers are predicted, at a low confidence level, for the following morning. The amateur weatherman baulks at the prediction and forecasts a dry day. Later that evening, after submitting his prediction, he listens out for the Markov Amateur-Weathermen Society’s forecast, which utilises a sophisticated 10-day algorithm: a clear day is predicted, with a high level of confidence. The day was indeed free of rain. So the amateur’s forecast was correct, but was the decision? With better tools and more information, the weatherman will, perhaps, claim he would have reached such a forecast anyway; however, contradicting his basic algorithm on a whim is clearly gambling against the odds, a strategy destined, ultimately, to under-perform.
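The point can be made numerically. Here is a toy sketch (the probabilities are invented for illustration): suppose the amateur’s 3-day algorithm is at least calibrated, so that when it says “showers, 60% confidence”, it rains on roughly 60% of such days. Over many such mornings, following the model beats whimsically overriding it, even though the override happens to be right on any given day.

```python
import random

random.seed(0)

P_RAIN = 0.6      # the model's stated confidence on mornings like this (invented)
N_DAYS = 100_000  # number of simulated "shower-forecast" mornings

follow = 0    # days the model's own prediction (rain) was correct
override = 0  # days the whimsical contradiction (dry) was correct
for _ in range(N_DAYS):
    rained = random.random() < P_RAIN  # calibrated: it rains 60% of the time
    if rained:
        follow += 1
    else:
        override += 1

print(f"follow the model:   {follow / N_DAYS:.1%} correct")
print(f"override the model: {override / N_DAYS:.1%} correct")
```

Following the calibrated model is correct about 60% of the time, the override only about 40%: being right once (as the amateur was) says nothing about the quality of the decision rule.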
Of course, inevitably, as we develop, the cart will on occasion precede the horse: we are seduced into reproducing the effects (plays) of better models, so that our model merely appears similar, to kid ourselves we’re improving. Nevertheless, in the poker world one might argue that extrapolating this way, and persisting in decisions that are (by our current model) erroneous, may render a speedier, albeit costlier, progression, at least for some.
To conclude, it is, surely, advantageous to measure our decisions against our interpretation of the information we gathered, not against information that wasn’t gathered, or couldn’t have been. The value in ascertaining the correctness of a decision based on a near utopian level of awareness is far from apparent.
[1] Which of course can edify (and so develop the model), but can also cause regression if applied without the necessary insight, e.g. value-betting weak hands.
[2] Obviously, not on every occasion.
[3] i.e. it uses the previous 3 days’ weather to predict the next.