There are a few words which are used vastly more frequently than they are defined. Discussions using these words can go on for years, if not millennia, generating more heat than light. “Consciousness” is one[1]. So is “Capitalism”.
I’ve noticed another: “Bias”.
Discussions of bias are usually animated by the understanding that bias is bad, rather than any account of what bias is. Into this definitional void spill the assumptions of our egotistical age: that bias is a property of individuals, that it manifests at the point of choice, and as a discrimination on some innocent dimension of another person.
This is the prototype of old-fashioned prejudice: no Irish allowed in the guest house, no Black people in the swimming pool, no women in the university. These are the biases of racism, sexism, etc, which increasingly stand out as social norms have shifted. Individualised responses to the ‘isms’ ask us all to do the work to decontaminate our attitudes, digging deeper and deeper - perhaps into the unconscious - on this quest.
This sense of ‘bias’ is common in social psychology. Groups, and group identities, provide expectations about members of other groups. Signals of group membership trigger expectations. Bias, in this sense, is socially rich, emotionally “hot”. And unambiguously bad.
From cognitive psychology - the branch of psychology concerned with memory, reasoning and judgement - we get a slightly different version of bias. This is the bias of the “judgement and decision making” tradition, the research programme of “Thinking Fast and Slow”, which later informed behavioural economics, nudge and popular accounts like “Predictably Irrational”. Bias, in this tradition, has become an ever-growing family of biases: the sunk-cost fallacy, peak-end rule, loss aversion, anchoring, the halo effect, and on and on.
The biases of cognitive psychology are still individual, but they are framed as information processing choices, not social choices. It is also more ambiguous whether cognitive biases are good or bad. Researchers in the area emphasise that they are adaptive - shortcuts and assumptions which are necessary given the uncertainty and urgency of real-world decisions, part of the toolkit that allows us to be smart, flexible decision-makers in a confusing world.
And yet, the odour of badness still clings to cognitive bias.
Part of this is because the method cognitive psychology uses to identify biases relies on carefully constructing scenarios in which our choices can be shown to be mistaken or inconsistent. This focus led one commentator to describe Kahneman’s account of human nature as “we basically run around failing all the time”.
The truth of this caricature grows as the original research is translated into other areas: business, popular media, TED talks and self-help books. Biases are fun because they are mistakes, and talking about them dangles the promise that we can cleanse ourselves of these impurities (or, perhaps, identify which of our fellows carries too large a stain and cut them out). The title of Kahneman’s book was Thinking Fast and Slow. What is often received of that book is that Fast (= biased) thinking is the problem. If only we could slow down and let the rational part of our minds have control, rather than the biased, quick part.
There is a deep reason for the implication of error and failure, stemming from the way bias is defined in cognitive psychology. The identification of bias must necessarily assume a standard against which human judgement fails. Many of the original studies of the field used principles of logic or probability, which at first seem unimpeachable as standards we ought to follow.
As an example, take the Gambler’s Fallacy: if the tosses of a coin are independent, and I throw Heads, Heads, Heads and Heads again, what is the chance that the next throw will be Tails? The correct answer is that the chance is 50%, just as it was for all the previous throws and will be for all the subsequent throws. Each throw has the same odds, regardless of the outcome of other throws. Against this answer, our intuition that a Tails is more likely following a string of Heads shows up as a bias.
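The arithmetic here is easy to check with a short simulation (a throwaway sketch, not anything from the original research): generate many runs of five fair tosses, keep only those beginning with four Heads, and see how often the fifth toss comes up Tails.

```python
import random

random.seed(42)

# Simulate many sequences of 5 fair-coin tosses and look only at those
# that begin with four Heads. If tosses are independent, the fifth toss
# should still be Tails about half the time.
streaks = 0
tails_after_streak = 0
for _ in range(200_000):
    tosses = [random.choice("HT") for _ in range(5)]
    if tosses[:4] == ["H", "H", "H", "H"]:
        streaks += 1
        if tosses[4] == "T":
            tails_after_streak += 1

print(f"Sequences starting HHHH: {streaks}")
print(f"P(Tails on toss 5) given HHHH: {tails_after_streak / streaks:.3f}")
```

The conditional proportion hovers around 0.5, however long the preceding streak: the coin has no memory.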
The question, though, assumes more than the law of probability. Any real world choice is defined by more than the rules of logic and probability, even as those rules remain absolutely true in abstract. Additional assumptions need to be made to declare that the rule, and only the rule, of independence applies. We must assume that the coin is fair, that the scenario isn’t part of a con, that the description given by the researcher, even, is honest and complete. Against this, we might reasonably ask, how often are events truly independent in real life? And, if two people saw 9 heads in a row and one insisted the odds of Heads on throw 10 were still 50/50, who would you think was the smarter of the two?
As the family of biases grows, clarity diminishes about exactly which rules define the correct choice, and how. Some rules must be assumed in order to define what constitutes a bias. The focus of research is often on the errors made, rather than the standard against which they are judged, allowing researchers to smuggle in assumptions about what defines correct behaviour.
Research on cognitive bias thus brings to our understanding of human psychology both a restricted view of optimal behaviour and an implication that we are all, somehow, mysteriously and inevitably, failing against impossible standards. Researchers don’t say this out loud for the most part; you have to pick it up from the way the research is recycled and deployed in common discourse.
The upshot is that our discussion of bias is often limited by the dominance of these two accounts of bias from social and cognitive psychology. There are various ways we can expand our account of bias - beyond being a property of individual choice, to a property of systems or structures; beyond being the result of arbitrary preferences or tastes; and beyond being indelibly bad. This last point is important, because if you view bias as a contaminant which must be removed, you condemn yourself to try to purify human decision making, a hopeless task which is only possible in the same abstract realm in which laws of logic and probability are defined.
There is another tradition in the psychology of decision making, one which offers an alternative account of bias.
Signal Detection Theory evolved out of wartime studies of radar operators. The Theory characterises decisions as detection or discrimination problems. Faced with uncertain information (fuzzy green blobs in the case of radar operators) the task is to correctly judge when something is there (for radar, an airborne threat). This characterisation has far wider application than just radar: whether you are judging if a food is safe to eat, or a colleague safe to trust, if a news story is true or if the sound you just heard was your name being called (or not), signal detection theory applies.
Signal Detection Theory identifies four possible outcomes for a decision, defined by the truth of the world and the outcome of one’s judgement. There are exactly two ways of being right:
When something is true, and you say it is true: Hit
When something is false, and you say it is false: Correct Rejection
And exactly two ways of being wrong:
When something is false and you say it is true: False Alarm
When something is true and you say it is false: Miss
A fundamental insight is that the possible errors trade off against each other. You can avoid most false alarms, if you accept more misses, and vice versa.
Following from this, any decision maker can be characterised by two parameters - their ability to discriminate (tell true from false) but also their bias (their leaning towards one or other of the two possible error types). Crucially, it is not possible to judge ability to discriminate by looking at only successes. You might correctly call 100% of inbound fighter planes (or spot 100% of lies told to you, or detect 100% of poisoned food), but if you also call 100% of random blobs planes (and 100% of honest statements lies, and 100% of safe meals poisoned) then you are not brilliant at discriminating, you just have an extreme prejudice away from misses and towards false alarms.
To successfully compare different decision-makers you need to estimate both their ability to discriminate and their bias. Each decision maker has a bias - they have to have a bias - which reflects the balance they have chosen between making the two possible mistakes.
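The standard way of separating these two parameters (in the equal-variance Gaussian version of Signal Detection Theory) is to convert the hit rate and false alarm rate to z-scores: sensitivity d′ is their difference, and the bias (criterion c) is minus half their sum. A minimal sketch, with made-up counts for two hypothetical radar operators:

```python
from statistics import NormalDist

def dprime_and_bias(hits, misses, false_alarms, correct_rejections):
    """Estimate discrimination ability (d') and bias (criterion c)
    from the four Signal Detection Theory outcome counts."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    d_prime = z(hit_rate) - z(fa_rate)     # sensitivity: true vs false
    c = -0.5 * (z(hit_rate) + z(fa_rate))  # bias: 0 = neutral,
                                           # negative = leaning towards "yes"
    return d_prime, c

# Two operators with identical hit rates but different error profiles:
# the second makes far fewer false alarms, so discriminates better.
print(dprime_and_bias(hits=90, misses=10, false_alarms=40, correct_rejections=60))
print(dprime_and_bias(hits=90, misses=10, false_alarms=10, correct_rejections=90))
```

Note how the hit rate alone (90% for both) cannot tell the two apart; only the pairing with the false alarm rate reveals who is discriminating and who is merely trigger-happy.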
What’s the optimum bias? Well that depends on your beliefs about the prior likelihood of the possible outcomes, and on the costs and benefits of the different classes of mistake. You can see this most clearly in the example of a fire alarm - a simple decision making device which discriminates between two cases: fire or no fire.
If the alarm sounds when there is no fire (a false alarm), it is annoying. If the alarm fails to sound when there truly is a fire (a miss), then people may burn to death. The error cost-benefits are not symmetric, making it clear why false alarms are so common. We’ve all cursed alarms for going off and reassured ourselves “it’s never a real fire”, but this apparent unreliability is actually a feature of the bias-tuning which is designed to avoid, at the cost of more false alarms, those fatal misses when there really is a fire but the alarm doesn’t sound.
Considering fire alarms shows a sense in which bias-free isn’t an option. If you build an alarm you have to adjust the sensitivity so it is tuned somewhere between erring on the side of sounding when there is no fire versus erring on the side of not sounding when there is a fire. This tuning is the bias. Your understanding of the costs and benefits of different errors might make you adjust the bias, but there is simply no option in which the system has no bias at all (even a bias of zero is a bias, incorporating an assumption that the two kinds of errors are equally costly).
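This tuning can be made concrete. In the sketch below (all numbers hypothetical), a toy alarm reads a noisy “smoke level” and sounds whenever the reading crosses a threshold; sweeping the threshold and totting up expected costs shows where the cost-minimising bias sits.

```python
from statistics import NormalDist

# A toy fire alarm (all numbers hypothetical): the sensor reads a noisy
# "smoke level", distributed Normal(0, 1) when there is no fire and
# Normal(2, 1) when there is. The alarm sounds above a threshold.
no_fire = NormalDist(0, 1)
fire = NormalDist(2, 1)

p_fire = 0.001          # fires are rare
cost_false_alarm = 1    # an annoyance
cost_miss = 10_000      # potentially fatal

def expected_cost(threshold):
    p_false_alarm = 1 - no_fire.cdf(threshold)  # sounds, no fire
    p_miss = fire.cdf(threshold)                # silent, fire
    return ((1 - p_fire) * p_false_alarm * cost_false_alarm
            + p_fire * p_miss * cost_miss)

# Sweep candidate thresholds and pick the one with lowest expected cost.
thresholds = [i / 100 for i in range(-300, 500)]
best = min(thresholds, key=expected_cost)
print(f"Cost-minimising threshold: {best:.2f}")
print(f"P(false alarm) at that threshold: {1 - no_fire.cdf(best):.3f}")
```

With these made-up numbers the optimal threshold sits so low that most soundings are false alarms - exactly the “annoying but safe” tuning described above. Change the costs or the prior and the optimal bias moves.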
The attraction of the Signal Detection Theory account of bias is that it demands you understand bias as both inevitable and as the outcome of a trade-off. Perfect decisions don’t exist, so you have to pick a holding position. This encourages us to look outside of individual decision and the individual decision maker, beyond individual errors, to try to understand the unseen (the classes of error avoided, the perceived costs and benefits from different kinds of errors).
This perspective also discourages us from imagining there is a bias-free decision process, or a bias-free decision maker. This recognition is a first step to addressing biased decision making in a principled way - making it fairer, or more consistent.
Bringing this back to discussions which use the word bias without defining it, we have a number of questions we can ask which might move the conversation on.
Is bias used to mean “bad influence” (social model)? If so, you might usefully ask where the influence is: is it in the decision-makers at all, or in the information provided to them? Too much psychology (and I speak as a psychologist) can lead us to overestimate how often bad outcomes are due to biased preferences of decision-makers.
Is bias used to mean “flawed” or “irrational” decision making (cognitive model)? Then we might usefully ask what standard is being applied, whether it should be, and, even if the ideal standard is appropriate, whether it is possible for any reasonable person to meet it.
We all aspire to error-free decision making. It’s a good aim, but dealing realistically with errors means deflating some of the moral connotations and recognising their inevitability. When we come to think about the trade-offs inherent in different classes of mistakes, Signal Detection Theory is an excellent starting point, and will help us think about how costs and benefits feed into what looks like bias.
This newsletter is free for everyone to read and always will be. To support my writing you can upgrade to a paid subscription (more on why here).
Keep reading for the references and other things I’ve been thinking about
§
I wrote about Signal Detection Theory in another context recently:
This paper shows the benefit you get from full formal (computational) modelling of decision processes (including estimating discrimination ability and bias):
Stafford, T., Pirrone, A., Croucher, M., and Krystalli, A. (2020). Quantifying the benefits of using decision models with response time and accuracy data. Behaviour Research Methods, 52, 2142–2155. doi.org/10.3758/s13428-020-01372-w. [web version here].
This paper was never published in a journal, but says a lot more about bias and strategies for dealing with bias:
Stafford, T., Holroyd, J., and Scaife, R. (2018). Confronting bias in judging: A framework for addressing psychological biases in decision making. https://doi.org/10.31234/osf.io/nzskm
The observation that bias is often not defined, or badly defined, is not a novel one. This paper from two RoRI colleagues takes a causal inference approach: Causal foundations of bias, disparity and fairness. Other approaches include legal definitions of bias (which, from my limited understanding, shade closer to something like ‘conflict of interest’), and, from what I’ve overheard, there are approaches within economics which also unpack the various possible meanings (which I’d love to hear more about if anyone knows a good reference).
Ian McKellen performing The Strangers' Case.
Everything is in Shakespeare, it seems.
…And this calls out across 400 years to the current moment:
"And that you sit as kings in your desires,
Authority quite silent by your brawl,
And you in ruff of your opinions clothed;
What had you got? I’ll tell you. You had taught
How insolence and strong hand should prevail,
How order should be quelled; and by this pattern
Not one of you should live an aged man
For other ruffians, as their fancies wrought,
With self same hand, self reasons, and self right,
Would shark on you, and men like ravenous fishes
Would feed on one another."
More from Kottke
Xikipedia - a version of Wikipedia you can doomscroll
You can install as an app (load page and click “install”)
… And finally
via @mhoye
END
Comments? Feedback? Pick a card, any card? I am tom@idiolect.org.uk and on Mastodon at @tomstafford@mastodon.online
[1] Stuart Sutherland wrote in The International Dictionary of Psychology: “Consciousness is a fascinating but elusive phenomenon; it is impossible to specify what it is, what it does, or why it evolved. Nothing worth reading has been written on it.”
[Image: screenshots of a Grok “fact-checked” encyclopedia entry for the computer scientist Kenneth Church, whose sidebar includes a fabricated “Death and legacy” section claiming he died of COVID-19 in 2020.]