People's judgments can even be influenced by what books they've read lately. It sounds crazy, but I've noticed this in myself and I doubt I'm alone.
If you have just finished reading To Kill a Mockingbird, you're probably more likely to be forgiving. If you've just finished Lord of the Flies, you're more likely to see people as ruthless and to be less empathetic/gentle/forgiving. "It's a dog-eat-dog world."
Reminds me of something I've also noticed about axioms and old adages: they often conflict with each other. They're never true in any absolute sense. All adages are relative - they're designed to keep you away from extremes and rigidity. For instance "shoot first, ask questions later" and "only fools rush in" are both perfectly true and useful, but only some of the time.
Life is hard. Making good decisions is hard.
In my own review of *Noise* I concluded, "I think that the strongest conclusion that one can draw from KSS is that computers will continue to take over for humans in more fields. The project of reducing noise amounts to asking humans to behave more like machines. For that purpose, computers have the edge." https://www.econlib.org/library/columns/y2021/klinghumans.html
I thought this was intriguing but uncertain until I checked back to see who wrote it, and then I viewed it more favorably. Apparently I don't tend to notice who a comment is by unless I stop to comment positively or negatively. You have proven yourself to me in the past, all the way back to the Tech Central Station days, and I have quoted you for almost twenty years.
Sounds like most government jobs should be among the first to be AI-automated away.
As I read your review of Daniel Kahneman’s book on flaws in human judgement, Rob, I thought about the “noise” that comment-platform features create. Often the comments seem to be like sheep, blindly following the crowd off a cliff! But more seriously . . .
Becoming aware of the bias or noise that can cause human error is helpful. And this book seems to do a good job of building a case for AI: algorithms/rules can predict the future better than humans can because they are less biased and less distracted by noise.
That’s great for maintaining the status quo, but isn’t human progress about breaking rules? Historic human breakthroughs are surprising. They couldn’t be predicted by rules of thumb because they made those rules moot; they transcended them.
PS Happy Easter, or the celebration of the miracle of your choice.
Thank you for identifying this book. Just added it to my Kindle list... A VERY LONG LIST!
In my long corporate career, where I ran the IT department and the corporate project office, boy oh boy did I see this general theory confirmed... and still do today. When I get teams together to discuss issues, policies, and problems, and seek their brainpower to help build a more comprehensive and hopefully optimized solution (because I, as the primary decision maker, don't hold the collective knowledge and judgement of the room)... I am usually disappointed with the results.
People in these group settings are beset with strong emotions over status, appearance, and reputation. It takes a game plan to knock that crap out of their heads and get them into a process where they feel safe speaking their minds. But too much safety ends up producing junk input, as the conversation gets hijacked by some of the idiots in the room.
Now scale the group up with collaboration technology tools, and the mess explodes.
We thought that the Internet would be the great equalizer by opening up the world to information access... and instead it has fueled mass psychosis and tribal groupthink.
One of the reasons why I *imagine* this book didn't produce as much interest (beyond people who are immediately following the research) is a shift in the general mood of "the audience" (readers in the population at large). I somehow intuitively believe that until not all that long ago, maybe the early 2010s, people generally would have affirmed a statement along the lines of "the more we know about how people think, the better we can act." These days, I sense that people have become much more aware of the dangers of "reverse psychology," by which their own behavior becomes subjected to the control desires of "Nudgers": people who understand psychology well enough to get people to do what *they* want, rather than what the people themselves want.

For me, the initial popularity of Nudge, followed by a sharp (and difficult for the seemingly benevolent mainstream of intellectuals to swallow) rebuke of that approach (Brexit and Trump's success come to mind), is the consequence of average people developing some kind of "mental immune system" against "the Nudge." If other people can "decode" my behavior and present me with stimuli that I have little freedom but to respond to according to their values, then maybe I start to think far less favorably about their very ability to decode my behavior...
Separate comment. Kahneman I mostly trust, but Sunstein's Nudge did a lot of comparing apples to oranges to get to the conclusion he wanted about nudging.
I can see the aversion to adopting an algorithm instead of one's own judgement. But if you told me "This is the algorithm we have built out of your previous judgements, and we were able to go back and eliminate what we think were your worst days, when your appraisals were more random and had more insulting language" it might be different. If I looked at this and thought "Oh, this is above average AVI," I might sign on. If the algorithm was Bayesian and kept ratcheting up in ways I found convincing, that it more subtly captured bad days and had made a start on cracking the code for what seemed to be my wisest days, it might not be long before I gave it blanket approval.
Then all we would have to worry about would be who is minding the store. A different problem, but likely worse.
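To make the "algorithm built out of your previous judgements" idea concrete, here is a minimal sketch of what such a thing could look like: past appraisals are down-weighted on flagged bad days, and each new appraisal gets shrunk toward that cleaned-up baseline with a simple Bayesian update. The scores, weights, and priors are all invented for illustration; nothing here comes from the book.

```python
# Toy sketch (illustrative assumptions throughout): learn a reviewer's typical
# appraisal from their history, down-weighting flagged "bad days", then shrink
# each new appraisal toward that baseline with a conjugate normal update.

from dataclasses import dataclass

@dataclass
class Appraisal:
    score: float      # e.g. a 1-10 performance rating
    bad_day: bool     # flagged as one of the reviewer's noisier days

def fit_baseline(history: list[Appraisal], bad_day_weight: float = 0.2) -> float:
    """Weighted mean of past scores; bad-day appraisals count for less."""
    num = sum((bad_day_weight if a.bad_day else 1.0) * a.score for a in history)
    den = sum((bad_day_weight if a.bad_day else 1.0) for a in history)
    return num / den

def bayes_update(prior_mean: float, prior_var: float,
                 obs: float, obs_var: float) -> tuple[float, float]:
    """Normal-normal update: pull the new score toward the prior."""
    k = prior_var / (prior_var + obs_var)      # how much to trust the new score
    return prior_mean + k * (obs - prior_mean), (1 - k) * prior_var

history = [Appraisal(7.0, False), Appraisal(4.0, True), Appraisal(7.5, False)]
mean, var = fit_baseline(history), 1.0         # start from the cleaned-up history
for new_score in [6.8, 7.2, 3.9]:              # the 3.9 gets pulled back up
    mean, var = bayes_update(mean, var, new_score, obs_var=2.0)
    print(f"adjusted appraisal: {mean:.2f}")
```

The appeal and the worry are the same thing: the rule is consistent precisely because someone else chose the weights, which is the minding-the-store problem again.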
Fascinating and provocative.
I really enjoyed this review, thank you Rob. Whilst I respect the criticisms of Kahneman's studies, I still respect all that he has taught us about decision-making - much more insightful and helpful than Nudge theory by Thaler. Nudge has been exploited most terribly during Covid-19, and as a result I've lost all respect for behavioural science, as much as I was fascinated by it previously; actually, I am ashamed of it. But cognitive deficits are always useful to know about - those within us and those around us.
"If you tell people they can no longer rely on their gut feelings and that they must follow a checklist or abide by an algorithm, they will respond with resistance because such policies inhibit their ability to pursue their own hidden agendas. "
Algorithms have their own biases built in by virtue of training data and the way the code is written, but bestow the added benefit of freeing the controllers of that algorithm from accountability for those biases. Nothing is perfect.
Nowadays we are all bombarded with so much more ‘awareness’ about considerations when making decisions that perhaps this accounts for why models/algorithms with a few simple rules have done better in the recent past. I have wondered whether the reason anxiety and depression are so much more prevalent in our age is that we now weigh so many factors, and if you are like me, sometimes you just throw up your hands and make a gut decision because your head hurts. And then worry about it.

It used to be (and would be better, in my husband’s view): you steal a horse, and this afternoon you are hanging by your neck from a tree limb. A very simple algorithm. Horse thieves knew this but did it anyhow; the party doing the hanging did the job, dusted their hands off, went to the saloon for a few drinks and hands of poker, and then slept like a baby. Harsh, maybe. But the lines were clear. Now we worry about whether the thief had a pa who beat him, whether he stole the horse because he had been bullied in school, the fact that he only beat up the person he stole it from rather than shooting them, the horse’s mental health (especially if it was the vehicle for hanging the thief), etc.

Complexity increases noise, it seems to me. And we are swamped by complexity and noise. And I'm not sure models do a better job when modeling complexity; climate and the weather come to mind. Perhaps the takeaway is knowing which situations call for simple and which for complex, and accepting that tradeoffs are a feature, not a bug, of decisions: you can't make a perfect decision that doesn't prioritize one or a few things and discount a slew of others.
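On the point about models with a few simple rules doing better, here is a toy illustration (my own made-up numbers, not anything from the book): a fixed equal-weights rule is somewhat wrong but perfectly consistent, while simulated judges use the same cues with weights that wobble from case to case, and on average the wobble costs more than the rule's crudeness.

```python
# Toy illustration of "a few simple rules beat noisy judgment" (made-up numbers).
# A fixed equal-weights rule has no noise; simulated judges use the same cues
# but with weights that wobble from case to case.

import random
random.seed(0)

def true_outcome(cues):                  # the quantity everyone is predicting
    return 0.5 * cues[0] + 0.3 * cues[1] + 0.2 * cues[2]

def simple_rule(cues):                   # crude fixed weights, perfectly consistent
    return sum(cues) / len(cues)

def human_judge(cues, wobble=0.6):       # same cues, inconsistent weighting
    w = [max(0.0, 1/3 + random.gauss(0, wobble)) for _ in cues]
    total = sum(w) or 1.0
    return sum(wi * ci for wi, ci in zip(w, cues)) / total

cases = [[random.uniform(0, 10) for _ in range(3)] for _ in range(500)]
rule_err = sum(abs(simple_rule(c) - true_outcome(c)) for c in cases) / len(cases)
judge_err = sum(abs(human_judge(c) - true_outcome(c)) for c in cases) / len(cases)
print(f"simple-rule mean error: {rule_err:.2f}")
print(f"noisy-judge mean error: {judge_err:.2f}")   # typically the larger of the two
```

Averaging many judges would also wash out the wobble, which is one reason aggregating judgments gets recommended alongside simple rules.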
Thanks Rob for such a great in-depth book review. Noise has been added to my reading list.
As a pharmacist, I agree with noise affecting medical decision making. This is why many medical associations use diagnosis and treatment algorithms created from evidence-based research. They simplify the process and help eliminate noise.
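For anyone who hasn't seen one, the structure is easy to show in miniature: a published algorithm is just a fixed branching rule, so two clinicians looking at the same chart can't drift apart. The sketch below is entirely hypothetical (invented thresholds and wording, not clinical guidance of any kind); it only shows the shape of such a rule.

```python
# Purely hypothetical treatment algorithm with invented thresholds;
# it shows only the shape of a fixed decision rule, NOT clinical guidance.

def recommend(systolic_bp: int, age: int, on_therapy: bool) -> str:
    """Every clinician applying the same rule to the same chart gets the same answer."""
    if systolic_bp < 130:
        return "no change; recheck at next visit"
    if not on_therapy:
        return "start first-line therapy per guideline"
    if age >= 65:
        return "adjust dose cautiously; schedule early follow-up"
    return "intensify therapy per guideline"

print(recommend(systolic_bp=142, age=58, on_therapy=True))  # -> "intensify therapy per guideline"
```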
Are Republicans harnessing noise to further their fascist agenda? I feel as if Trump supporters are all noise these days. They rely on lies, propaganda, and misinformation that can be easily disproven. Yet, they seem to be lost in the noise, i.e. the madness of crowds?
Very good article indeed. In times of much "noise", an unbiased review of an outstanding book is much needed and appreciated. Two centuries ago, the man who understood this topic was the Marquis de Condorcet. A mathematician, Condorcet published in 1785 his "Essay on the Application of Analysis to the Probability of Majority Decisions", one of his most important works. The best proof of his theory about masses being wrong is his own demise during the French Revolution. More to read on Wikipedia:
https://en.wikipedia.org/wiki/Marquis_de_Condorcet
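That essay contains what is now called the Condorcet jury theorem, and the "masses being wrong" case is its dark half: if each voter is independently right with probability above one half, a majority converges on the truth as the group grows; below one half, it converges on error. A short sketch of the standard formula (my own code, not from the article or the review):

```python
# Condorcet's jury theorem: the chance that a simple majority of n independent
# voters is correct, when each voter is correct with probability p.

from math import comb

def majority_correct(n: int, p: float) -> float:
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for p in (0.6, 0.4):
    print(p, [round(majority_correct(n, p), 3) for n in (1, 11, 101)])
# With p > 0.5 the majority's accuracy climbs toward 1 as n grows;
# with p < 0.5 it falls toward 0: the "masses being wrong" case.
```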
Thanks for the shout-out to the incomparable battleaxe former mayor of Ottawa.