Thoughts: The Up Side of Down by Megan McArdle
Megan McArdle’s The Up Side of Down is a good survey of the literature on the science of failing, resilience, and success. Books of this sort, written for popular consumption, generally suffer from the three-ring-binder effect: they are loose collections of research and interviews, organized by theme. In some cases, the research has been presented in other contexts, both by the researchers themselves (Daniel Gilbert and Jonathan Haidt) and by other popularizers of behavioral science.
Luckily, Ms. McArdle’s approach is disarming and charmingly self-deprecating. Her binder, as it were, ties her own failures to the research she presents. Her failure to find a job, her inability to move past a relationship, and her experience combating 9/11 Truthers provide a human face to the statistics of neuropsychology research. Most importantly, she demonstrates the power inherent in recognizing when a path is failing and taking action to shut it down. Loss aversion supplies a motive for maintaining the status quo, and she explores variations on this theme.
As with most popular science books, there is a hint of the prescriptive in hers. Ms. McArdle supports a more generous approach to mistakes and wishes that political forces would stop moving toward harsher punishment for them.
Despite the compelling theme, and one that I tend to agree with, I find these books shallow. To Ms. McArdle’s credit, I would absolutely love for her to expand on just about every chapter. As it is, she combines general lessons learned from investigators and from her own life, and it is effective. Take it for what you will; if you want more, follow up on her bibliography.
I found useful lessons, especially the emphasis on giving kids a safe place to fail. Ever since I became aware of the research on the counterproductive effect of praising intelligence rather than effort (actually, praising anything aside from effort), I’ve focused on the process. (There’s even new research suggesting that merely visualizing directions – up versus down, flying versus digging – might affect cognitive tasks due to the emotionality of the visualization.) It’s actually nicer and easier in some ways, because it gives adults cues to talk about specific aspects of the child’s project (“Oooh! I like how you did the trees and arranged them according to perspective!”).
Ms. McArdle’s book reminds us that it is not only OK but necessary to identify faults – especially when kids are younger and the stakes are lower, so they can immediately see where they went wrong and correct it. The key is to be gentle enough to call attention to the mistake without dwelling on it. Make it feel like a bump; comment and move on.
Although I wish Ms. McArdle had spent more time developing the idea and presenting more research, I agree with her that the ability to remain calm and not focus on the emotional sting of shame and failure is absolutely crucial to moving on. Perhaps becoming accustomed to the iterative fail/identify/improve process will help desensitize kids to the emotional turmoil of being wrong, so they eventually focus on the substance of criticisms.
I happen to think there’s a lot to learn from Ms. McArdle’s book, and I can draw many parallels to the process of science. My colleagues and I have joked that we are in an asymmetric relationship: the science has all the power. We work, but our feedback is generally negative. Our advisors and supervisors simply give comments for improvement (ask anyone about the process of writing a grant or manuscript), only for us to receive more feedback upon submission – the paper is rejected or doesn’t fit the journal. If accepted provisionally, we get more feedback from reviewers. Grants also get scored, and we receive comments.
But we all understand this is the process. The worst comment on a grant is no comment at all – the grant being so bad that it was not worth the reviewer’s time to suggest improvements.
And of course, a lot of our time is spent dealing with null or opposite results: no change where change is expected; change where stasis is expected; an effect too small, or opposite to what was predicted. And things break and stop working all the time. A lot of these errors come down to the experiments and analysis (perhaps an incorrect baselining or normalization).
But when experiments start pulling together and a paper is eventually accepted, it is exactly like the first sunlight after an arctic winter. The rest of the time, it’s that arctic darkness.
Sorry; do I sound bitter?
I’m sure authors/writers/reporters all have analogous stories. The point is that success is more about attrition and self-selection. The people who thrive and have careers all continue to produce and deal with failures as if they are minor. They integrate criticism, iterate, and improve. So yes, I pretty much buy into Ms. McArdle’s thesis.
One thing I like about the book is that she tackles the distinction between normative errors and accidents. The distinction is important to make, even if the definitions are not necessarily clear cut. Accidents are events that couldn’t really be accounted for in the planning and execution. The operative word is could. Many things can and do happen, but what makes them accidents is that they are coincidental, with the unfortunate victim falling prey to a low-probability event.
Normative errors arise during process and execution, due to missed steps. The operative word here is should: generally, there were a few things that should have been done but weren’t. The two seem separated by degree; I suppose if you find yourself linking a series of events – if only I had walked a few steps quicker or slower, I would have turned the corner and seen the guys backing out with the large pane of glass instead of walking into it – it probably is an accident.
A mistake can probably be traced to something one did or didn’t do, and a compounded mistake just means many people failed down the line. I can see how some readers might want clearer explanations.
But the point of the book is not the mistakes themselves; it is how we recover from them.
Ms. McArdle has put together a rather compelling book. She connects threads in research on attention, motivation, and economics and draws new observations. I especially liked her chapter on tunnel vision (“inattentional blindness”). She starts with a description of Daniel Simons’s and Christopher Chabris’s experiment, in which students count the number of passes a basketball team makes in a video. Afterwards, the students are asked about the number of passes – and whether they saw a person in a gorilla suit walk through the middle of the court, between the players. She segues into an analysis of the Dan Rather/President G.W. Bush National Guard story that cost Mr. Rather his job. Mr. Rather made the mistake of defending his decision, rather than simply working to figure out whether something went wrong.
There was apparently a whole chain of mistakes, but the point is that there is power in simply acknowledging that he could have been at fault. The proper play would have been along the lines of Ira Glass’s handling of Mike Daisey’s Apple story, where Mr. Glass admitted he was wrong and then spent a subsequent hour analyzing the mistakes he and his team had made – while rectifying the original story. A hot-off-the-press example is how Bill Simmons dealt with the Dr. V’s Magical Putter story.
I do hope people read Ms. McArdle’s book. I think she has a talent for providing proper context and tackling the best and most relevant arguments between opposing views (see her chapters on bankruptcy, welfare reform, and moral hazard). For the short time it takes to read the book, I think readers will gain an immeasurable sense of well-being as they learn to love mistakes.