Chapter 5: “Human Error? No, Bad Design”
Summary and Review of “The Design of Everyday Things,” by Don Norman

Chapter 1: THE PSYCHOPATHOLOGY OF EVERYDAY THINGS
Chapter 2: THE PSYCHOLOGY OF EVERYDAY THINGS
Chapter 3: KNOWLEDGE IN THE HEAD AND IN THE WORLD
Chapter 4: KNOWING WHAT TO DO: CONSTRAINTS, DISCOVERABILITY, AND FEEDBACK
Chapter 5: HUMAN ERROR? NO, BAD DESIGN
Chapter 6: DESIGN THINKING
Chapter 7: DESIGN IN THE WORLD OF BUSINESS
Hi & welcome! Thanks for joining in on my journey through “The Design of Everyday Things,” by Don Norman. This series summarizes and reviews each chapter, highlighting important takeaways and asking questions about the content.
Don Norman studies error, and it makes sense why. To be able to design anything, you have to understand what types of errors are made and why humans make them.
I have to wonder why objects are made that aren’t easy to use. I asked a similar question as a software engineer: why are we writing code that is impossible for others to easily read and fix (should the inevitable occur and a bug crops up or regression tests fail)? My cry for simple code was applauded in theory, but in practice, my seniors and peers preferred their complicated programming and beamed at it as if indecipherability were a value-added feature.
Elegant, simple-to-understand products and processes are difficult to create, but it’s our responsibility to pursue this ideal. Maintainable code, usable products, and intuitive processes are financially and practically more efficient.
Section 1: Understanding Why There is Error
While physical obstructions or limitations are readily accepted and fixed by engineers and mechanics, mental errors made by humans are largely blamed on, you guessed it, the human. Not only do we blame the human for the error, but we also don’t change the cause of the error, because we don’t understand why it occurs (why would we, if we’ve already blamed the person’s competence?).
We’ve spent time with this book, and one thing we know from chapter 3 is that humans automate their actions as much as possible. While we float on autopilot, any task or process that doesn’t feel natural will be difficult to perform. It’s not a human competence issue, says Don Norman. It’s bad design.
Root Cause Analysis
Instead of ignoring these errors by blaming humans, Don Norman discusses in this section “root cause analysis” whereby we explore an error, continually investigating until we find a foundational cause for the error. He says it’s often misused and the root cause is often found to be a person (still blaming humans!) instead of a process or product flaw. Furthermore, it’s rarely *one* issue that causes a mistake.
The Five Whys
Popularized in Japan (specifically at Toyota), the “Five Whys” is an investigative practice. When a cause is found, ask why. Then ask why again. And again. Keep going as far as you need: five is just a suggestion. Stopping too soon when investigating the root cause can leave the real problem in place and lead to huge mistakes.
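As a software engineer I can’t resist sketching this drill-down as code. Here’s a toy model (entirely my own construction, not from the book) where each answer becomes the subject of the next “why?” until no deeper cause is known:

```python
# A toy sketch of the "Five Whys" drill-down. The function name, the
# cause map, and the outage scenario below are all invented examples.
def five_whys(problem, cause_of, max_depth=5):
    """Walk a chain of causes; five is a suggestion, not a hard stop."""
    chain = [problem]
    current = problem
    for _ in range(max_depth):
        cause = cause_of.get(current)
        if cause is None:   # no deeper cause known: stop here
            break
        chain.append(cause)
        current = cause
    return chain

# Example: a server outage traced past the first "obvious" cause.
causes = {
    "server crashed": "disk filled up",
    "disk filled up": "logs were never rotated",
    "logs were never rotated": "rotation cron job silently failed",
    "rotation cron job silently failed": "no alerting on cron failures",
}
print(five_whys("server crashed", causes))
```

Notice that stopping at “disk filled up” (the first why) would fix the symptom and miss the missing alerting entirely.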
“When people err, change the system so that type of error will be reduced or eliminated. When complete elimination is not possible, redesign to reduce the impact” — Don Norman
A major source of error is stress and societal/cultural pressures. Asking people to repetitively perform unnatural processes that can err while under stress is a recipe for disaster.
Section 2: Deliberate Violations
These societal/cultural pressures creep up often in this book as sources of stress and error. One example Don Norman lays out in this section is how we’re inclined to take risks. Not only does taking risks (cutting corners, for instance) create an immediate reward (shortened workload and problem-solving vibes), but also it can provide long-term rewards from managers who appreciate the ingenuity.
This behavior is all raises and praises until the risk causes an accident.
Section 3: Two Types of Errors: Slips and Mistakes
These are the two categories James Reason and Don Norman use to classify error:
Slips
- result from not performing the intended action
- action-based — “wrong action is performed”
- memory-lapse — action is not performed or not evaluated
Mistakes
- result from the wrong goal being set
- rule-based — a person assesses the situation but applies the wrong rule to fix it
- knowledge-based — the original problem is misdiagnosed due to a lack of knowledge
- memory-lapse — the goal, the plan, or the evaluation are forgotten
Error and the Seven Stages of Action
Flashback to the Seven Stages of Action!
Goal → Plan → Specify → Perform → Perceive → Interpret → Compare
Slips occur at the performance, perception, or interpretation stages. Don Norman calls these the “lower levels of cognition”. Slips occur because of subconscious interference.
Mistakes occur at the goal setting, planning, or comparison stages. “Higher levels of cognition”. Mistakes occur because of intentional decisions.
Section 4: The Classification of Slips
“An interesting property of slips is that, paradoxically, they tend to occur more frequently to skilled people than to novices.” — Don Norman
Capture Slips
- occurs when you start one action that has a similar starting pattern as another action but end up performing the end of the *other* action instead of the one you originally intended
- when you walk down the hall to get something in the far bedroom but instead turn into your bedroom confused and unsure what you were intending to do
- the two action sequences must share an identical stretch of actions for one to “capture” the other
- are also a memory-lapse slip
Designer’s takeaway: don’t create processes with similar start sequences.
Description Similarity Slips
- correct action…wrong object
- the wrong object shares a distinguishing descriptive trait with the right object
Designer’s takeaway: controls should look vastly different from one another.
Memory-Lapse Slips
- usually caused by interruptions/distractions
- can occur at any stage of action
Designer’s takeaway: Simplify, create reminders, and use forcing functions (from Chapter 4). If interruptions are an inevitable part of the design, incorporate multiple sensory modalities to reduce the intensity of interruption.
Mode-Error Slips
- when an object has one set of controls that operate differently in different states
- a single set of controls feels simple, but the hidden complexity of multiple states is what causes the errors
- more likely to occur if the state of the object is not visibly communicated
Designer’s takeaway: Avoid modes or states if at all possible. If it’s not possible to avoid modes, make the mode/state very obvious at all times.
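To make the mode-error trap concrete, here’s a hypothetical sketch (the thermostat, its methods, and its behavior are my invention, not an example from the book) of one control that means different things in different states, with feedback that restates the current mode on every interaction:

```python
# Hypothetical sketch: a thermostat whose one dial behaves differently
# per mode. Design lesson: surface the current mode with every response.
class Thermostat:
    MODES = ("heat", "cool")

    def __init__(self):
        self.mode = "heat"
        self.target = 20

    def set_mode(self, mode):
        if mode not in self.MODES:
            raise ValueError(f"unknown mode: {mode}")
        self.mode = mode
        return f"MODE: {mode.upper()}"  # announce the state change loudly

    def turn_dial(self, amount):
        # Same control, different meaning per mode: the mode-error trap.
        self.target += amount if self.mode == "heat" else -amount
        # Feedback always restates the mode, so a slip is caught quickly.
        return f"[{self.mode.upper()}] target is now {self.target}C"
```

The point of the `[HEAT]`/`[COOL]` prefix is exactly the takeaway above: if the mode can’t be removed, it should be impossible to miss.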
Section 5: The Classification of Mistakes
“In mistakes, a person makes a poor decision, misclassifies a situation, or fails to take all the relevant factors into account.” — Don Norman
Rule-Based Mistakes
- The situation is misdiagnosed, so an established rule that doesn’t fit the problem is applied
- The situation is properly diagnosed, but the rule invoked to fix the problem is itself faulty
- The situation is properly diagnosed and the right rule is invoked, but the outcome is evaluated incorrectly
Designer’s takeaway: Communicate the status of the object clearly so the correct information for proper diagnosis is possible. If possible, provide a diagnosis or alternative causes of a problem when it arises.
Knowledge-Based Mistakes
- when the problem encountered is novel and requires active problem solving
- are slower to solve
Designer’s takeaway: Clearly presenting the status of the object helps here just as it does with rule-based mistakes. Don Norman suggests collaborative AI systems or robust documentation to help with diagnosis.
Memory-Lapse Mistakes
- like slips, but memory-lapse mistakes occur specifically when the plan or goal has been forgotten.
Designer’s takeaway: Always assume your user will be distracted or interrupted and plan (design) accordingly.
Section 6: Social and Institutional Pressures
In Chapter 4 we discussed cultural constraints, specifically conventions, that limit human behavior. Here we explore how social pressures and cultural norms (like respecting managers or showing up on time) can lead to mistakes. This type of error cause is also difficult to sort out because individuals might not want to admit to them (or they just aren’t aware of them).
A favorite phrase in my household is “safety first!” and I think Don Norman would agree: praising and supporting the importance of safety above all else is the best way to ensure people make the right decisions that keep everyone safe.
Checklists
Checklists!!
Don suggests checklists get a bad rap. Medical professionals tend to avoid them because they feel checklists are beneath their expertise, he writes. But checklists are an excellent way to ensure all the important steps of a sequence are carried out. I like ordered checklists, personally, but an unordered checklist is okay as long as there’s a reminder to complete steps that were skipped.
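That “reminder for skipped steps” is easy to picture in code. A minimal sketch (my own construction; the class and the preflight example are invented) of an unordered checklist that surfaces anything skipped before sign-off:

```python
# Minimal unordered checklist: steps can be completed in any order,
# but finishing is blocked until skipped steps are acknowledged.
class Checklist:
    def __init__(self, steps):
        self.done = {step: False for step in steps}

    def complete(self, step):
        self.done[step] = True

    def remaining(self):
        # The reminder: anything skipped surfaces here before sign-off.
        return [s for s, ok in self.done.items() if not ok]

    def finish(self):
        missing = self.remaining()
        if missing:
            raise RuntimeError(f"skipped steps: {missing}")
        return "all steps complete"
```

The `finish()` gate is a small forcing function: you can reorder steps freely, but you can’t silently omit one.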
Section 7: Reporting Error
Reporting errors is a difficult task. The cultural stigma of making mistakes makes admitting to them and blaming others for them highly delicate.
This is another section that points to workplace culture as the cure for attitudes about error: we have to create safe spaces for people to feel comfortable owning up to and reporting errors.
- Jidoka — a Japanese practice at Toyota where workers are expected to report any error they spot; it’s the individuals who fail to report an error who receive punishment.
- Poka-yoke — a Japanese method at Toyota: adding signifiers, mapping, constraints, and forcing functions to any operation to avoid error
- NASA’s aviation safety reporting system — made self-reporting errors semi-anonymous!
Section 8: Detecting Errors
In general, slips are easier to detect than mistakes because the feedback from the action (if there *is* feedback) will indicate the wrong action was taken.
But when our plan is wrong from the start, it’s trickier to notice or acknowledge the error. Think of a time when you or someone you know put together a lot of information to diagnose a bug or issue with something only to discover you were wrong (despite all evidence to the contrary).
I’m thinking of conspiracy theories as I type this, but I know it’s happened to me when debugging software.
Explaining Away Mistakes
As humans, we have a tendency to explain away mistakes as not as serious as they might really be. Pile on several mistakes that are discredited or explained away and you’re heading to a potentially large mistake.
In Hindsight, Events Seem Logical
Only after an error has occurred do we look back at the series of events and think, “this was completely obvious.” In the present moment, however, how do we know what information is the most vital to make the logical connections needed to foresee an error? We can’t. There are simply too many competing factors.
Section 9: Designing for Error
Design Lessons from the Study of Error
- Use constraints to avoid error! Physical, logical, semantic, and cultural constraints can be intelligently implemented to avoid error
- Undo! Make actions reversible whenever possible
- Clear, salient messaging about actions that are permanent (or perhaps make no actions permanent?)
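The “make actions reversible” lesson maps directly onto the undo stack every software engineer has written. A sketch (names here are illustrative, not from the book):

```python
# Sketch of "make actions reversible": snapshot state before each
# change so any action can be rolled back.
class Editor:
    def __init__(self):
        self.text = ""
        self.history = []  # snapshots taken before each change

    def insert(self, s):
        self.history.append(self.text)
        self.text += s

    def undo(self):
        if self.history:  # reversible whenever possible
            self.text = self.history.pop()
```

When an action genuinely can’t be undone, that’s exactly where the clear, salient messaging in the last bullet has to carry the load.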
Sensibility Checks
This is frequently used in software and application development. When a user is prompted to enter information, and you have the ability to determine whether the entered value is reasonable or sensible… do it.
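A sensibility check goes beyond validating the format: the value parses fine, but is it plausible? A hedged sketch (the function, the thresholds, and the body-temperature scenario are my own invented example):

```python
# A "sensibility check": the input is a syntactically valid number,
# but is it a *reasonable* one? Thresholds below are made-up examples.
def check_body_temp_celsius(value):
    """Flag readings outside the range a living human could have."""
    if not 30.0 <= value <= 45.0:
        return f"Did you really mean {value} C? That is outside the plausible range."
    return "ok"
```

A classic catch: someone types a Fahrenheit reading like 98.6 into a Celsius field. The format is fine; the sensibility check is what saves you.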
Minimizing Slips
The answer is not to find ways to hold the user’s attention. What designers can do is make controls as far apart or as distinct as possible, eliminate modes entirely (or make the current mode as obvious as you can), and provide generous feedback for every action performed.
The Swiss Cheese Model of how Errors Lead to Accidents
James Reason to the fore again, but this time with a metaphor about cheese and a winning argument to think systematically.
The Swiss Cheese Model pictures several slices of Swiss cheese lined up. An accident occurs only when the holes in all the slices line up. In this scenario there isn’t one “root cause” of the error; there are multiple causes. Each slice has holes (opportunities for mistakes), and only when they align does the failure pass all the way through.
Don Norman suggests here adding more cheese slices or starting to close up those holes. More cheese slices are like adding more breakpoints to the system to detect the potential for error.
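The arithmetic behind the metaphor is simple to sketch. If we assume (my simplification, not the book’s) that each defensive layer independently has a hole with some probability, an accident requires every hole to line up at once, so each added slice multiplies the risk down:

```python
# Toy model of the Swiss cheese idea: each layer independently "has a
# hole" with some probability; an accident needs ALL holes to line up.
# The 10% failure rate per layer is an invented illustration.
def accident_probability(hole_probs):
    p = 1.0
    for hole in hole_probs:
        p *= hole  # all holes must align for the failure to pass through
    return p

three_layers = accident_probability([0.1, 0.1, 0.1])       # 1 in 1,000
four_layers = accident_probability([0.1, 0.1, 0.1, 0.1])   # 1 in 10,000
```

Real layers are rarely fully independent, which is why shrinking the holes (better procedures, better design) matters alongside adding slices.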
Section 10: When Good Design isn’t Enough
When People Really are at Fault
It happens. Sometimes people are not trained well enough, sometimes they lie, sometimes they don’t meet physical requirements, and sometimes they are sleep deprived or under the influence of substances that impair their ability to do their job. Other times, as we discussed previously, people intentionally violate rules because they think there’s a better way or they’re taking a “gamble.”
It’s possible and it does happen, but Don Norman says bad design causes more errors than humans ever do.
Section 11: Resilience Engineering
Anticipating errors and properly designing systems with the capability to react to situations under extreme stress and pressure is resilience engineering.
Major stress tests, frequent drills, protocols: all these things are a sign of resilience engineering, and its effectiveness is measured by how well it can predict and address risk before any mistakes happen.
Section 12: The Paradox of Automation
“The paradox is that automation can take over the dull, dreary tasks, but fail with the complex ones.” — Don Norman
If automation does the tedious, repetitive tasks and frees us from devoting attention, what happens when automation encounters a more complex situation and detection is the only means of mitigating injury or loss? Is someone paying attention? Given everything learned about how people are not good at paying attention, I doubt it.
Section 13: Design Principles for Dealing with Error
This section really is a summary of everything in the book up until now, but I’ll provide some bullet points:
- Humans and machines are very different: people are creative, and machines are logical
- This incongruence allows the two to complement one another
- Errors are just humans being humans when they’re expected to be machines. Good design helps humans translate their goals to the technology they use
- Put as much “knowledge in the world” as you can. The more knowledge you expect humans to hold in their heads, the more you risk errors occurring — it also allows non-experts to use the system
- Constraints are powerful. Use them.
- Design objects such that the two gulfs are covered: the gulf of execution (what am I going to do and how do I do it) and the gulf of evaluation (what happened and is it what I wanted to happen?)