I began the module on responsiveness by saying that the worst thing about
going to the doctor is waiting. I think I was wrong about that.
I think that the worst thing that can happen to a patient in a care process is
to experience a quality problem. Experts estimate that close to
100,000 people die each year because of medical errors alone.
I'm in no position to judge this number. However, I do know that patients suffer
from infections that could be avoided, medications are given to the wrong
patients, and sometimes surgical devices and instruments are forgotten in the body
after a procedure. This session is about quality.
There are two dimensions of quality. There is performance quality, which
measures to what extent the product or service we're providing meets
customer expectations. Then there is conformance quality.
Conformance quality measures whether the process is carried out the way that we
intend it to be carried out. Our module focuses on conformance quality.
When we deal with conformance quality, we will notice that variability, once again,
is the root cause of all evil. Just think about it.
Without variability, we would either do everything right every time, and there
would never be a defect, or we would do things wrong all the time,
and then chances are we would go out of business very quickly.
In this first session, we'll introduce some basic probability tools to think
about the likelihood of producing a defect in the process.
Consider an assembly line that puts together laptop computers.
The assembly line consists of nine stations, and let's say for the sake of
argument that each of these stations has a one percent probability of producing a
defect. Let me introduce some notation.
Let's take this resource here, which is number six in the process.
We say that the yield of that resource is the percentage of units that this resource
produces according to specification. In this case, this is simply one minus the
probability of a defect, which is 99%. Moreover, we define the yield of the
process as the percentage of parts that are produced at the end of the process
according to specification. The yield of the process, of course,
depends on the individual yields and defect probabilities of the resources that make
up the process. In our case here, since we have a linear
process flow diagram, a computer that comes out at the end has to be produced
correctly at every one of the nine steps. The yield of the process is therefore
the product of the individual yields. In other words,
it is one minus our defect probability of one percent, raised to the power of nine,
which is about 91%. Notice here the power of the exponent: if I take,
just for the sake of illustration, even a 99% probability of doing something
correct and I have many, many steps, say, for the sake of argument, 50
steps in the process, my probability of producing something correct at the end of
the process, my process yield, is only about 60%.
That is, even small defect probabilities in assembly lines, or in processes with
many operations, can accumulate into a lot of problems at the end.
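The serial-line arithmetic above can be sketched in a few lines of Python; the one percent defect probability and the station counts are the numbers from the example.

```python
# Sketch of the serial-line yield calculation from the example above.
# Assumes defects are independent and every station has the same
# defect probability (1% in the laptop assembly example).

def serial_yield(defect_prob: float, n_steps: int) -> float:
    """Yield of a serial process: every step must be within spec."""
    return (1 - defect_prob) ** n_steps

print(round(serial_yield(0.01, 9), 3))   # nine stations  -> 0.914
print(round(serial_yield(0.01, 50), 3))  # fifty stations -> 0.605
```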
So those are the ideas of yield, process yield, and defect probabilities.
Not all processes require that every step is carried out according to specification.
Some processes have built-in redundancy, and so they can afford to have a step in
the process carried out with a defect while the overall quality of the
output is not affected. Let me illustrate this concept of
redundancy with the classic case study of the Duke Transplant Center.
This is the rather sad story of a seventeen-year-old girl, Jessica Santillan.
Jessica died following a heart-lung transplant in the Duke Transplant Center.
The reason for that was that there was a mismatch between Jessica's blood type and
the blood type of the organ donor. The story started when Dr.
Jaggers, who was Jessica's surgeon, received a phone call from the New England
Organ Bank in the middle of the night. The New England Organ Bank offered him the
organs for another one of Dr. Jaggers' patients.
Dr. Jaggers felt that the organs were
inappropriate for this patient, but as part of the phone call, he asked if
he could use them for Jessica. The New England Organ Bank somehow assumed
that if Dr. Jaggers was asking for the organs for
Jessica, he would have matched the blood type. Vice versa, the workflow at the Duke
Transplant Center, and Dr. Jaggers, implicitly assumed that if the New
England Organ Bank was offering the organs for Jessica, they would have checked the
blood type. At the end of the day, nobody checked.
In the aftermath of Jessica's death, a group of experts was put together to assess
what went wrong in this process. They estimated that about one dozen
caregivers had the opportunity to notice a mismatch.
Typically, a single mistake in this type of process would have been caught.
If one person forgets to check the blood type, well, there are eleven others who
could have noticed. But if twelve people
all make a defect at the same time, the outcome is tragic.
British psychologist James Reason developed a model to explain accidents and
disasters. This model is referred to as the Swiss
cheese model. The idea of the Swiss cheese model is as
follows. Think about a slice of Swiss cheese.
In the slice, we have a couple of holes, and we think of a hole as a defect.
Now, the Swiss cheese model doesn't look at one slice of cheese in isolation,
but asks what happens when you stack multiple slices of cheese on top of each
other. With a certain small but positive
likelihood, all the defects line up, and the
outcome is tragic. This is the idea of redundancy.
As you add multiple layers of cheese on top of each other,
it becomes less and less likely that you can see through all the slices at once.
But, again, the outcome probability is still not zero.
So what's the probability of a defect in a situation like this?
Now, if we draw this as a process flow diagram, a redundant check typically
corresponds to a parallel path in the process flow diagram.
I've illustrated this here with these three paths that all happen on the
way of producing this flow unit. Now, the orange boxes here are the
redundant test points. What's the probability of a defect if each of them
makes a defect with a one percent likelihood?
Well, the likelihood of us making a defect at the very end is simply 0.01 raised to
the power of three. If just one of them catches the defect,
the redundancy kicks in and the defect is detected.
So in order for a defect to happen at the end, all three of them have to go
wrong. We can then define the yield of this
process as one minus 0.01 raised to the power of three.
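As a quick check of the redundancy arithmetic, here is a minimal Python sketch, assuming the three checks fail independently with the same one percent probability:

```python
# Sketch of the redundancy yield calculation: a defect reaches the
# end only if all n independent checks fail at the same time.

def redundant_yield(defect_prob: float, n_checks: int) -> float:
    """Yield when a defect escapes only if every check fails."""
    return 1 - defect_prob ** n_checks

print(redundant_yield(0.01, 3))  # 1 - 0.01^3 = 0.999999
```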
So you notice how the process flow diagram,
a true understanding of what's happening in the process, drives how the
individual defect probabilities get aggregated into an overall defect
probability for the process. In this session, we have discussed two
examples of defects. In the assembly line example, we saw a
situation in which a defect anywhere in the process would lead to a defective
unit of flow at the end. In the Swiss cheese situation, we could
afford to have some mistakes in the process,
but due to redundancy, this would not necessarily lead to a bad unit of output.
Multiple things have to stack up in a bad way to lead to that fatal outcome.
We've talked about how you can look at the process flow diagram, and then think about
how to aggregate the individual defects, and compute an overall defect probability,
and that allows you then to compute the process yield.
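To make the aggregation concrete, here is a small hypothetical example in Python that combines both patterns: a few serial steps plus one stage guarded by redundant checks. The step counts are made up for illustration; only the one percent defect probability comes from the session.

```python
# Hypothetical process: five serial steps (1% defect each) followed
# by one stage protected by three redundant checks (1% each).
# The process yield is the product of the stage yields.

def serial_yield(defect_prob: float, n_steps: int) -> float:
    return (1 - defect_prob) ** n_steps

def redundant_yield(defect_prob: float, n_checks: int) -> float:
    return 1 - defect_prob ** n_checks

process_yield = serial_yield(0.01, 5) * redundant_yield(0.01, 3)
print(round(process_yield, 4))  # -> 0.951
```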
When improving processes like the ones we discussed, especially the Swiss cheese
situations, it's important not to just go after bad outcomes.
Hopefully, these bad outcomes, at the end of the process, are really rare.
Instead, you want to look at internal process variation.
This is the idea of near misses. It's also an idea we will see in more
detail in the session on Six Sigma. The worst resources are those that
sometimes work and sometimes don't. If a resource always works and never produces
any defects, wonderful. If it is always broken, and everything the
resource touches comes out defective, we'll figure that out pretty quickly.
In this session, we have used simple probability theory to describe the
likelihood of a resource producing a defect.
We can then use defects in our understanding of the process flow diagram
to describe the percentage of flow units that are produced correctly.
We refer to that number, as the yield of an operation.
Now, not every time a resource does something the wrong way
will we get a yield loss at the end of the process.
Some defects and internal variation are absorbed by other activities.
There is oftentimes redundancy built into the process.
However, understanding such deviations in the process,
even if they do not lead to fatal consequences at the end of the process, is
a very important part of a good quality management program.