Art Reingold: Good morning. So, one administrative announcement about the exam. Ana Maria tells
me this is posted on the website but I know we do have at least a couple people in the
class who are registered with the disabilities office. And if you are one of the people who
needs accommodation during the midterm or if you're somebody who has a scheduling problem
and can't be here on the day of the midterm we want to accommodate you, but you need to
let your GSI know so we can make arrangements for you to take the exam at a suitable time
and a suitable location. So if you don't do that before the exam we're going to have a
hard time accommodating you. I understand so far nobody has been in touch with the GSI
about this issue. Maybe there's nobody who needs to take the exam at a different time
and different place, but if you do you need to let us know. Okay? So any questions about
that? And we do have a problem: I know that not
everyone who is in the class comes on a regular basis. So if you're listening in the unseen audience
there will be more people here on the day of the midterm than there are today. But we
have a very limited period of time. We really only have the room for 50 minutes for the
entire midterm. So if you need accommodation and need to be taken somewhere else or need
additional time please let us know. Okay. So, we're going to finish up the discussion
today about prospective cohort studies and then on Monday Craig Steinmaus who is an expert
particularly in occupational and environmental epidemiology will talk about historical or
retrospective cohort studies. I may touch on them a little bit at the end today. And
then Wednesday Craig will talk about cross sectional studies. So I then want to reiterate
this notion. Yeah? Volume. Okay. Sorry. Is this better now? There are a few seats available
down front, by the way. I don't have TB, so you can safely sit here. It's lonely
down here. So, again I want to point out that this use
of the word prospective has a problem in that it means two completely different things.
So in one sense all cohort studies are prospective. That's in the sense that by definition if
it's a cohort study we start out measuring exposure and we look at the risk or the rate
of disease in different exposure groups. Exposed, unexposed, high level exposure, low level
exposure, whatever. But prospective cohort studies are studies where we build a cohort
today and follow people into the future. And as Craig will talk about on Monday we can
go into the past and build cohorts, 10, 20, 30 years ago and follow those people up until
today. Needless to say there are advantages and disadvantages to each of these study designs.
So I want to illustrate a few other things with prospective cohort studies. So, this
is I think an interesting study that looks at the fundamental research question is, does
your blood lead level influence your IQ. And I suspect many of you know that we know
quite a bit about the dangers of high lead levels. Um, but the question is whether lower
levels of lead do damage to the central nervous system and so this is a cohort study done
beginning back in the 1970s in Australia and here you can see that fundamentally what they
did is they took a group of infants, about 700 infants and collected blood samples from
them. And one of the things we almost invariably do in cohort studies is collect biological
samples at the beginning. Um, and frequently that's one of the downsides
of case control studies is you don't have blood samples or other biological samples
from the points in time you would like them, but in a cohort study you typically collect
samples at the very beginning so you can see they basically enrolled mothers before delivery
and then followed their babies up to several years of age. And by definition, if you're
going to be interested in what happens to babies when they're seven years of age, the
study needs to take place over at least 7 or 8 years in order to collect the information.
And one of the problems with prospective cohort studies is they can take a very, very, very
long time to get the answers you're interested in.
Um, so, this is just an example of some of the data from this and the other thing I want
to point out just going back for a minute, as I've said, here the basic comparison is
what is the average IQ. IQ is a continuous variable and the basic question is, is the
mean IQ different in children with high lead exposure versus lower lead exposure. As I've
said, when the outcome is continuous and you're simply comparing the means you can generally
do that with a much smaller sample size as opposed to a dichotomous outcome, lung cancer,
no lung cancer. Here you can see this is a relatively small study. It's only about 700
babies and that may sound like a lot and it is a lot of work but it obviously pales in
comparison to 50000 doctors or a hundred thousand nurses.
So this is a relatively small study, but the outcome is basically a continuous variable.
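The comparison described here, a mean of a continuous outcome in one exposure group versus another, can be sketched in a few lines. This is a toy example with made-up IQ values (not the study's data), assuming two lead-exposure groups; Welch's t statistic is one standard way to compare the two means:

```python
from statistics import mean, stdev

# Hypothetical IQ scores for two exposure groups defined by blood
# lead level (illustrative values only, not the study's actual data).
high_lead = [94, 98, 101, 96, 92, 99, 95, 97]
low_lead = [103, 108, 99, 105, 110, 102, 104, 107]

def welch_t(a, b):
    """Welch's t statistic for the difference in two group means."""
    va = stdev(a) ** 2 / len(a)  # variance of the mean, group a
    vb = stdev(b) ** 2 / len(b)  # variance of the mean, group b
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

diff = mean(low_lead) - mean(high_lead)
t = welch_t(low_lead, high_lead)
print(f"mean difference = {diff:.2f} IQ points, Welch t = {t:.2f}")
```

With a continuous outcome like this, even small groups give a usable comparison, which is the point being made about sample size.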
So here you can see basically blood lead concentrations. And these are the blood concentrations at
the particular age and the IQ at the same age.
So here you can see, for example, this is IQ at age seven. Baseline blood lead. But
here is, um, I can't tell which of these is which. Here is IQ at age seven and lead at
age five. So you can correlate lead at birth with IQ at any point in time later. Or you
can correlate lead at the same age with IQ at that age. Right? And you can do all these
different types of comparisons, but what you can see is basically all of these show that
even at relatively what used to be thought of as safe blood lead levels there does appear
to be a correlation with IQ. And the higher the blood lead the lower the
IQ at that point in time or when the child is older.
And it's studies like this which have led to the conclusion that in fact what we used
to think of as safe blood levels are in fact still doing damage and we need to lower the
threshold even more than it was in the past. And this is here, you can, in this instance
they take some way of averaging lifetime lead levels, I don't know how you do that, but
you somehow take an average of lead levels taken at different ages and again, here you
can correlate with it verbal performance, different measures of IQ and basically show
that blood lead does correlate with IQ in a negative way.
Um, so, low level exposure to lead during early childhood inversely associated with
neuropsychological development during the first seven years of life and a feeling that
this may in fact be permanent. That the only way to know that of course would be to follow
these children for another 10 or 20 years and see if any of this damage is made up later
in life. So this study doesn't answer the question
of whether this persists, this damage persists into adulthood.
This is one of my favorite studies that I always used to show my children. Not this
one, another one I'll show you in a minute. Here this is a study, the impact of childhood
adversity on longevity. Okay. So the question here is, does being raised in a, in a poor
environment have a negative impact on your survival. And so this is a study done in the
neighborhood I grew up in on the south side of Chicago where they took the entire population
of children beginning first grade in a variety of schools and they basically assessed their
family and childhood adversity by looking at family type, frequency of residential moves,
whether the family was on welfare. Frequency of corporal punishment, etc.
And then they basically followed these children for a couple of decades to see what happened
to them. And then correlated these different types
of adverse exposures in childhood to look at length of survival and whether or not the
children were at increased risk of dying or not.
And here the particular interesting finding is they found that if you were a foster child
you had about a 16 fold, 17 fold increased likelihood of dying, this is in children on
the south side of Chicago. So this is not in a developing country. This is in modern
day America that foster children were about 17 times more likely to die between childhood
and reaching about age 21. Um, so, another example, if you will of a
cohort study. Many of these variables didn't correlate, but being a foster child did.
And here the other thing I'll point out is as I've said, in theory, not in theory, you
can calculate risks and rates directly in cohort studies and risk and rate ratios, but
you can also calculate odds and odds ratios and many cohort studies do calculate odds
and odds ratios. So there's nothing inappropriate about calculating odds and odds ratios in
cohort studies. Um, okay. So this is the study I meant that
I used to warn my children with. So here the basic question is whether watching television
is bad for you if you're a child. So does anyone know what the average amount of time
spent in front of the television is by children in the United States? Number of hours per
day of television watching? >>>: (Inaudible).
Art Reingold: Five. Four. Actually I think it's closer to seven.
>>>: What? Art Reingold: Now some of that now has been
replaced by time in front of other types of screens. So you could say computer screens
instead of TV, but in point of fact the average child in the United States spends somewhere
between 4 to 7 hours a day in front of a television screen and you may find that astonishing.
I find it astonishing but apparently it's based on good data. So the question is does
watching television, is that bad for your health? Okay. So this is a cohort study done
by some friends of mine in New Zealand. And they took all of the children born in this
town between 1972 and 1973 who still lived there at age three and asked to participate.
They had about a thousand kids they enrolled. And then they followed them for the next 26 years.
So you can tell cohort studies, prospective cohort studies can take a long time if you
are interested in longterm outcomes. Right? 26 years later. And then what types of measures
did they look at to correlate with TV watching. Well, so first of all they collect information
about television viewing at various ages. Okay. Here they talk about a composite child
and adolescent viewing calculating as the mean viewing hours per weekday between ages
5 and 15. Okay? And so this is to simply show you, this is
in New Zealand so these categories are less than one hour of TV per weekday, 1 to 2 hours,
2 to 3 and greater than three hours of TV. And this is likelihood of being overweight.
Some measure of physical fitness, serum cholesterol, smoking and systolic blood pressure. And basically
increasing watching of television correlates with all of these adverse health effects.
So, does smoking, excuse me, does TV watching cause smoking? We'll talk about whether you
can talk about causality in a few weeks and whether a correlation constitutes causation
or not. But clearly there's a very strong correlation between watching television and
all of these aspects of poor health. And poor health literally a couple decades
later. So television watching in childhood and adolescence associated with overweight,
poor fitness, smoking and raised cholesterol in adulthood.
Okay. My children weren't convinced by the way. Now this is study I want to talk a little
bit about the ethics of cohort studies. So in general we worry a lot about the ethics
of randomized control trials. Right? Is it ethical to randomize people to one drug or
another to a vaccine or a placebo, to some intervention or to leave them unintervened
on. Right? And so the question is does that same thing pertain in observational cohort
studies. Are there ethical issues about following people where someone is making a decision
but you are not randomly assigning them. So here the issue of interest is some of you
may know E. coli O157 is a serious cause of food borne illness causing bloody diarrhea,
and some children develop damage to the kidneys and a disease called
hemolytic uremic syndrome, HUS, which can do permanent damage to the kidneys, in some
cases requiring a kidney transplant or at least longterm dialysis. So it's a pretty
serious outcome. And here the question is whether antibiotic
treatment of the diarrhea increases the risk of this hemolytic uremic syndrome. So, in
other words, why might antibiotic treatment actually be bad for you? People generally
think of treating bacterial infections with antibiotics as a good thing. Right?
So why might it in fact be bad for you? Why is this a plausible hypothesis?
>>>: (Inaudible). Art Reingold: It might wipe out the good bacteria
as well as the bad bacteria. That's possible. That's actually not the mechanism here, but
that's certainly always an issue. Any other suggestions?
>>>: (Inaudible). Art Reingold: Yes.
>>>: (Inaudible). Art Reingold: I'm asking you, so you may be
right. But what mechanism are you positing for that?
>>>: Well (Inaudible). Art Reingold: So that's exactly the issue.
Fundamentally the damage being done by this infection is through a toxin made by the bacterium.
It's possible by killing all the bacteria very quickly you unleash a large amount of
toxin all at once. Because all the bacteria are suddenly dead.
They break open, the toxin is released and a huge bolus of toxin goes to the kidney.
So that's the question, whether antibiotic treatment increases the risk of this very,
very serious health outcome and that was considered a reasonable hypothesis when this study was
done. So, nobody is randomizing these children to get antibiotics or not. It's been left
up to the physician treating each child to make that determination. Do they give antibiotics
or don't they give antibiotics? So if you ask, well is that an ethical study if you
have this suspicion most people would argue it's a perfectly ethical thing to do. Your
doctor decides what your treatment is based on his or her clinical judgment. Right?
But I suspect there might be people who disagree with that. In any event here you can see,
so you basically get a lot of laboratories to report when they have a child less than
ten with diarrheal disease caused by this organism. The exposure of interest is whether
the doctor treated the patient with antibiotics or not.
And the outcome of interest is whether they developed this very serious kidney ailment,
hemolytic uremic syndrome or not. So, um, that's the question whether it influences
the risk or not. And so here you can see the results of this study and in fact antibiotics
given during the first 17 days are associated with a relative risk of around 17. So in other
words children given antibiotics are about 17 times more likely to get this very serious
ailment than children not given antibiotics. Right?
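The relative risk here comes straight from a 2x2 cohort table: risk of the outcome in the exposed divided by risk in the unexposed. And as noted earlier, an odds ratio is equally legitimate in a cohort study. A minimal sketch, with hypothetical counts chosen only so the risk ratio lands near 17 (these are not the study's actual numbers):

```python
def risk_ratio(a, b, c, d):
    """a, b = exposed with/without outcome; c, d = unexposed with/without."""
    return (a / (a + b)) / (c / (c + d))

def odds_ratio(a, b, c, d):
    """Cross-product ratio from the same 2x2 table."""
    return (a * d) / (b * c)

# Hypothetical counts (illustrative only, not the study's real data):
a, b = 17, 83  # antibiotics: 17 developed HUS, 83 did not
c, d = 1, 99   # no antibiotics: 1 developed HUS, 99 did not

print(f"RR = {risk_ratio(a, b, c, d):.1f}")  # 17.0
print(f"OR = {odds_ratio(a, b, c, d):.1f}")  # 20.3
```

Note the odds ratio overstates the risk ratio here because the outcome is not rare among the exposed; with rarer outcomes the two converge.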
And so this really confirmed what you said in the back, which is that in fact the antibiotics
are a potentially very dangerous thing to use in kids with this type of infection.
Okay. Now you could argue whether this was an ethical study or not but as I've said in
general we leave it up to the individual clinician to decide what the treatment for a patient
is. Okay. And a lot of treatment has been changed
as a result of this study. Okay. Um, so, this asks the question whether condoms prevent
transmission of human papillomavirus. Now, does anybody know why that's a politically
interesting question? Or why that became a political topic? Whether condoms are effective
at preventing HPV infection transmission. Yeah.
>>>: (Inaudible) contraception. Art Reingold: Because the Catholic church
isn't in favor of the use of condoms. >>>: (Inaudible).
Art Reingold: They are or they're not? Okay. I actually don't know what the Catholic church's
position is. Any other suggestions about why politically this was an important question
in the United States? >>>: Maybe because it has to do with male
contraception. Art Reingold: Male contraception for women's
health. >>>: (Inaudible) responsible (Inaudible).
Art Reingold: Okay. The basic question is there was a strong suspicion that condoms
while they might prevent transmission of gonorrhea and chlamydia and syphilis, they don't
prevent the transmission of human papillomavirus. And that therefore promoting condoms was a
bad idea. Okay. And people who were opposed to people having sex, particularly young people
having sex used the possibility that condoms don't prevent HPV transmission as a reason
why condoms should not be promoted. Okay. That even if they block transmission of other
sexually transmitted infections they don't block transmission of this virus. Therefore
it's a bad idea to tell people that they can have safe sex using condoms.
Because they don't reduce the risk of this particular infection. Why might they not reduce
the risk of this infection? Why is that a plausible hypothesis.
>>>: Because it (Inaudible) not necessarily fluid base.
Art Reingold: Right. Because the virus can be present on the skin. Simply wearing a condom
may not offer the protection that you would need to have. So this was actually a politically
very charged question. So the background here, HPV infection is very common in sexually active
young women. I'm going to show you in a couple weeks a study we did at Berkeley among Berkeley
undergraduates, actually, all Berkeley students in which the prevalence of HPV infection,
the cross sectional prevalence was 46 percent. So, almost half the women on this campus had
infection with HPV. So it is a very common infection. Condoms reduce transmission of
chlamydia and gonorrhea and some other sexually transmitted infections, but perhaps don't
have an effect on transmission of this virus. And a lot of controversy about whether or
not we should promote condoms for safe sex or not.
Um, and the concern of course among some people is that promoting condom use will increase
sexual behavior. It will make it seem as though sex can be safe. People have more sex and
we don't want people to have more sex. Um, so, this is a study done by a doctoral
student at the University of Washington. A cohort study intended to answer this question
whether condoms really do prevent transmission of HPV.
So what did she do? She designed a prospective cohort study. She took females, University
of Washington students, who had not had intercourse or were having their first episode of intercourse
recently. Some other few relevant exclusion or inclusion criteria. And then what did she
do with this cohort of young women? She didn't randomly assign them to use condoms or not.
Right. That presumably would be a difficult thing to do. So what did she do? What would
you need to do in order to answer the question whether condoms blocked transmission of HPV
infection? So you would need to collect information about condom use. You would need to collect
information about who acquired HPV infection or not. So you would need to follow women
over time and, but you'd need to collect this information on a regular basis.
Right? So, in fact this is an example in which people are asked to keep a diary in order
to assure a high level of quality of the information. So these women were basically asked to keep
a diary in which they recorded every episode of intercourse and whether a condom was used
or not. So, you can see she sent out 24000 letters
to University of Washington students. She got 243 eligible subjects responding. Some
declined to participate. She enrolled 210. A number had to be excluded because they never
had intercourse and ended up with a total of 82 women in her study in this cohort study.
So she used a web based diary in which women were asked to record this information on a
regular basis. Daily information. Number of new partners, number of episodes of intercourse,
frequency of condom use. And then had to retest them periodically to see who had incident
new HPV infection. Right? So this gets a little complicated. This is
an example where she ends up calculating the hazard of HPV infection. Hazard is a form
of incidence density rate. Hazard rate. Um, and so she got to look basically at, you know,
what's being reported and then when the various samples were obtained in order to do this
and calculate person time and rates in person time, comparing condom use to no condom use.
So here you can see this is per hundred patient years at risk. And then you can basically,
she divides women up in terms of condom use less than five percent of the time, a hundred
percent of the time and a couple of groups in between.
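The hazard calculation described here, an incidence density rate, is just incident infections divided by person-time at risk, scaled per 100 person-years. A minimal sketch with entirely hypothetical counts (the group labels and numbers are illustrative, not the study's results):

```python
def rate_per_100py(new_infections, person_years):
    """Incidence density: new cases per 100 person-years at risk."""
    return 100 * new_infections / person_years

# Hypothetical follow-up data by condom-use category
# (illustrative numbers only, not the study's actual results).
groups = {
    "<5% condom use": (12, 32.0),  # (incident infections, person-years)
    "5-49%": (9, 30.5),
    "50-99%": (6, 28.0),
    "100%": (0, 25.0),
}
for name, (cases, py) in groups.items():
    print(f"{name:>15}: {rate_per_100py(cases, py):.1f} per 100 person-years")
```

Each woman contributes her own follow-up time to the denominator of whichever use category she falls in, which is what makes the prospective diary data essential.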
Okay? And now she can answer the question whether condoms reduced the risk of HPV infection
or not. This would be a very difficult study to do without assembling a cohort and following
them prospectively over time. Right? HPV infection is clinically silent.
So if you weren't doing this as a prospective cohort study you really couldn't answer this
question. Um, and so here you can see in terms of frequency
of condom use, basically among the women who reported 100 percent condom use by their
partner, there were no HPV infections. Basically the hazard rate was 0. In other words, condoms
were highly protective. And there was basically some perhaps protection with intermediate
condom use. But basically fully compliant use of condoms was quite effective in reducing
transmission of HPV, at least in this cohort of women.
Okay? And I guess in this particular analysis where
she adjusts for various things it's not quite a hundred percent. It's more like a 70 percent
effective against high risk HPV type. So I take that back. Not a hundred percent, but
70 percent effective against high risk HPV types.
Okay. Um, another example IQ at early adulthood and mortality by middle age. So does your
IQ correlate with your likelihood of dying? This is a cohort study of over a million people.
So cohorts can be very large as opposed to very small. And in this case the group in
Sweden apparently would routinely do IQ testing of everybody being conscripted into the military.
So this is an example of a cohort that's basically the entire age cohort in Sweden in this group.
They administer an IQ test to all of them at the time they're conscripted into military
service. And then they see what happens to them in terms of their mortality experience
over the next several decades. Okay?
And for those high IQ people in the audience, there's good news here. That basically high
IQ is correlated with a substantially greater survival and a lower chance of dying. At least
in Sweden. I don't know if that's true in the United States, but at least among Swedes,
people with high IQs are less likely to die than people with low IQs. And that's adjusting
for educational status and a number of other variables.
Um, and it was true for death from a variety of different things, suicide, coronary heart
disease, accidents, death from other causes. Basically every cause of death there was a
protective effect of high IQ. Okay.
And this is another important question. Some of you may know that there has been a debate
about whether obesity is bad for your health or not. There's been an argument that obesity
is in fact not necessarily bad for your health. Right? That we should not be biased against
people who are overweight because it's not necessarily indicative of any health problem.
So, this asks the very important question, does your body mass index predict your risk
of dying? Now in point of fact this is what you might call a meta analysis. And Craig
is going to talk about meta analysis a bit. This group actually took 57 different cohort
studies from rich countries and combined the data from all 57 studies into one mega meta
analytic cohort to look at the question of whether your body mass index predicts your
likelihood of dying. Why would they exclude the first five years of follow up? So they
don't look at mortality the first five years after people are assembled into a cohort.
Why might they do that? >>>: Just because they want to make sure (Inaudible)
factors (Inaudible). Art Reingold: Any other suggestions. I'm not
sure I get how that would work. >>>: They were trying to lose weight to look
better. Art Reingold: Trying to lose weight to look
better. >>>: (Inaudible).
Art Reingold: Okay. Well the answer is the concern is at the low end of the weight spectrum
people who are underweight, why might people be underweight? What are the reasons people
might be underweight? >>>: Sick.
Art Reingold: They are sick. They might have cancer for example. Having cancer, being very
sick is one of the reasons you might be underweight. So they basically want to get rid of all that
mortality associated with being sick and underweight as a result of being sick.
Okay. And the assumption is most of the people are going to die because of those illnesses
that have made them underweight will die in the first five years. Okay. So it's actually
more of a concern about the underweight part of the spectrum than the overweight part of
the spectrum. And so what you can see here is in point of
fact. So this is, this is basically the risk of all cause mortality. And this is BMI.
And what you can see is this U shaped curve which has been reported in many studies. And
basically having a BMI between about 20 and 25 has the lowest mortality experience.
And with increasing BMI above 25 mortality goes up substantially. But with lower and
lower BMI under 20, mortality also goes up.
So the optimal BMI for maximal survival in fact is not the lowest BMI. This is not a
straight line relationship, but it's a U shaped relationship. And this is after taking out
the first five years of follow up, which removes the early mortality of people who might
have cancer and be underweight because of it.
Okay. So being overweight is not good for your likelihood of surviving, but being underweight
isn't good either. Okay. And there is a very strong correlation
between BMI and at least mortality. Um, and that's basically again shown here. This is
for people 35 to 79. And again basically the lowest is at about between 20 and 25. This
is current smokers. This is never smokers. Okay. And here's the basically the conclusion
they come to. The excess mortality below 22 is due mainly to smoking related diseases
but is not fully explained. All right. And there is some good news here. Yes.
>>>: (Inaudible).
Art Reingold: It does not include body fat percentage. It's strictly BMI. BMI is obviously
just calculated with just height and weight. So the argument of course is that you can
have a high BMI and be very physically fit if it's all muscle. Right? And that's true.
But that doesn't reflect a very large proportion of the people with a high BMI. Most of the
people with a high BMI are not NFL football players. Right? Who are solid muscle.
So that is not taken into account in this study.
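As noted, BMI is computed from height and weight alone: weight in kilograms divided by the square of height in meters. A minimal sketch:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

# Example: an 80 kg person who is 1.75 m tall.
print(f"BMI = {bmi(80, 1.75):.1f}")  # 26.1, just above the 20-25 band
```

This is why BMI cannot distinguish muscle from fat: two people with the same height and weight get the same number regardless of body composition.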
Okay. Well there is also some good news here for the chocolate lovers in the audience.
Here the question is does chocolate during pregnancy reduce the risk of preeclampsia.
Okay. So why might that be a biologically plausible hypothesis?
>>>: (Inaudible). Art Reingold: Reduces blood pressure because?
Because it makes you feel better. Makes you happy. Eating chocolate makes you happy. Possible.
But of course chocolate also might have some biologically active ingredients in it. Right?
Just like caffeine. Just like coffee. Chocolate has biologically active ingredients in it.
So, for those of you who don't know, preeclampsia is the development
of high blood pressure and related problems in a woman during her pregnancy. And it can
be a very, very serious potentially life threatening complication.
So, this group took a prospective cohort of around 2500 pregnant women. Measured their
weekly intake of chocolate as well as some other things. And then they looked at their
risk of developing preeclampsia. Pretty simple little cohort study.
And here you can see the distribution of chocolate consumption. And so here you've got, um, the
dark bars, normotensive, normal blood pressure. The light bar is high blood pressure. And
this bar is preeclampsia. And you can basically see that there seems to be a relationship
four or more servings of chocolate per week seems to be related to a lower likelihood
of developing preeclampsia. And so here the odds ratio again calculating odds ratio cohort
study about a 50 percent reduction in the risk of preeclampsia of eating four or more
servings of chocolate per week. So for the chocolate lovers in the audience, this is
good news. Okay. If you're interested in the details
you can look them up. >>>: (Inaudible).
Art Reingold: Pardon? >>>: It wasn't randomly assigned.
Art Reingold: So we're going to talk in a couple of weeks about causal inference and
whether correlations established a cause effect relationship or not. But what you're saying
is certainly true that when people have been randomly assigned to one exposure group or
another we have a stronger belief in a causal relationship because of the problem of confounding.
That is, women who eat chocolate at high levels may differ in other ways as well. Right? As
opposed to if we randomly assigned chocolate consumption in theory all those other things
would balance out. So you are correct. It may be something correlated
with chocolate intake and with risk of preeclampsia. But we're going to talk about that in more
detail in a few weeks. Right. Okay. Now this is a study you should be familiar
with. Um, because it's going to cost a lot of your tax dollars. So this is a cohort in
the making. And epidemiologists around the United States have concluded that if we really
want to understand exposures in utero and in childhood and how they contribute to health
later in life, we need to assemble a cohort of a hundred thousand babies followed from
in utero through to age 21. So you enroll a hundred thousand pregnant
women and you follow their babies for the next 21 years. I think you can imagine this
is a relatively expensive thing to do. This would be quite an undertaking.
So, follow them through their twenty-first birthday. You measure during pregnancy infections,
pollutant exposures, psychosocial factors and the idea here is to measure the social
and physical environment contribution to specific diseases.
The initial estimate, and this has now gone up enormously, the initial estimate was that
this study would cost almost $3 billion. And that was a low, low estimate now in retrospect.
Okay. So, this is a very controversial issue because
it would suck up a huge amount of the NIH budget and some people wonder whether this
is a good use of your tax dollars or not. But the epidemiologists who would be doing
this study certainly think it's a good idea. And so there are a number of interesting questions.
First of all, what outcomes, excuse me, what exposures do you collect information about?
And there would be a great deal of biological specimens collected. What outcomes would you
study? And the answer is apparently all outcomes basically. But the other question is, how
do you choose these women? How do you choose these hundred thousand women?
Are they meant to be representative of the whole United States? Because if they are,
they are much, much more difficult to select and enroll. Or do you basically go to places
where it's easy to enroll pregnant women and enroll lots of pregnant women because they're
easier to enroll. Okay? Because it's much more expensive to take a truly representative
sample across the United States. If you asked for some of the outcomes of interest
by age 21 in a cohort of a hundred thousand babies, how many cases of different things
would you see? This gives you some sense of how many cases of these different diseases
you would see. And for some purposes these are pretty reasonable numbers of cases, but
for a lot of other purposes these are very, very small numbers of cases despite the incredible
amount of money it would cost to do this. Um, so this study has been launched at least
in a pilot phase. You can see there are enrollment centers all around the country. But this is
still basically in a pilot phase, to work out all the details of what it would
take to really do this study. Um, here you can see, this is a couple of years
ago. It was estimated at $3 billion, with a 600-page research plan developed by NIH staff and 30 hypotheses,
everything from pesticides causing neurologic problems to social programs influencing children's
health, etc. That was around mid-2008. The NIH didn't
want to fund it because of the money it would cost. Congress decided, several years
ago, that they wanted to fund it. So they are now sort of in a pilot study phase. But this
is a more recent article: the budget for this has now exploded well beyond $3 billion.
So as far as I know this study is going to go forward. But, um, it's not clear how much
it's going to cost. Very interesting idea for a cohort study.
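To get a feel for the expected-case arithmetic behind a cohort this size, here is a minimal sketch in Python. The cumulative incidence figures are hypothetical, chosen only to illustrate the calculation, not the numbers from the lecture slide:

```python
# Expected number of cases by age 21 in a cohort of 100,000 children.
# Expected cases = cohort size x cumulative incidence of the outcome.
cohort_size = 100_000

# Hypothetical cumulative incidences by age 21 (illustrative only).
hypothetical_cumulative_incidence = {
    "common outcome (1 in 100)": 1 / 100,
    "uncommon outcome (1 in 1,000)": 1 / 1_000,
    "rare outcome (1 in 10,000)": 1 / 10_000,
}

for outcome, risk in hypothetical_cumulative_incidence.items():
    expected_cases = cohort_size * risk
    print(f"{outcome}: ~{expected_cases:.0f} expected cases")
```

This makes the lecture's point concrete: even at $3 billion and 100,000 children, an outcome with a cumulative incidence of 1 in 10,000 yields only about 10 cases, too few for most analyses.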
Okay. What I want to do is finish up by just sort of summarizing a few things about
cohort studies. I have to go to a different slide set, so just give me
a minute here. And as I've said, Craig is going to talk on Monday about historical
cohort studies. So, when you think about cohort studies, they
have several obvious advantages. The first is that you can study rare exposures. And
Craig is going to talk about that on Monday. If you are interested in the effect of some
rare pesticide or some rare chemical in the computer chip business, most people in society
are never exposed to that chemical. And the only way to study the effects of that is typically
by enrolling occupationally exposed individuals into a cohort and looking at their health
outcomes over time compared to some other group. So you can look at rare exposures.
Obviously you can look at multiple outcomes of the same exposure.
Right? So, if you're interested in 12 different disease outcomes or a hundred different disease
outcomes, if the study is big enough to study all those different outcomes you can take
one exposure and look at many different outcomes all in the same study.
Like the Nurses' Health Study. Clearly a key advantage is that you can collect the data prospectively
and minimize or eliminate certain types of bias. So in that study of HPV and *** use,
when women are keeping diaries and, every time they have intercourse, writing down that they had
intercourse and whether a *** was used, you get very, very accurate data.
Right? If you tried to do that retrospectively, asking questions about those behaviors would
be fraught with error, except perhaps in the rare individual.
So you get very high levels of accuracy about the data you're collecting. And of course
you can measure risks and rates directly. You can calculate risks and rates, risk and
rate differences, risk and rate ratios. Needless to say there are some downsides to cohort
studies. They generally are not very economical, and
sometimes not even feasible, for very, very rare outcomes. Now if you've got $5 billion
to do a cohort of a hundred thousand people for 21 years then maybe that's not true.
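The risks, rates, differences, and ratios mentioned a moment ago can be computed directly from cohort data, which is exactly what this design allows. A minimal sketch with made-up counts for an exposed and an unexposed group (hypothetical numbers, not data from any study discussed here):

```python
# Hypothetical cohort data:
# Exposed group: 40 cases among 2,000 people, 10,000 person-years of follow-up.
# Unexposed group: 10 cases among 2,000 people, 10,500 person-years of follow-up.
cases_exp, n_exp, py_exp = 40, 2_000, 10_000
cases_unexp, n_unexp, py_unexp = 10, 2_000, 10_500

# Risks (cumulative incidence) and incidence rates, measured directly --
# something a cohort design lets you do and a case-control study does not.
risk_exp = cases_exp / n_exp          # 0.020
risk_unexp = cases_unexp / n_unexp    # 0.005
rate_exp = cases_exp / py_exp         # cases per person-year
rate_unexp = cases_unexp / py_unexp

risk_difference = risk_exp - risk_unexp   # excess risk in the exposed: 0.015
risk_ratio = risk_exp / risk_unexp        # 4.0
rate_ratio = rate_exp / rate_unexp        # 4.2

print(f"risk difference: {risk_difference:.3f}")
print(f"risk ratio: {risk_ratio:.1f}")
print(f"rate ratio: {rate_ratio:.2f}")
```

Note that the denominators differ: risks divide by people at the start of follow-up, while rates divide by person-time, which is why the risk ratio and rate ratio need not agree exactly.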
But they generally are not very good for studying rare outcomes. It is typically very expensive
to follow large numbers of people over time and assure minimal loss to follow-up.
Obviously they can take a long time. The doctors' study took
place over 50 years. So you have to be a young epidemiologist to start some prospective
cohort studies. There may in fact be ethical issues about
following people. You may not be randomly assigning them to one exposure group or another.
But is it ethical to follow people who are doing something you think is bad for their
health? Or whose doctor is doing something you think is bad for their health? It doesn't
totally remove the ethical issues just because they are not being randomized. The comparability
of those enrolled to the general population may be in doubt. That is, people we get to
come into a cohort and stay in a cohort are generally not representative of the general
population. So in a little while we'll talk about some of the findings of the Nurses'
Health Study with regard to postmenopausal hormone replacement and the risk of heart
disease and cancer. That cohort study, the Nurses' Health Study,
said that hormone replacement therapy was beneficial to women's health.
The randomized control trial when it was finally done in fact showed the opposite.
And the general feeling is that's in large part because the nurses in the Nurses' Health
Study are not representative of women in the general population. So cohorts may not be
representative of the general population to whom you want to make inference. If you follow
people over extended periods of time that observation process may well change their
behaviors. Right? Simply the process of studying them may change
them. And of course we do have a difficult time with preventing loss to follow up in
most cohorts. And often the people you lose are different
from the people who stay in the cohort. So the representativeness of the people in the
cohort may go down over time. So cohorts are an incredibly valuable type
of study design. But they obviously have some disadvantages.
Right. Now some of these disadvantages are minimized by doing an historical cohort study
like Craig will talk about on Monday. But there are some problems introduced that he
will also talk about. And then when we talk about cross sectional studies and case control
studies the advantages and disadvantages are going to be quite different. Okay?
So for any given hypothesis or study question you have to think what is the optimal study
design. And what are the strengths and weaknesses of that study design for answering that question.
So any quick questions about that? Okay. So more cohorts from Craig on Monday, cross sectional
studies and then next Friday I'll be talking about case control studies.