Rachael Berger: (slide 1) Hello and welcome everyone to Staying
on Track for College Readiness. My name is Rachael Berger and I'm the technical assistant
with the Oregon College and Career Readiness Research Alliance at REL Northwest. This is
part two of the webinar series Developing Savvy Students for College and Career Readiness.
This webinar is cohosted by REL Northwest and REL Northeast and Islands. Just a couple
of quick reminders: please introduce yourself in the chat window; the phones will be muted
during the presentation so all questions, introductions, and comments should be entered
into the chat window. Today's event will be recorded and available for viewing on the
REL Northwest website.
(slide 2) We're very excited today to have two wonderful presenters: Jenny Nagaoka, Deputy
Director of the University of Chicago Consortium on Chicago School Research, and Dr. Susan Fairchild,
Director of Data Analysis and Applied Research for New Visions for Public Schools.
Again, throughout the presentation we want to invite you to ask questions by typing them
into the chat window. We'll select representative questions to pose to the presenters after
each presentation, as well as during the closing of the event. We won't be able to answer
all questions, but we'll do our best to get to as many of them as possible.
Jacqueline Raphael: (slide 3) Good morning, everybody, if you're
on Pacific Time. My name is Jacqueline Raphael and I'm the alliance lead
for the Oregon College and Career Readiness Research Alliance, and I'm here to give you
a little bit of background.
REL Northwest is one of 10 regional educational laboratories across the nation; each serving
a region of the country. Our REL serves five states: Oregon, Washington, Idaho, Montana,
and Alaska. As many of you may know, the purpose of that regional educational laboratory program
is to build the research capacity and knowledge of the states and districts of their region.
This work that we do is funded by the Institute of Education Sciences at the Department of
Education and it comes in five-year cycles and in this particular contract we are focusing
on working in what we call research alliances, which are groups of policymakers and practitioners
who need research, data, and evidence to address real problems that they're trying to solve
and that they're confronting in a real way; so it's very practical, important work and
we're very excited about it.
(slide 4) Our alliance is one of nine here at REL Northwest. It includes representatives
from Oregon's three key education sectors: K-12, the community colleges, and the four-year
colleges, as well as members of the Governor's office and college and district leaders. The
goals of our alliance are to focus on bringing research and evidence to bear on how we understand
and operationalize college and career readiness and how we use information to make decisions
that foster improved alignment between and across the entire system pre-K through college.
Our alliance is focused in its first year on two strands of work: one is looking at
developmental education as a potential gap in the pipeline and the other strand is dual
credit and other early college programs as well as AP and IB programs and in both areas
we are looking at how we can bring research and evidence to bear to help folks think about
these gaps in opportunities in the pipeline.
(slide 5) This event, as Rachael mentioned, is being cosponsored with REL Northeast and
Islands, which also has a research alliance focused on college and career readiness. We
very much enjoyed working with Dr. Sterel. REL Northeast and Islands held the first part
of this two-part series on May 30, 2013; the event focused on proficiency-based teaching
and learning and can be viewed at the event archive address at the bottom of this slide.
Today's event will also be archived and the link will be sent to you by email if you
registered.
We are building on what we talked about last time looking at how measures can be combined
and used to quantify and determine college readiness. My last point, after this event
there will be, for those of you who registered, an email in your inbox with a survey. It's
not too long, and it's extremely important and helpful that our funder gets feedback
about events such as these; I hope that you'll be able to respond. Thank you.
Rachael Berger: (slide 6) Today we're going to have a short
amount of time at the end of each presentation for questions and answers from our participants
and we'll also have an additional chunk of time for discussion between our presenters
as well as Q&A. Please type those questions and comments in the chat box whenever you
have them and we'll facilitate answering those questions during the Q&A time.
(slide 7) We're really excited about today's event; it focuses on the essential aspects
of our alliance's work by examining how and what to measure for progress toward the
goal of college and career readiness. There is currently a great deal of national work
and attention in this area and we're just beginning to understand the complexities of
this work in developing tools which schools and educational institutions can use to support
students at all levels.
(slide 8) We want to begin by asking our participants a couple of questions to provide context to
our presenters. First let us know which region of the country that you are primarily working
in: the Northwest, West, Pacific, Central, etc. Thank you, I see a lot of people responding
already.
We have representation from Northwest, West, as well as Northeast and Islands, Midwest,
Mid-Atlantic, Appalachia; a wonderful diversity, thank you so much. We will post the results
for you to see. The next question is please share with us your primary role in the organization
where you work. This information is really helpful to provide context to our presenters.
I see people coming in from the research area, it's great to see a lot
of teachers joining us, and postsecondary faculty, a good deal of representation from
school district administration.
We have a real diversity of roles today: many teachers, postsecondary faculty members, school
and district administrators, researchers, representatives from state government and
nonprofits as well; wonderful, thank you so much.
(slide 9) Our first presenter for today is Jenny Nagaoka, Deputy Director of the Chicago
Consortium. Ms. Nagaoka has conducted research on policy and practice in urban education
reform for over 15 years. She has focused her research specifically on developing school
environments and instructional practices that promote college readiness and success.
(slide 10) She'll provide an overview of college indicator research and practices and will
discuss the challenges of developing and implementing the CRIS tool. Welcome, and thank you for being
here, Ms. Nagaoka.
Jenny Nagaoka: (slide 11) Thank you so much. Today I'm going
to be talking about effective indicators, and I'm going to actually be starting out using
a specific example of work that the Consortium on Chicago School Research and the Chicago Public
Schools have been working on. This is the work that we've been doing on the on-track
indicator; in a lot of ways it has been going on for over 15 years, and I think it has really
taught us a lot in thinking about what an effective indicator actually is.
One of the big lessons that we've taken away from all of our work with the Chicago
Public Schools is that a lot of what makes an effective indicator is not just about
finding the best possible indicator; you really have to be embedded in thinking about
how it is being used, how it is actually part of a larger data system, and really thinking
a lot about how data is being used across the district and particularly within schools.
This has a lot to tell us about college readiness indicators; we've learned a lot from our
work on the on-track indicator, but really thinking about what makes for an effective
indicator is considerably more complicated than figuring out a good predictor of high
school graduation.
I've been thinking about what actually makes kids have better access to college and what
makes them successful once there for the past seven or eight years, and more recently my
work has been focused on college readiness, and particularly college readiness indicators,
through work that has been funded by the Gates Foundation in partnership with the John Gardner
Center at Stanford and Brown University's Annenberg Institute for School Reform.
(slide 12) I just want to give you a little background on the Consortium on Chicago School
Research in case you don't know about us. We're based at the University of Chicago and
we've been doing research on the Chicago Public Schools. We actually had some debate
about this yesterday; depending on how you count, our 25th anniversary is either next
year or the year after that. So we've actually spent a lot of time really getting to know
the Chicago Public Schools and we see our role as providing the practitioners and policymakers
in Chicago and across the nation with insight that really helps them do their work more
effectively.
We are not the answer people, we really see ourselves in partnership with the Chicago
Public Schools and helping them figure out what the solutions are to the problems that
they face and we see ourselves in the role of building the capacity of schools to address
their problems more effectively. We have a lot of different types of research that we
do and different types of activities. One of the big things that we spend a lot of time
doing is thinking about what actually matters for student success and school improvement.
In the case of thinking about high school graduation, we realized that the transition
to high school is a critical juncture in students' academic careers, so this is a part that
actually matters a great deal. Through that we also want to have indicators so we can
chart what is actually happening around the things our research has determined matter
for student success and school improvement; the on-track indicator is an example of this.
We also spend a lot of time thinking about frameworks, because indicators can help us
measure things and figure out what matters, but that needs to be connected to a
larger sense of what's happening. We've done work, for example, with the five essential
supports for school organization that has really helped schools think about what they
can be working on to help them be more effective.
Finally, a big part of our work is about communication. We do publish an academic journal, but in
many ways we really see the primary audience as being the people doing the work in schools
and across the district, so it becomes really important for us to take our findings and
make sure that we are communicating them in ways that are really accessible to the more
general public audience.
We also try not to have them be kind of esoteric research findings, but really translate
them into something that becomes much more actionable in the real world. We spend a lot
of time thinking about how we put together our publications, we spend a lot of time doing
presentations, like what I am doing now, and another thing that we spend a lot of time
doing is putting together individual school reports.
We'll actually take our research findings, use the indicators that we've developed,
and then create reports for schools that give them their specific standing on how they look
on indicators, break it down by different subgroups, and show comparisons about how they
are doing versus the rest of the district. This is a way for schools to actually see
what our research means for what's happening inside of their buildings.
(slide 13) I'm going to be talking to you today about the on-track indicator, and as
I mentioned, this is something that we've been working on since the 1990s. In many ways
this goes back to the dissertation work of Melissa Roderick, who is one of our directors,
so it is something that people around us have been thinking about for a long time.
In the 1990s, we had a report called A Student Speaks that was looking at what was going
on inside of high schools that was causing students to drop out, and one of the
things that became really clear through this work was how ninth grade was a pivotal point
for students as they were making a transition to a new setting; so when kids were failing
their classes in the ninth grade, this was something that was really difficult for them
to overcome.
As we were thinking about this, we realized that we could tell schools that ninth grade
is really important, that core failure is a bad thing, but that's not necessarily the
sort of insight that's going to help them change their practice or actually see if they
are improving on working with kids as they are transitioning into high schools.
We developed the on-track indicator: an indicator of whether or not
students have failed no more than one semester course in ninth grade and have actually
accrued at least five full-year credits by the end of ninth grade, which would enable
them to be promoted into tenth grade. Once we had this indicator, we looked at the general
relationship to graduation, and it seemed like this was a good way to measure whether or
not kids were likely to graduate.
Later on we produced more work around validation, and at the end, on the far right, is an
example of an individual school report that we produced around the on-track indicator.
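The on-track definition described above is simple enough to sketch in code. This is an illustrative sketch only; the function and field names are my own, not CPS's actual implementation.

```python
# Hypothetical sketch of the ninth-grade on-track calculation described above.
# Thresholds come from the definition in the talk; names are illustrative.

def is_on_track(semester_course_fails: int, full_year_credits: float) -> bool:
    """A student is on-track at the end of ninth grade if they failed
    no more than one semester course AND earned at least five full-year
    credits (enough to be promoted into tenth grade)."""
    return semester_course_fails <= 1 and full_year_credits >= 5.0

# One semester failure but five credits earned: still on-track.
print(is_on_track(1, 5.0))   # True
# Two semester failures: off-track regardless of credits.
print(is_on_track(2, 5.5))   # False
```

Note that both conditions must hold: a student with enough credits but two course failures, or one failure but too few credits, counts as off-track under this definition.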
(slide 14) We have thought a lot about what makes for an effective indicator. We feel
that one of the reasons on-track has been successful is that it's really addressing
a problem that is a high priority for district leadership as well as leadership within schools
and for teachers and students and parents as well; everyone can agree that high school
graduation is an important thing to be thinking about.
When Arne Duncan became CEO, he put together an education plan, which laid out the
priorities for his administration, and in 2001 one of the things that the education
plan noted and saw as a big problem facing the district was the fact that 40 percent of
13-year-olds were not completing high school by the time they were 19.
It was really clear that dropping out was a serious issue inside of the district and
the Duncan administration was looking for ways to improve on graduation rates. As they
were looking around and trying to find ways to work on high school graduation, they noticed
the fact that we had developed this on-track indicator and it became a part of the accountability
system.
On-track went from this esoteric indicator based on our research to something that schools
were being told that they really needed to pay attention to.
(slide 15) In some ways this was kind of exciting, because now our work was actually becoming
a part of what was happening inside the district and it was being used, but the alarming part
of this was that we actually hadn't done any sort of real validation of this indicator.
John Easton was the executive director of the consortium and he was very concerned because
the on-track indicator was out there, schools were being held accountable for it, we had
developed it, but we actually were not sure about whether or not it was a strong predictor
of high school graduation.
(slide 16) Fortunately for everyone involved, it turned out that when we did a more rigorous
analysis it actually turned out to be very predictive of high school graduation. When
we look at kids who were on-track at the end of ninth grade, 81 percent of them graduated
within four years versus only 22 percent of kids who had been off-track at the end of
ninth grade.
And then these numbers grow a little bit when we look at five-year graduation rates, but
when we looked at this it became clear that on-track was a really useful tool in terms
of identifying who was likely to graduate and who was not. Although that was encouraging,
I think one of the first reactions that a lot of people had when they saw this was,
"OK, so you've got this fancy on-track indicator; isn't this the same thing as test scores?"
You could probably look at a kid's test scores at the end of eighth grade and have
a pretty good sense about who was going to graduate and who was not.
(slide 17) One of the other pieces of this work was making sure that the on-track indicator
was actually telling us something that the information schools already had was not.
(slide 18) We took a closer look at how on-track and test scores were breaking down; this slide
shows a somewhat complicated figure and it�s looking at the probability of students graduating.
On the far left side are kids whose eighth-grade achievement was in the bottom quartile and
on the right is the top quartile.
So in the bottom quartile only 42 percent of the students were on-track, but if you
look at what happened to the students who came in with low eighth-grade test scores,
but were actually on-track at the end of ninth grade 68 percent of them graduated four years
later. When we look at the other end of the spectrum at the top quartile and the students
who were off-track only 37 percent of them graduated four years later.
When we look at this it really becomes clear that how kids are performing in their classes
and whether or not they had actually passed them was telling us something that just looking
at their test scores did not.
(slide 19) Another reason why the on-track indicator has been really useful in this district
is the fact that it's malleable; it's something that school practitioners can actually
change.
(slide 20) When Arne Duncan first made high school graduation a priority, the conversation
was really different. When we talked to principals and to teachers and we asked, "What can
you do about preventing kids from dropping out of high school?" they generally went
to things that were at the extreme.
They talked a lot about gang violence, they talked about teenage pregnancy, they talked
about kids coming from really dysfunctional homes; all of which are serious issues, but
those are largely things that are external to school. One of the things that was really
useful about the on-track indicator was that it actually brought the problem of dropping
out into something that was under the control of people within the school.
(slide 21) We put together all kinds of different demographic information, including eighth-grade
test scores, whether or not kids had changed schools in elementary school, and whether or not
they were old for grade; we looked at race and ethnicity, we looked at a lot of factors, and
we were only able to predict who was going to graduate in four years with 65 percent
accuracy. That is compared to just taking the on-track indicator, which is at 80 percent;
when we combine all of those, it only goes up 1 percent, to 81 percent.
Just this one little indicator, on-track, is actually capturing a lot more than all
kinds of different pieces of information about students and how they look going into high
school. All of those other things are things that high schools really had no control over;
you can't really do anything about eighth-grade achievement. You can hope to increase
achievement, but you can't change the fact that there are going to be kids coming
into high school who are low achieving, and the indicator let schools focus on something that was
much more under the control of high schools.
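The kind of head-to-head accuracy comparison described above can be sketched in a few lines. The student data here are made up for illustration, and the real analysis was of course far more involved:

```python
# Toy sketch of checking a binary indicator's predictive accuracy against
# an outcome, as in the on-track vs. demographics comparison above.
# The lists below are invented example data, not the CPS data.

def accuracy(predictions, outcomes):
    """Share of students for whom the prediction matched the outcome."""
    hits = sum(p == o for p, o in zip(predictions, outcomes))
    return hits / len(outcomes)

# 1 = on-track / graduated, 0 = off-track / did not graduate
on_track  = [1, 1, 0, 0, 1, 0, 1, 1]
graduated = [1, 1, 0, 1, 1, 0, 0, 1]

print(accuracy(on_track, graduated))  # 0.75 in this toy data
```

In practice the comparison in the talk pitted this single-indicator accuracy against a model built from many demographic predictors; the striking finding was that the richer model barely improved on the indicator alone.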
They can look at the kids who are coming in to their buildings and start thinking about
what can we do to help them, what can we actually do to help make sure that these kids are passing
their classes? It really transformed the sorts of conversations that were going on across
the city about what you can do to prevent high school dropout.
(slide 22) The other piece that was really important about the on-track indicator was
the fact that it was a concept that was pretty easily communicated and that practitioners
actually had some strategies for what they might be able to do to have more kids pass
their classes.
(slide 23) One of the other follow-ups that we had to the original on-track report was
thinking about what actually mattered and what actually predicted high school graduation
besides on-track, and what would predict whether a kid was going to be on-track in ninth
grade. One of the things that really stood out was the role of absences in making sure
that kids were passing their classes and, ultimately, whether or not they were going to graduate
from high school.
In this figure we are looking at the number of days kids were absent in one semester;
we also included course cutting, adding cut classes up to eventually count as a full day if
they had cut enough classes. If you look at students who had been absent less than a
week per semester, 87 percent of them graduated; that's compared to 63 percent of students
who had been absent for between one and two weeks of school in each semester.
The difference of just a few days of school actually told us a lot about who was likely
to graduate and who was not.
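The absence measure described above, full days plus course cuts rolled up into day equivalents, could be sketched like this. The periods-per-day figure is an assumption for the example, not taken from the report:

```python
# Illustrative sketch of the semester absence measure described above:
# cut classes are accumulated and converted into equivalent full days.
PERIODS_PER_DAY = 7  # assumed class schedule; not from the report

def semester_absences(full_days_absent: int, classes_cut: int) -> float:
    """Total absence in days: full days absent plus cut classes
    converted into full-day equivalents."""
    return full_days_absent + classes_cut / PERIODS_PER_DAY

# A student with 4 full days absent and 14 cut classes over a semester
# lands at 6.0 days, past the roughly one-week-per-semester threshold
# that separated the 87 percent and 63 percent graduation groups.
print(semester_absences(4, 14))  # 6.0
```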
Whether or not kids were coming to school and whether or
not they were going to classes really became something that teachers and principals started
paying a lot more attention to; they were seeing that it was actually related to an
outcome that they cared about, and they were starting to organize themselves around the
idea that it was important for kids to be coming to school.
(slide 24) Another thing that the district did in 2008, which was just after the What
Matters report came out, was to develop online early warning reporting systems that
were set up to help schools identify who was at risk of falling off-track, taking different
characteristics that kids had prior to ninth grade.
Schools could take a look and see whom they should be paying additional attention to,
to make sure they were passing their classes. In addition to this, they also put together
a system that identified kids who were failing at five weeks and looked at their absence rate.
There is a lot of information that was being gathered after students had started ninth
grade; a teacher could log in and see who was failing, see if a kid was failing not just
their class but other classes, and really start thinking about what was happening to
cause kids to not be doing well in their classes.
The Chicago Public Schools also put together a series of credit recovery reports, which
were for upperclassmen; that system flagged students who were behind on the number of credits
that they were going to need in order to graduate. What happened with all of this? The Chicago
Public Schools put out a lot of effort; initially it was focused around accountability,
and later on they really started focusing on this data system to help schools think
about on-track.
(slide 25) On this slide we see on-track rates across the district from 2003 to 2011. Accountability
went in around 2003, and we see that actually not a whole lot happened for quite a while.
Initially, accountability was how people were thinking about how you actually
use something like an indicator, more in a punitive sense, and that did not seem
to do a whole lot to change on-track rates across time; in fact we see this little dip
initially. Where we actually start to see really big increases in on-track rates was
when the Chicago Public Schools put together these data systems that were giving schools
ways to actually engage this problem.
Schools could see which kids were struggling and which kids were at risk of falling off-track,
and this is really a place where the power of data became more apparent. Initially
it didn't seem like just providing information would be enough, but the data that the district
was providing was helping teachers think about a problem more systematically than they had
in the past.
Before, when we talked to teachers about this, they had been concerned about course failure, they
had been concerned about absences, but they really didn't have a good way of thinking
about the big picture, about what was happening with students. When they actually
had these online systems, they were able to see the patterns of behavior that students
were having, and that really made a huge difference in what was happening.
(slide 26) Why was this effective? One of the really important pieces was
the fact that this is a problem that was a high priority in schools. It was a high priority
for the district, but it was also something that principals and teachers were really struggling
with, and the data systems that were developed were set up not just to provide a whole
slew of information; they were really targeted at helping people solve something they
were already trying to solve.
It was a way for people to actually do their job more effectively as opposed to just being
one more thing that's on the long list that principals and teachers have to be trying
to incorporate in their daily work lives. One of the other exciting pieces of this is
that it really started changing how people were thinking about a problem.
As much as I love data, data is not the magic solution that's going to solve all your
problems; it's not that you look at the data and immediately know exactly why a kid is
failing or why he's not coming to school. But it really gave practitioners a good start
to think about, "OK, so Johnny seems to be absent on Mondays and Fridays; what's going on?
This is different than being absent all the time. Or is Mary absent just first period?"
Maybe it's more of a problem of not being able to get up in the morning, so if she is
missing her first-period algebra class, sending her to algebra tutoring is probably not
going to be the solution to not getting up in the morning.
It really helped practitioners engage in the problem based on better information. Before,
the idea of trying to address the problem of dropout seemed really overwhelming to people;
there are so many factors that go into why kids drop out. How do you actually even begin
to address it without creating these large systems that involve community support,
figuring out ways to prevent teenage pregnancy, trying to reduce gang violence?
All of those continue to be problems, but the fact that now practitioners had leverage
points that were within their schools really changed how they were thinking about it and
the sorts of strategies that they were using to do something about course failure and school
attendance.
Finally, there's just a lot of feedback that was going on, and the data was really something
that was actionable; schools could take a look at it and start to think about what they
might be able to do about the fact that a kid was absent, which is much easier than
trying to solve all kinds of different factors that are external.
(slide 27) This is another part of my life that I've been doing a lot of thinking about
for the past seven or eight years: thinking about college readiness, how we actually
help more students enroll in college, and how we make sure that they're graduating from college.
At CCSR we've had a series of reports. We started out trying to think about the academic
factors that are barriers to college access and college graduation; we've also done a
lot of work thinking about the support side, so what are the barriers that kids have in
the college enrollment process, and we've also done a lot of work around the ACT.
(slide 28) What can we learn from this experience with the on-track indicator
that's really going to help us think about college readiness? I think one of the things
that really strikes me about college readiness is just how much more complex it is than trying
to figure out how to get more kids to be on-track so that they're graduating from high school.
Maybe this is just the way it is when we don't know what to do, because I think this is how
people felt about high school dropout in Chicago 10 or 15 years ago: where do you actually
begin? One of the things about college readiness is that it's something people are talking
about across the country; all kinds of districts and states and schools, everybody is really
trying to figure out college readiness, so there's a huge appetite for improving college
readiness. But I think in many ways we really have not gotten a handle on what this means,
because I think it's so much more complicated.
One of the things we really care about is making sure that we're developing indicators
that are predictive, but what makes this sort of work hard is that we don't actually know for
sure what our outcome is, and it's very difficult to do a good analysis if you are not positive
what the outcome you're trying to predict is.
There is a lot of work that's been looking at college readiness in terms of college enrollment;
a lot of people talk about college readiness in terms of whether or not kids are able to
enroll in and pass credit-bearing courses, and college graduation is clearly the ultimate goal.
The other wrinkle with higher education is that there are just so many different
types of colleges.
What it takes to be ready at a local community college is really different from what it would
mean to be college-ready at an elite school; it is also very different what it takes
to be successful if you are going to a small liberal arts school versus a large
state university, and that's above and beyond just academic qualifications. The complexities
of what is going on in higher education are quite a bit different than what is happening
in high schools.
There is the whole question of what on-track to college readiness even means. We want kids to
be college ready, but where does this actually stand at different grade levels? This is actually
something that Susan (Dr. Susan Fairchild) has done a lot of great work on and will be
showing you a bit about in the next presentation.
(slide 29) Another thing that's really important is that we're actually getting information
that is above and beyond other characteristics. There has been a lot of work looking at college
readiness in terms of test scores, but test scores are quite often not providing us more
information than what we would know from demographics; they provide some information, but they
are not necessarily the most predictive, so that calls into question what we should be doing
there.
Another really big question is around this whole idea of malleability: are the different
college readiness indicators that we're considering actually things that schools can
change? Are there actually strategies for improvement? I am skeptical of test scores
playing this role, because in Chicago there's been a tremendous amount of effort to try
to improve ACT scores for the past 10 years and we've seen an increase of one point.
It's good to see progress, but that is not a lot of payoff for a lot of work that's
been done within schools by teachers and students, and a lot of resources put forth by
the district; it made me start to wonder how malleable test scores really are.
(slide 30) That is where I come from in thinking about the on-track work and the questions
it raises for me around college readiness. In the work that CCSR has been doing with
the Annenberg Institute and the Gardner Center, we're thinking about college readiness indicator
systems, and we've split the idea of college readiness into three different dimensions;
I think this is a really useful way of encompassing a lot of what we would actually
want from our college readiness indicator systems.
We set these up as three different dimensions, but as I'm sure is pretty clear from
looking at them, they are really interrelated. Traditionally, when we are thinking about college
readiness, we're focused a lot around this idea of academic preparation: test scores,
GPA, what courses students take. All these things are clearly important for college readiness,
but those things alone are not really what's going to make a student successful
once they are on a college campus.
There's a lot that goes on with this concept of academic tenacity, and in our work at the
consortium we refer to many of these same ideas as noncognitive factors; above and
beyond what you know and what you can do, things like having strategies for actually
engaging in learning, being able to organize yourself around studying, and things like mindset.
Do you actually believe that you can do the work, and are you going to stick with something
when it's difficult?
All of these things are part of what makes people successful in college, and probably
much more successful in life in general. Whether you're taking calculus for the first
time or you are taking ceramics for the first time, if you're actually willing to stick
with it and you have strategies for helping yourself learn, those are the things that
are going to help you be successful.
The final piece is college knowledge, which is something that David Conley has done a
lot of work around and this is thinking about do students actually know how to get to college,
do they know how to make good college choices, and are they good at navigating the systems
that they are going to encounter once they get to college?
(slide 31) As we were thinking about academic preparation: this is a figure from
Crossing the Finish Line, a book that came out a couple of years ago in which the authors
were asking what actually matters more for six-year graduation, SAT scores or
GPA?
This is one of the things that we have been struggling with: what actually
are the right sorts of indicators for thinking about college readiness, what should schools
be paying attention to, and what should people in higher education be thinking about as they're
trying to figure out how to develop their incoming class and how they should think about placement?
In this work, they looked at six-year college graduation rates by different SAT scores;
these are six-year graduation rates at state colleges in North Carolina. When they broke
students down by SAT scores, controlling for GPA, there's actually
not a whole lot of difference in students' probability of graduating from college within
six years. This seems really surprising because in many ways the SAT was
designed to determine who was likely to graduate, but once they controlled for GPA and demographics,
SAT was not providing a whole lot of additional information. This is in contrast to looking
at GPA: if we look at GPA controlling for SAT scores, there's actually a really big
difference just between students who are within one grade point of each other. If you look
at the students toward the lower end, only 39 percent of them are graduating within six
years, as compared to 72 percent of students who graduated from high school with at least
a 3.67 GPA. We've had similar results in our own work, and a lot
of other people have had similar findings: it really looks like GPA matters
a great deal more than test scores for determining who is going to graduate.
(slide 32) What do grades measure that test scores don't? This is where the noncognitive
factors connect and tenacity comes into play, because what it really looks like here is
that grades are measuring something that test scores are not. Tests are designed to measure
content knowledge and academic skills, and grades are also intended to measure what kids
know and what they are able to do, but grades are also capturing the noncognitive factors,
or academic tenacity.
Teachers are also grading students on things like whether they are actually engaging in
the work and whether they are turning in their homework; all of these things are incorporated
into grades, and that tells us a lot more about who is going to graduate from college versus who
is not than test scores alone.
(slide 33) Another piece of this that is important is college knowledge, and this is work from
Potholes on the Road to College. We were looking at the different stages that students go through
in the process of enrolling in college and this is really showing how students struggle
throughout the whole process.
If you take kids who say that they want to get a four-year degree or higher,
only 72 percent of them respond to a survey saying that they are planning
to attend one in the fall. We lose more kids when it comes to applications:
even though these are students who say they plan to attend college, they actually
don't apply to a college by June of their senior year.
There's also another set of kids who apply but are not accepted, although this is not
nearly as big of a drop-off as you might expect given the fact that in Chicago qualifications
tend to be really low. Then we look at students who actually enroll and we see another
drop-off here, and it looks like a piece of this might be financial aid: there are
a lot of kids who are accepted to a four-year college, but on the same survey they say
they have not submitted their FAFSA.
This is true of kids across different levels of achievement; knowing how to get to college
is something that kids really struggle with.
(slide 34) This slide summarizes a lot of the work we're doing right now around college readiness.
Each of these lines represents a college that is commonly attended by Chicago Public School
graduates, and it shows their six-year graduation rates by their graduating high
school GPA, controlling for ACT scores, race/ethnicity, gender, and other demographic characteristics.
The first thing that's really striking is the fact that, not surprisingly, the higher
your GPA the more likely you are to graduate from college.
The second thing that's really notable is the fact that if you look at kids who graduated
from high school with a 2.0, they are unlikely to graduate regardless of where they go to
college. These are kids who you would probably say are not college ready; it doesn't matter
where they go, the probability of them graduating is really low.
The most striking part about this graph is taking a look at the other end of the spectrum,
kids who are graduating with a 4.0 from high school; this is actually where we
are seeing the most variability.
You'd think that a student who graduates from high school with a 4.0 is going to be
successful regardless of where they go to school. However, that bottom black line is
Northeastern Illinois University, where students who graduated with a 4.0, all things being
equal, have a probability of graduating from college in six years of around 30 percent.
This is compared to students who go to Northwestern University, where almost 100 percent of them
are going to graduate.
There are probably some issues with selection, but that is part of the follow-up work that we're
doing now. These findings hold: you can take two kids who look the same and who in many
ways look college ready, and depending on where they go to college, their probability of obtaining
a degree is going to vary significantly.
I think this is where this whole concept of college readiness becomes extremely complicated, because
if you can take two kids who are identical in all observable ways and where they go to
school is going to determine whether or not they get a degree, it really makes me wonder:
is college readiness enough? We also need to be paying a lot of attention to college
choice. We can't expect kids to be so college ready that no matter where
they go we know they have almost a 100 percent probability of graduating.
This makes me think that college readiness is certainly the responsibility
of high schools, but there is also a really big role in thinking about what's going
on in higher education and why it is that we have kids who have been very successful
in high school but may not be nearly as successful in college.
Rachael Berger (slide 35) Thank you so much Jenny for that
very rich presentation. We've had a lot of excellent questions come in. We only have
time for a couple of minutes of questions, but we will try to get to the rest of the questions
with what time we have at the end. One question that we received: For the online reports,
do you have a sense of how often schools look at those reports and whether they are used?
Jenny Nagaoka That's a good question. To be honest, when
we talk to teachers and principals we tend to talk to the people who are most
data savvy, and those people are looking at them on a regular basis.
For other teachers who may or may not be as savvy, I don't know. One of the things that
I did not mention that's really important to know about this is that CPS really put
forth a lot of effort to hold professional development sessions to develop teachers' capacity
to actually use this information.
Just having a great data system is not enough; it's really important that it is accompanied
by a lot of interaction between the people who are developing these data systems and
the users.
Rachael Berger (slide 36) Thank you so much. We will get to
some additional questions towards the end of our presentation. I want to introduce our
next presenter, Dr. Susan Fairchild, Director of Data Analysis and Applied Research for
New Visions. Dr. Fairchild joined New Visions for Public Schools in 2009, where she supervises
the development of reporting design to convey student performance to a wide variety of audiences.
She oversees the implementation of a state-of-the-art student information platform in New Visions
schools, and she is going to share this work and practices for educators in using indicator
tools. Welcome, Dr. Fairchild.
Susan Fairchild: Thank you for inviting me to share some of
the work that we are doing here at New Visions; I'm delighted to be here. Let me give a
quick rundown of how I will approach the next 20 minutes. First, many of you might be unfamiliar
with New Visions for Public Schools, so I'll spend a few minutes talking a little bit about
the work that we do here.
I'm then going to move right on into our first phase of early warning work, which spans
from about 2007 until about 2010, and describe some of the core components of that work.
It's pretty important to take a few minutes to talk about Phase 1 work, if for no other
reason than that Phase 2 builds on it.
Our Phase 2 work really is about the reconceptualization of risk and how that reconceptualization has
important implications for how we structure early warning systems and how we think about
career and college readiness. Finally, I will end with our new school-level stock and flow
tool that we're just beginning to roll out.
(slide 37) Let me take a minute to describe where our data work takes place. New Visions
was established in 1989; it's a nationally recognized school reform organization.
We're based here in New York City, which is home to approximately 1.1 million public
school kids. Many of you might know of New Visions through the small school initiatives
that started back in 2001. We helped to open around 100 small schools of choice, and a recent
MDRC report found that these small schools have sustained a 10-year, 10-percentage-point
increase in student graduation rates. In 2007, we became a partnership school organization
for New York City's Department of Education, and we currently provide operational
and instructional support for 75 schools in our network.
Then in 2011 we became a charter management organization; so far we have opened four charter
schools and next year we're going to be opening two more, so together our PSO and
CMO serve approximately 42,000 students. We're about the size of San Francisco's school
district. We primarily work with high schools: 75 percent of our students are in small
or large high schools, and about 14 percent of our kids are in grades 6-12 schools.
(slide 38) Here you can see that we serve a fairly high-need student population. In
our partnership school organization, our PSO, 77 percent of our students receive free or
reduced-price lunch and in our CMO that number goes up to about 83 percent. About 15 percent
of our students are special education and about 11 percent are English language learners,
about 74 percent of students in our PSO schools are black and Latino; this goes up to about
95 percent in our charter schools.
(slide 39) From 2007 until 2010 we were focused on putting in place the foundational components
of our early warning system.
(slide 40) The problem that we were grappling with was, and continues to be, that when students
fail to hit critical benchmarks, everything that we just heard Jenny talk about,
high attendance, continuous credit accumulation, high grades, the likelihood of them dropping
out of high school or not enrolling or persisting in postsecondary institutions increases.
I think Jenny did a really beautiful job of highlighting that, and we know that kids who
are on track to graduate by the end of their freshman year are 3.5 times more likely to
graduate in four years than students who are off track by the end of freshman year.
Relatedly, we also know that students who are off track freshman year are more likely
to drop out of high school, so the Chicago research really was foundational in terms
of our thinking about this problem. In addition, we also know that not graduating on
time and failure to hit critical benchmarks are made a lot worse when you have continuously
changing graduation requirements; New York City's graduation requirements have changed
seven times in the last 11 years.
It is made worse when parents and students, for various reasons, don't know what
those graduation requirements are. Here in New York City, English is not the first language
of many of our parents and students, and when school staffs don't have access to
great data systems, or the data systems are fragmented, all of these things compound the
problem.
This has been the premise upon which many of our first-generation early warning systems
have been built. This work is really about integrating data systems so that we are catching
students who are on the verge of dropping out or on the edge somewhere of not graduating
on time.
(slide 41) Once we wrapped our heads around this problem, the goal was: how do we make this
problem less mysterious for educators, for students, for parents? How do we make this
problem visible? How do we combine our core student data, our credits, our grades, our
exam scores, in such a way that we're quickly identifying when kids are close to that edge?
This work reminds me a lot of the work that came out of health care, from the Institute
of Medicine. There's a big, important report, To Err Is Human, which found that
medical errors are oftentimes the product of fragmented systems and people not talking
to one another, really basic, foundational things, and that is what this Phase 1 work
reminds me of.
We did three things: first, we created clear benchmarks; then we created
multiple tools for multiple audiences; and finally, you've got to provide data in an
integrated system that is real time, so we moved many of our
tools over to the DataCation platform. In short, what we did is we organized ourselves.
Think of it like financial accounting: we figured out where we are in the red and where
we are in the black. There is something very fundamental about what we were trying to do
in this first phase of work.
(slide 42) I want to spend just a few minutes talking about benchmarks. This is the first
big piece of work that had to be put into place; it really does underpin most of our
data work, and it is a useful, if somewhat blunt, measure that we use to characterize
different levels of student achievement within and across our 77 schools.
What we find is that it's pretty sensitive, but it's not very specific. The rest of
this presentation is going to be a little complicated if we don't have a general understanding
of how this benchmark system works. We created a four-category, color-coded indicator system;
this is a traffic light system, so blue is on track to college readiness, green is on track
to graduate based on our high school graduation requirements, yellow is almost on track, and
red is off track.
What we are doing in this metric is combining overall credit accumulation with our core
subject credit accumulation, while at the same time looking at Regents exam scores and
semester sequence. This metric is based on evenly paced credit accumulation throughout
high school, with students earning 11 credits each year.
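To make the pacing idea concrete, here is a minimal sketch of how a traffic-light category could be assigned from credit accumulation alone. The thresholds, function names, and cutoffs below are hypothetical, chosen only for illustration; New Visions' actual metric also folds in core-subject credits, Regents exam scores, and semester sequence.

```python
# Illustrative sketch of a four-category, traffic-light on-track metric.
# All thresholds here are assumptions, not New Visions' actual cutoffs.

def expected_credits(semester: int, credits_per_year: int = 11) -> float:
    """Credits a student would hold under evenly paced accumulation."""
    return credits_per_year * semester / 2  # two semesters per year

def on_track_category(earned: float, semester: int) -> str:
    """Map earned credits to a color category relative to the expected pace."""
    pace = expected_credits(semester)
    if earned >= pace * 1.1:   # ahead of pace: on track to college readiness
        return "blue"
    if earned >= pace:         # on pace: on track to graduate
        return "green"
    if earned >= pace * 0.8:   # slightly behind: almost on track
        return "yellow"
    return "red"               # well behind: off track

print(on_track_category(earned=11, semester=2))  # exactly on pace: "green"
```

The same function can then be applied semester by semester to build the longitudinal tables discussed later in the presentation.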
(slide 43) The second key element has been the development of data tools that speak to
different audiences. What school staff need to see is not necessarily what parents and
students need to see, or what teachers need to see.
We try really hard at New Visions to make the information that is conveyed in these
data tools visually accessible and user friendly at any level of aggregation, whether
we're talking about students or whether we're talking about principals. Our
goal for any stakeholder is to use our data tools to simplify graduation and postsecondary
requirements.
The goal is to aggregate and disaggregate very easily; it's to facilitate conversation.
What you are looking at here is our college readiness tracker; we took credit accumulation
requirements, attendance, and college readiness metrics and put them in a format that is really
easy for students and parents to understand.
We have seen this tool used over and over again to facilitate conversations between
school staff and individual students and parents, mainly in support of goal setting and really
heightening awareness.
(slide 43) Here what you are looking at is our ninth-grade tracker. It is often the case
that credit and assessment data aren't available until the end of the first semester of high
school; this is a long time not to intervene.
What we did is we created a ninth-grade tracker to capture marking period data and to begin
cultivating awareness around the importance of credits, grades, attendance, and exams
as they relate to staying on track for graduation. We wanted our freshmen, right from the
get-go, to be aware of some of these indicators and their importance for their
future.
(slide 44) By 2010 we had really become victims of our own success. We could no longer keep
producing these data tools at the rate that schools needed them, so we partnered with DataCation.
This was a young group of teachers who developed an innovative technology that had multiple
portals: a grade book and assessment portal, an NCLB reporting portal, and a student
and parent portal.
So, we moved all of our data tools over, and I'm happy to say that as of today, we've
just released a report on the adoption of DataCation; it has been a rather successful
initiative. In 2009, DataCation was in only 23 schools; now it is being used in 400 schools
in New York City and 3,000 schools nationally in five different states, in a short period
of time. We're happy for them.
(slide 45) In Phase 2 we're building on all of our Phase 1 work. We simply could not
have reached the place where we are now without all of the time and effort that we put
into our core components.
(slide 46) Phase 1 was about catching students who were on the edge of something unfortunate
happening like dropping out or not graduating on time. We have been focused on a student
at a moment in time and we have tended to use the word at-risk to describe a student
at a moment when that symptom has presented or is about to present. Phase 2 is about the
interconnectedness of schools and students; all parts of a school are connected.
The ninth-graders who are at risk of dropping out and the seniors who are at risk of not
graduating on time are not independent of one another. They vie for the same resources
from the same educators in the same school. Phase 2 is beginning to look at the school
structures that systematically produce risk and that tend to be less visible than the
student who is falling off track.
Jenny just put up a very powerful visual with that last slide looking
at college choice, and the powerful message of that slide is that our academic institutions
can both amplify and diminish performance. We really need to understand the school
level here and the role that the school plays in shaping student outcomes.
(slide 47) You'll find if you read any of our materials that we are strong believers
in systems thinking. The other element that we're exploring here is a phenomenon called
"shifting the burden": what happens when we don't address the root cause and only
look to fix immediate problems.
Many of you who are in postsecondary certainly understand the impact of having to remediate
kids who graduate and go on into postsecondary careers; this shifting of the burden is a very
real phenomenon for you all.
(slide 48) Here we have a basic table representing longitudinal data for a school; we're looking
at student achievement across eight semesters for a single cohort of kids. From first semester
all the way to diploma, we see how many kids are blue or green or yellow or red.
It is a basic table, but in the language of systems thinking we are being very specific
when we use the word stock: a stock is an accumulation, a stock is a noun. What this
table shows us is an accumulation of achievement that has built up over time, but
there are also some limitations with this.
As a static, moment-in-time measure, stocks don't capture a lot of important information
like a student's trajectory. We can begin to address the deficiencies of a simple
table by adding data attributes that characterize movement. Intuitively, it's obvious that
two students who end up earning a Regents Diploma might have taken different pathways
to get there.
Our stocks, the simple table we looked at on the previous slide, don't capture
this movement. If a stock represents a specific moment in time, then our flows represent
the dynamic quality, the movement of students between semesters. From this we get two new
attributes: volatility and direction.
Volatility is the number of category transitions; direction tells us whether those transitions
are up or down. If you look at that first kid, the on-track-to-graduate kid, we see that the
first kid has two major transitions: in second semester, and then again between fifth and
sixth semester. The second kid has five transitions, and so even though they end up in the same
place, we clearly see that there is something dramatically different happening with these
two kids that you're not going to get if you are simply looking at your stocks.
I could be wrong, but I feel like this particular slide helps to visualize what Jenny means
when she talks about the predictive power of GPA. GPA is sort of inherently
getting at volatility and direction.
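The two flow attributes just described can be sketched in a few lines. This is an illustrative computation, not New Visions' actual implementation; the numeric ordering of the color categories and the two example student paths are assumptions made for the sketch.

```python
# Sketch: compute volatility (number of category transitions across semesters)
# and direction (net up, down, or flat) from a student's semester-by-semester path.

LEVELS = {"red": 0, "yellow": 1, "green": 2, "blue": 3}  # assumed ordering

def volatility_and_direction(path):
    """path: a student's category per semester, e.g. ['green', 'yellow', ...]."""
    transitions = 0
    net = 0  # positive means net upward movement
    for prev, curr in zip(path, path[1:]):
        if prev != curr:
            transitions += 1
            net += LEVELS[curr] - LEVELS[prev]
    direction = "up" if net > 0 else "down" if net < 0 else "flat"
    return transitions, direction

# Two hypothetical students who both end at green but took different paths:
steady = ["green", "green", "blue", "blue", "green", "green", "green", "green"]
volatile = ["green", "yellow", "green", "red", "yellow", "green", "yellow", "green"]
print(volatility_and_direction(steady))    # (2, 'flat'): few transitions
print(volatility_and_direction(volatile))  # (7, 'flat'): same endpoint, bumpy ride
```

Both students finish in the same stock category, but the flow attributes distinguish the steady path from the turbulent one, which is exactly the information the stock table alone loses.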
(slide 49) Now what we can do is take a slice and look at the movement between
semesters.
For this example, we're looking at movement between the third and fourth semesters. We
see in this visual the kids moving up and the kids moving down; this visual alone gives
a whole new appreciation for at-risk and vulnerability.
We traditionally define at risk as the red kids, and also the yellow kids because of their
close proximity to the red, off-track kids. But when we look at this visual we clearly
see that all kids are at risk, but in different ways and at different moments in time.
The yellow kid who moves up to green is hardly out of the woods; this student is at risk
of not being able to maintain and sustain that progress. The blue, on-track-college-ready
kid who moves down to green is certainly at risk and is showing evidence of lower performance.
The green, on-track kid who moves up to blue is at risk of not being able to sustain that
momentum. There are a lot of different scenarios that we as educators need to be very mindful
of.
(slide 50) Now when we begin to combine our stock and our flow, we have a lot more information.
If we look at that almost on track category, we went from 32 kids at the end of the third
semester to 52 kids in this category by the end of fourth semester.
But now we can clearly see that the increase in yellow at the end of semester four is not
the result of the red off-track kids moving up, but it is the result of the green and
blue students moving down. That is a really powerful piece of information.
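The same decomposition can be sketched with a transition counter: the stock is the count per category at each point in time, and the flow is the breakdown of movements between categories that produced those counts. The student records below are made up, constructed only to mirror the 32-to-52 yellow example just described.

```python
# Sketch of a stock-and-flow decomposition between two semesters.
# The (semester 3 category, semester 4 category) pairs are fabricated
# for illustration to match the 32 -> 52 yellow example from the talk.
from collections import Counter

moves = ([("green", "yellow")] * 14 + [("blue", "yellow")] * 8 +
         [("yellow", "yellow")] * 30 + [("yellow", "green")] * 2 +
         [("red", "red")] * 20 + [("green", "green")] * 40)

stock_s3 = Counter(before for before, _ in moves)  # stock at end of semester 3
stock_s4 = Counter(after for _, after in moves)    # stock at end of semester 4
flows = Counter(moves)                             # movement between the two

print(stock_s3["yellow"], "->", stock_s4["yellow"])         # 32 -> 52
print(flows[("green", "yellow")], "moved down from green")  # 14
print(flows[("blue", "yellow")], "moved down from blue")    # 8
print(flows[("red", "yellow")], "moved up from red")        # 0
```

The stocks alone would only show yellow growing from 32 to 52; the flow counter makes visible that the growth came from green and blue students draining down, not from red students moving up.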
(slide 51) And then, when we put it all together, stocks and flows produce an important
perspective for schools. We see the continuity and the flow of student progress over time.
This chart that you're looking at represents a single cohort of students.
When we begin to compare these progress-to-graduation maps over time for multiple cohorts, we begin
to see something rather profound. By comparing different cohorts, that is, by looking at these
different maps and holding them up next to one another, schools can begin to identify
where interventions are needed for the subsequent cohort. These maps begin to show the presence
of feedback loops in a school that are driving school and student performance.
We're giving this the rather dramatic name of structural volatility, and let me explain
that for a minute. Structural volatility is a feedback loop; a feedback loop is a closed
system, that is, it captures a circular relationship where X causes Y, which then loops back
around and causes X.
This is different from the linear relationship which says X causes Y. What this means is
that school structures shape student performance, which then shapes school structures. It means
that schools influence students and then students influence schools, and this is a loop; one
reinforces the other.
It also means that schools induce student movement and this is both purposeful and inadvertent.
For example, a purposeful intervention is when a school adopts intensive credit recovery
for seniors who are at risk of not graduating on time, this intervention also has inadvertent
or unintended consequences because we are so busy making sure our seniors graduate on
time we unintentionally draw resources away from our freshmen, sophomores, and juniors;
all parts of a school are connected.
(slide 52) So, we have recently developed a prototype stock and flow tool. I'm going
to spend a few minutes walking through this tool.
It's interactive and it's up on our New Visions website along with several blogs and
reports that document this tool.
I'm presenting it to you in a PowerPoint format, so we're going to lose some of that
interactivity, and I should also note that we are moving this tool, as we speak, into
DataCation so that it reaches a much wider audience.
(slide 53) If you were to go onto our website, you would see this tool and you would have
the opportunity to click on "about the tool," which provides documentation about what it
is. You could click on "behind the metrics": this gives a nice breakdown of the
metrics that underpin this tool.
You can "explore the data": this is real data that schools and others can access so
that you can really go in and see how, from one semester to the next, school and
student performance has changed over time. And then "apply the tool."
Here what we're trying to do is offer ideas on how to use this tool in practice.
(slide 54) If we were to click on "about the tool," one of the first things that we
see is how this tool implements our college readiness metric. Here you can clearly see
how the different colors represent student performance categories.
You might note that we have expanded the categories from four to six. What we found was that
yellow, that almost on-track category, was pretty wide, so we wanted to break it down a little
bit more and make the tool a little more sensitive.
(slide 55) You can click around in the tool, and in this example you can see how the width
of the color band represents the percentage of students within a performance category. In
this example, you can see how 33.7 percent of those kids are draining and
moving into that red, off-track category.
Likewise, you can see that there is a certain percentage of kids up in the green band who
are flowing out and moving into a lower performance category. At the same time, we also see
a substantial number of green kids who stay green.
(slide 56) Here we see a little bit of information on our metric; you can literally see how
the metric is calculated and how it links to a certain moment in time in a stock and flow map.
But I should also say this is actually a pain point for us that we're still debating internally.
The changes in our graduation requirements here in New York City are the result of phasing
out what we call the Local Diploma. This is not a rigorous diploma and it certainly needed
to be phased out, and this phase-out has happened over the last four or five years, but it
also has really important implications for our metric and how to fairly present longitudinal
data.
For instance, do we use today's metric for students who graduated back in 2008, even
though those kids had different graduation requirements?
There are pros and cons to both, and for standardization purposes, being able to really look
across multiple years of data, it's awfully helpful to have the same metric throughout, but
at the same time it's not an entirely fair comparison. You can imagine it's endless hours
of fun for us here at New Visions debating this.
(slide 57) Here's a screen shot of an example of how you can apply the tool. We have a few
different scenarios up on our website, but in this particular screen shot we're asking
school leaders to think about the following: when you look at this map, where are you seeing
large flows of students draining from a higher performance category into a lower performance
category?
What interventions or policies at a school level can potentially stop that draining?
We can see that this draining happens at different moments in time, reinforcing the point I
made earlier that all kids are vulnerable, but in different ways and at different moments
in time.
Then the next logical question is: why are so many kids draining? This brings us to the
work with DataCation that we're currently engaged in, moving this tool over to
their platform, where the power of functionality, really being able to click and drill down
and have students pop up, is the next level of iteration that this tool will be
undergoing soon.
I want to go ahead and end my presentation here and open it up for any questions.
Rachael Berger: (slide 58) Thank you so much, Susan, for such
an in-depth presentation. Again, we've got a number of questions and we're not going
to be able to get to all of them. One question was: How do you make sure schools with limited
understanding of data know how to use the stock and flow chart?
Susan Fairchild That's a good question. That defines all
of the data work that we do here at New Visions, and we have a lot of different mechanisms
for working with educators.
The first thing that we have is a data specialist network; I think that it's the only one
of its kind here in New York City.
Many years ago the Department of Education mandated that all schools have a data specialist,
so one of the things we did was convene monthly sessions with these folks,
seeing them as people who can turnkey in the schools. We train them and work with them
eight or nine times throughout the year; we're very available to them.
That's one mechanism. The other is that we have convenings where we begin to have the
conversation. I also want to keep in mind that with something like stock and flow we have a
lot of documentation, and when we roll it out in DataCation we're going to be able to track
the extent to which it is used and how people are able to make meaning from it. We've also
begun to pilot it with a few of our principals, who have given us really good feedback.
Rachael Berger: Thank you. A second question was: What role
is there for postsecondary institutions in supporting this work? And I'm hoping that
Jenny you might also provide a quick response to this as well.
Susan Fairchild: One of the things that we hope to do with
the stock and flow tool is to extend it into postsecondary. The most important
thing to remember, for instance, is that shifting the burden happens both ways.
High schools can shift the burden over to postsecondary, just as elementary and
middle schools shift the burden over to high schools; I see high schools as the last line
of defense. If we're not able to really help these kids at this moment in time, then
their opportunities in postsecondary become much more limited; their opportunities
have narrowed significantly if we can't reverse some of the performance gaps in high
school.
But I also want to say that by the time it gets to high school, what was once a chronic
problem starting all the way back in kindergarten is now an acute problem. I see that this
problem of kids being off-track represents a systemic problem across our entire district,
and it's not something that an individual school can necessarily change for the entire system.
I think that in terms of what postsecondary schools can do, it's being very aware of
the good, hard work that is happening in our high schools right now, and continuing to have
the conversation about what are the appropriate indicators and metrics that those of us in
high schools need to be looking at; here I think Jenny can certainly add to that.
Jenny Nagaoka: It's a really interesting area because I've
talked a lot about how GPA is more predictive than test scores for who is going to be successful
in college and who is going to get a college degree, but there's still a lot that we
just don't know about why it is that two kids who seem to be the same coming out of
high school have really different outcomes when they hit college campuses.
This may be the researcher in me, but I feel like this is something we really need to be
doing a lot more work on. But I think it's also a place where postsecondary institutions
need to be thinking about why some kids are successful and some kids
are not, when it's not always something that can be identified by looking at their college
applications. There certainly are risk factors that we know about: when students
are part-time, if they're working outside of school; there are any number of ways to
identify risk factors. But what I think we actually want would be something that would
roughly be the equivalent of an on-track indicator for college graduation.
I think there is a lot of work that could be done on the research side and the practice
side to start to really know what puts students at risk of not graduating when they hit
college campuses, and to develop data systems and have colleges go through
the same sort of process that's going on in the New Visions schools and in Chicago, where
they are actually thinking about students in a systematic way and really trying to help
them be successful when they are there.
Rachael Berger: (slide 59) Thank you both so much. I do want
to share with our participants that these are additional resources that you can use
to find out more about our presenters' work. And again, you'll be receiving this PowerPoint,
as well as a survey that we would love you to respond to.
(slide 60) If you need further information on REL Northwest and our alliance, please feel
free to contact Jacqueline or me. Thank you again to our presenters for a wonderful conversation;
we look forward to continuing this dialogue in the future.