District-Determined Measures and Assessment Literacy Webinar 1
Ron: Good afternoon, everyone. Thank you for joining us for today's webinar. This is the first in an eight-part series designed to support districts in identifying and executing their approach to implementing District-Determined Measures. My name is Ron Noble, and I am joined today by my colleague Samantha Warburton. We are the leads of the educator evaluation project for the Department of Elementary and Secondary Education. We're really thrilled to be able to join you today to introduce the series, tell you about things to come, give you a general overview of the District-Determined Measures initiative, and set you up for the rest of the series. Thanks again for being with us today. This is the second running of this first part of the webinar series, so hopefully you all are new to the series and are here for the first time with us. I'm just going to go over a couple of logistics before we dive in. First, I want to draw your attention to the lower right-hand corner of the screen, where you should see a chat box. That is the best place to send us questions that you'd like us to answer at the end of the session today; we will reserve some time at the end to answer some of the questions that come in during the course of the session. So do use that chat box. You'll see a dropdown menu next to the phrase "send to"; please select "all panelists" so that our team can see your question and get ready to respond. In addition, we want to make you aware that all of the webinars will be recorded and archived on our website. So if you look at the webinar dates and you're not sure you'll be able to make all the scheduled sessions, you and your teams will definitely have an opportunity to listen to the recordings as a group. I think in some districts that will be the preferred way to participate in this series, so that everybody can convene during regular meeting times and take advantage of the sessions. In addition, all the materials you need to participate in this webinar and subsequent sessions will always be available on the registration page as well as the page you use to launch the session on the day of. So as you came in, you should have seen today's PowerPoint slides, an Assessment Literacy self-assessment and gap analysis document, and a two-pager with the topics and registration links for the remaining sessions, which we'll get into a little bit later. Those are the three pieces of supplemental materials available for today's session. They will also be posted on our website, and I'll show you where those are archived. The current session we're in now is posted there for you, but I'll navigate to the page and give you a firsthand look at what that looks like in a minute. Sam's going to take you through the outcomes for today's session now.
Samantha: Okay, we have a couple of different goals for today. First, we want to make sure that you have a good understanding of what the different pieces of this webinar series will be, what the schedule is, and who you should be bringing to the table to take advantage of the series. Second, we want to ensure that you have a clear understanding of the policy requirements and also what our expectations are regarding implementation of District-Determined Measures over the coming months. We're going to share information with you about the supports that we're providing, and lastly we're going to introduce you to the tool that Ron mentioned at the beginning, the self-assessment for assessment literacy, which is really going to help you understand how to best take advantage of this webinar series.
Ron: As I said, this is the first in an eight-part series, so what you have here is a schedule and topics for the remaining sessions. I also want to make you aware of the gray bands on the screen here for technical assistance and networking sessions. There will be more information to come about these sessions, but these are going to be opportunities for district teams to come together, share some ideas, and contribute to each other's plans for the implementation of DDMs. I want to navigate now to our web page for just a minute so you can get a sense of where you'll find information about the subsequent sessions, so just bear with me as we navigate there now. Here you see the educator evaluation home page on the Department of Elementary and Secondary Education's website. On the left-hand side is our navigation pane. You will see there is a specific button allocated to District-Determined Measures; if you click that, it's going to give you some overview information. Underneath that you will see a specific link for the assessment literacy webinar series, and this will be the one-stop shop for all things related to this series. As you can see, we've got the materials for today's session posted already, and a few days before each of the subsequent sessions we'll get the materials up there for those sessions as well. Once the sessions have been archived and the recordings are ready for public consumption, the recording links will be placed on this page too. So that's where you'll go to watch the archived recordings for any of the sessions that we've done that you weren't able to join live. I also want to draw your attention to one of the documents that we made available at the beginning as you logged in. This is essentially what the web page will look like once we have the registration links up there for your use, but you have it now, and you can actually register for all of the remaining sessions using the links under each session. There's a bit of a description there. The idea is that participants will go through the series sequentially, so each session will build on the last, and there will be specific work products that you'll be working on that will be useful as you enter the next session. For those of you who need special accommodations, there are separate web links at the bottom of the page for each of the eight sessions, so you would use those in the event that you require special accommodations to participate in the webinar series. Everybody else should use the WebEx links that you see under each of the sessions as you go through. We're going to transition back now to the PowerPoint.
This series is really targeted at district teams that will be engaged in the process of identifying and selecting District-Determined Measures. In some districts this team is going to be comprised mainly of district-level staff, while in other districts we suspect the team will involve school-level educators as well. For example, in some districts department heads might be tapped; they might be the right people to bring to the table to assist in the identification and selection of DDMs. This is not a series designed for educators who will actually be implementing the measures in their classrooms, but more for the folks who are going to be involved in the selection process. Each part is, however, going to include some key messages for district teams to take back to their educators so that everybody is on the same page in terms of expectations and timelines. District leaders are really encouraged to do some thinking about who the right people are to involve in this work before the next webinar session, which incidentally is happening next Thursday. All is not lost if you haven't organized your team by next Thursday, because as we mentioned, you'll be able to take advantage of the archived recordings. So if you're not quite that far along, you do have some time to do some thinking and get the team together in order to participate in the series in the way that it's intended. Essentially, the purpose of today's session is to build some knowledge about the evaluation framework and to support your understanding of how the series is going to roll out.
Samantha: There are two key terms that we're going to be discussing today. Most of our time is going to be spent on District-Determined Measures, which there's a lot of buzz about, but we also want to make sure that we provide clarity around the idea of the Student Impact Rating. It's critical to understand the differences between these two terms and also the intersection of the Impact Rating and District-Determined Measures. So, to ensure that we all have a shared understanding: the Student Impact Rating is the rating itself, with three levels of low, moderate, or high, whereas District-Determined Measures are the actual measures of student learning, growth, and achievement that inform the impact rating. We're going to dive into those a little bit more in the next hour. As you can see, the first thing that we're going to talk about is the actual Impact Rating, to provide some clarity around that, and then we're going to dig into District-Determined Measures. After that we'll talk more about the implementation rollout, recommended steps for districts, and information about our supports, and then introduce the tool that you can take back to your district to conduct the assessment literacy self-assessment activity. The key thing to note about the two ratings that are produced through the evaluation process is that they are two completely independent ratings. This is actually unique to Massachusetts; I think there's only one other state in the country that's doing it this way. In many other states the evaluation results in one rating, which combines performance based on observations and the use of rubrics with a student growth measure to create a single rating. In our framework there are two. The first type of rating is a Summative Performance Rating. There are four different levels: exemplary, proficient, needs improvement, and unsatisfactory. This has really been the focus of the work thus far in all our Level 4 schools and our Race to the Top districts this past year. The next piece that's coming down the road is the Student Impact Rating, resulting in a rating of high, moderate, or low. This is where the work is beginning with regard to identifying and developing District-Determined Measures, but actually having a Student Impact Rating is still a few years out for all districts across the Commonwealth. Today we're really going to focus our discussion on the steps that districts should take to position themselves to determine impact ratings for some educators by the end of 2014-15 and moving forward into the following years, and on the supports that ESE is going to provide to assist with that work. Many of you have probably seen this graph. What it is showing is the two separate ratings. On the left, in orange, you see a vertical bar with a summative rating that has each of the four levels. Those performance levels determine the type of plan that you're going to be on; for example, a proficient or exemplary performance rating results in a self-directed growth plan. On the bottom you see a blue horizontal bar; that's the rating of impact on student learning. What we want to make sure is clear to you today is that these ratings always remain independent, but they do intersect. The critical intersection here is only for educators that are rated proficient or exemplary. On this slide I want to point out that if somebody is needs improvement or unsatisfactory, regardless of their rating of impact, they're on the same educator plan. An educator that's rated as needs improvement will be on a directed growth plan no matter what. Now we're going to hone in and show you some examples of exactly how these two types of ratings intersect.
Ron: To illustrate Samantha's earlier point, suppose I'm an eighth grade math teacher and I've completed the five-step evaluation cycle that Samantha mentioned, which results in a summative performance rating. In my case, my evaluator determines that my summative performance rating is proficient for this evaluation cycle. Suppose also that my student impact rating is determined to be moderate. The result of that rating combination, as you can see by the arrows, going across from proficient and up from moderate, is a two-year self-directed growth plan. When taken together, the summative performance rating and the student impact rating really provide an educator and his/her evaluator with a level of information about the educator's performance that is seldom available under most current systems. The beauty of the way the system is designed, in having these two ratings be independent, is that it gives you two distinct but related pictures of educator practice that, when taken together, will result in a plan with a set length that is geared to help the educator develop along a trajectory that meets his/her needs. Even in the event of a discrepancy between the two ratings, there are opportunities for growth on the parts of both the educator and the evaluator. What this slide shows is one such discrepancy. For example, you have an educator here who has received a summative performance rating of exemplary, so you wouldn't expect that educator to receive a student impact rating of low; that's what we refer to as a discrepancy in the two ratings. That discrepancy is a signal to further analyze practice and student performance and to really dig deep into understanding its underlying causes. The educator in this case would be placed on a one-year self-directed growth plan and would collaborate with his/her evaluator to devise some strategies that will help move that student impact rating along that horizontal axis towards moderate or high, where you might expect an exemplary teacher's rating to live.
Samantha: One example of how this type of situation might occur comes from the special education world. I frequently hear teachers say that they teach older students and that their MCAS student growth percentiles for those students are often low. However, that teacher may truly be a proficient or exemplary teacher, and they have concerns about what that means and what the implications are for them if their students are always showing low growth. So this is really a key message to take back to educators in your district and in your school: this can be a very realistic and reasonable situation. In the event that an educator is rated as proficient or exemplary but is showing low growth, the consequence for that educator is that they're placed on a one-year plan instead of a two-year plan. It doesn't have specific consequences for their job security; what it does is focus attention on that discrepancy, and for the next year, as they think about moving forward in their evaluation cycle, it requires that the plan itself be focused on that discrepancy. Now, this can also be an indicator that perhaps the evaluator needs more support in knowing how to appropriately use evaluation tools such as rubrics in order to assign a summative performance rating to that educator. So the other side of this, separate from the implications for the educator, is that there are also implications for the evaluator. In the event of discrepancies, the evaluator's own supervisor would also note that on the evaluator's evaluation. For example, if a principal has come up with a rating of exemplary but the educator's impact rating is low, then the superintendent who supervises that principal would note that on the principal's evaluation, and that would become a topic of discussion so that it was understood whether this was a one-time incident particular to the context of the educator, or a pattern indicating that the evaluator needed more support; the principal might need a little more professional development on how to more effectively evaluate the educators for whom they have responsibility.
Ron: It's a really important point to stress how local context is going to shape the way that educators and evaluators understand the results of this matrix. We get a lot of questions about discrepancies in particular, and also in general about whether someone's impact rating can actually take them down a notch on their summative performance rating, or vice versa. The important point to stress, as you message this back to your constituents, is that these ratings always remain independent. You can dig deep and learn about the intersection of the two ratings and the underlying causes or correlations, but one rating does not impact the other.
Samantha: This is showing a different type of discrepancy. We've gone from the upper left-hand corner down to the bottom right-hand corner, and this is another one where we'd be surprised: an educator who has an unsatisfactory performance rating but a rating of high on their impact on student learning. In this instance there are not specific consequences laid out through the regulations and the framework the way there are for the other type of discrepancy on the previous slide; regardless, this educator would be on an improvement plan. But it's important to think about this other type of discrepancy. An example of this might again come from the special education world, which I know well. One instance that we've discussed in that community is: what if an educator is teaching in a classroom where all the students have Autism, and that is a population the principal doesn't have a lot of knowledge of? It may be that these students are actually showing high growth, but when the principal walked in, they didn't see the practices they necessarily expected to. This would be a great indication that that's an area for some learning for that principal and for that evaluator. So again, this is a place where the principal's supervisor would note that on their evaluation. Another thing to think about here is that there is opportunity for variation in the length of the improvement plan, and that's at the discretion of the evaluator. In this instance, if you had a teacher who was rated unsatisfactory but had a high impact, you might want the longest plan possible; maybe this would be a full school year. On the other hand, if you had a teacher who was rated unsatisfactory and had a low impact, then you might want to use that information and have a shorter improvement plan to ensure that the timelines for improvement were shortened and that there was really a high concentration of support. For example, somebody who was unsatisfactory and low might be on a ninety-day improvement plan, whereas somebody who was unsatisfactory and high might be on a plan the length of a full school year.
Ron: Let's unpack the student impact rating a little bit, since the focus of our talk today is on District-Determined Measures. The rating requires student data from at least two measures across at least two years. The two years of required data across multiple measures reflect the fact that the student impact rating must be based on trends and patterns in student learning. So let's think about what we mean by trends and patterns. Trends are results based on at least two years of data from a single measure, and patterns are defined as consistent results from multiple measures. These data can come from a variety of sources. Statewide growth measures such as the MCAS SGP must be taken into account when applicable, but the reality, as many of you are aware, is that only roughly 20% of our educators teach in grades and subjects where MCAS SGP is available. So while MCAS does give us a solid base for this work, there is room for districts to step in and identify additional measures to support the determination of impact ratings for all of their educators, and those are the measures that we call District-Determined Measures. Just one quick note on the state measures: the Massachusetts English Proficiency Assessment, which you know as MEPA, has been replaced as of the 2012-2013 school year by the ACCESS for ELLs test, and there will be more information forthcoming about how ACCESS may be incorporated into a student impact rating. What I want to draw your attention to is the table on this slide. This is just one example of the measures that could play into a fifth grade teacher's student impact rating. In year one, this educator's measures would be the MCAS SGP for grade five math and a unit assessment on multiplication and division of fractions; that unit assessment, used in all the fifth grade classes, would be a locally determined measure, a District-Determined Measure. That sets up the opportunity to look across measures, which again is what we refer to as patterns. In year two those same two measures would be implemented for this educator. That would allow you to establish what we're calling trends, which is data over time. Some districts are looking at a longer trend than two years, such as a three-year trend, which is fine under the regulations; the requirement is at least two years.
Samantha: One thing I also want to point out: one of our additional goals today is ensuring you know what other resources are available. On this slide, in the lower left-hand corner, you see a small gray box that says Part VII, rating educator impact on student learning, etcetera. This information and this graphic about trends and patterns are taken from Part VII, so first of all this is a reminder that the guidance released last summer provides a great starting point for this work in your district. Whenever you see that small gray box, it's referring you to another resource that we have posted on our website, and the hyperlink is contained in the gray box. So if you have the slides pulled up on your computer, you can navigate directly to those resources. There are a couple of different resources linked in the slides, and some references include specific page numbers, etcetera, so you know where to go for additional information.
Ron: We're now going to hone in specifically on District-Determined Measures, which again, for the majority of our educators, are going to be the measures that the district defines to contribute to the determination of the student impact rating for those educators. The definition that comes out of the regulations, which you can see here, is really focused on identifying measures of student growth that accurately reflect what teachers are teaching in a given grade or subject. The definition is necessarily flexible to allow districts to creatively approach the process of identifying appropriate measures. We're not talking about only pre- and post-tests for every grade and subject; there is flexibility to use additional types of measures, such as portfolios and capstone projects, and a lot more content as we go through the series will be dedicated to really exploring the different types of measures that lend themselves to use as a District-Determined Measure. Those are some of the questions that we hope you have and that we hope, as we go through the series, we'll be able to answer for you. Ideally, the concept of identifying District-Determined Measures is really a way to formalize or build upon things that educators already do on a daily basis, which is to figure out consistent and reliable ways to assess student learning and growth, and then to capitalize on those and scale them up across the district. If we distill that regulatory definition a little bit, we can arrive at these three key points. First, the main question that we want to be able to answer with this initiative is: how much has this student improved relative to where he/she started? Achievement alone can't account for prior learning conditions and experiences, so it's really critical that direct measures of student growth are used whenever they can be identified. The second bullet addresses the fact that consistency across schools in the district is ultimately going to permit some really useful comparisons that will help evaluators and educators understand where educators fit in the scheme of the district. The final bullet is to highlight that while we obviously want to assess learning as directly as possible, there may be a role for what we've called in our guidance indirect measures of student learning, used along with other direct measures, for some educators. Just as an example, a district may decide to use something like the graduation rate as a measure for guidance counselors; that might be a suitable indirect measure of student growth that could contribute to a guidance counselor's student impact rating. We do recommend that at least one measure used to determine an educator's impact rating be a direct measure of student learning, but we are aware of the reality that for some educators an indirect measure can be useful and appropriate.
Samantha: Pretty soon we're going to get into the implementation rollout and timetable, but I just want to pause here for a moment to underscore the importance of this work and remember why it is that we're doing it. The five priorities that we've outlined here were five of the priorities identified by the taskforce that came together to create this framework, which represented stakeholders across a wide variety of constituencies in Massachusetts. The key one, placing student learning at the center, is really integral to the idea of having assessment data for students in all grades and subjects. For a while now we've had a level of access to data and information about students through the use of the MCAS that educators in other grades and subjects simply haven't had. So it hasn't been possible to place student learning at the center of conversations in the same way for all students, and therefore for all educators. Having assessments that cover all of our grades and subjects gives us a much better chance of really exploring whether students are growing, what they're learning, and how well they're doing from one year to the next. That can then become part of the conversation about the effectiveness of educators. The second big value here is promoting growth and development. One of the key things about evaluation is that it's intended to be used for professional growth; it's intended to help make our educators better, and in order to do that strategically and effectively, having useful data is a must. It's critical that we have ongoing feedback about the performance of students in order to understand the effectiveness of educators. The third point is that the educator evaluation framework is also set up to begin to recognize excellence. That's something we've been able to approach through observations and the analysis of artifacts; [Inaudible] on information about student data also helps us to really understand who the most effective educators are in our districts. So if you picture that matrix we showed before, this would be the upper right-hand corner, where you have an educator who is rated exemplary but also has a high impact on students. Chances are that is the educator you want to become a teacher leader and a mentor, providing on-the-job professional development and coaching to other educators in your school. Lastly, I'm going to couple the last two bullets that we have here: one is thinking about the impact of the evaluation framework on achieving tenure, and the last is shortening timelines for improvement. So really the last two bullets are about new educators and struggling educators. The evaluation framework is critical for those two groups. Having better data to understand the effectiveness of those two particular groups is crucial to being able to ensure that the right supports are being provided and that we're really honing in on the areas where those educators need to be focusing.
Now we're going to move forward to talking about the implementation rollout: what it means and what it's going to look like to get this started. What we're highlighting here are the expectations for what is being reported to ESE. The first thing I want to say before getting into the details is just to recognize that there is no district and no state in the country that has figured this out yet. We don't expect you to have this figured out yet either. Nobody has assessments for every grade and every subject, let alone multiple measures for each of them. So this is how we're going to get started. This September, what we're asking of districts is to report, in three basic buckets, where they're going to be implementing, piloting, or still identifying measures. Bucket one: we're going to ask you to tell us the grades and subjects where you're going to be implementing District-Determined Measures. We do expect that all districts are going to have something there; for every school and district there are going to be measures in place for at least some of those grades and subjects. In conversations with districts, they've also begun to identify for us their early thinking on where they might be able to use or implement assessments they've already been using. For example, some districts say they're thinking about the use of the DRA for K-5 to assess reading growth. Some talk about using existing measures for math and ELA. Some talk about DIBELS for elementary reading. Many others have common assessments. So there are a lot of districts out there that are already thinking, here are some things that we've been using that we think will fall into this bucket of implementing DDMs next year. Bucket two is where you think you're going to be piloting DDMs. This might be an area where assessments exist but you haven't been using them yet, or they might have been used in one school but not district-wide. It may also be that there's something like a science fair project that's been going on in the district for a couple of years, and the new measure being implemented is something like a scoring rubric, so that science fair projects are consistently scored across all teachers or even across all schools. For those first two buckets, we'll probably be asking you to tell us what the actual measure is for those grades and subjects. Bucket three is the place where we know there are simply grades and subjects that don't necessarily have assessments yet, or where there is still more work to be done to identify what is going to be used. For that one, what we'll need is for you to tell us the grades and subjects where you're still working on identifying what's going to be used, where you're still developing something, or where more research is being done. We are going to provide a tool for you so that you can report this information out. What we want to provide is something that's going to help you organize it, that you can easily turn back to us to show us the areas you still need to focus on. So stay tuned for more information about what that reporting process and the tool will look like. That template is not available just yet, but it will be soon. In addition, as you see under the dotted line, we're also going to ask you to spend a little time thinking about what your plan is going to be to move this forward: a strategic plan over the next couple of years, so that you can ensure that your district is ready to start recording ratings for at least some of your educators by the end of the 2014-15 school year and ideally for all educators by the end of the 2015-16 school year.
Ron: The work right now is really for districts to begin to take stock of their existing portfolio of assessments and to start to make some decisions about which grades and subjects the district will be assigning to each of the three buckets, as Samantha reviewed a moment ago. Throughout this process the district teams are going to be working to identify appropriate measures, but we just want to draw your attention to the fact that the regulations do ultimately charge the superintendent with the final decision-making authority about which measures will be used as District-Determined Measures in a given district. We're now going to move on to talk a little bit about the recommended next steps for districts. The work of identifying, selecting, and implementing DDMs is really going to require significant advance planning on the part of districts. What we've provided here are just a few of the recommended next steps that can serve as a starting point for districts that are beginning to engage in this work. As Samantha mentioned earlier, in the lower left hand corner the gray boxes are links to additional resources. In this case it's the link to our quick reference guide on District-Determined Measures, which is also posted to the District-Determined Measures web page that I showed you earlier. There are far more recommended next steps included in the QRG than what we'll be able to talk about during today's session, but I just want to draw your attention to a couple of really important steps to think about. The first is that one of the most important lessons we've learned from our Race to the Top districts has been the critical value of engaging a team to drive and support the implementation process. The districts that were able to reach collective bargaining agreements earlier in the year, convene their teams, and really get people thinking and excited about the benefits of the framework have been able to implement in year one with greater fidelity than those that are still struggling now to finalize their collective bargaining agreements and get implementation off the ground. We suspect that the same will be true as you work towards identifying and getting ready to implement District-Determined Measures. So the earlier you can engage a team of diverse stakeholders in the process, the better off the district will be when it's time to engender support and roll this out beginning next school year.
Samantha: A next key step is to actually create, essentially, a mapping of where assessments are being used, either district wide or in individual schools, so you can begin to determine where you have assessments for grades and subjects. Another key part of that is thinking about the coverage of educators that those assessments give you. We've done some work at the state level to think about the numbers of teachers and educators that are directly connected to specific grades and subjects, but that's going to vary at the district level, so it could be helpful to know, if we have assessments in this area, is that going to cover 10% of our teachers or 2% of our teachers? That can help you think strategically as you're identifying where those assessments exist, and particularly when you start thinking about your gaps, it's critical to think about which of those gaps are going to be a high priority for you.
Ron: The last one here is just to stress that there's no need to reinvent the wheel or to complete this work in a vacuum. We really encourage districts to proactively reach out to potential partners and engage them in this work as well. For example, we recently had the pleasure of meeting with a group of curriculum leaders from twenty or so neighboring districts that meet on a regular basis and work on all things related to curriculum, and this particular group has dedicated their next couple of years to really thinking through the District-Determined Measures implementation process and helping each other identify measures. That type of network is going to be invaluable, especially for smaller districts that may not have the central level infrastructure to dedicate significant resources to this process, as a way to benefit from the learning and work that their colleagues are doing in other districts. So identifying potential partners or building regional networks is really something that we're interested in helping districts with, and we're open to any ideas. We've been out in the field quite a bit recently meeting with various groups, which has been really beneficial to us in getting an understanding of how districts are thinking about pulling this work off. We're now going to move on to talk a little bit about the supports that districts can depend on us for. The first couple of supports are geared toward producing, by July 2013, exemplar District-Determined Measures that districts can consider for use as early as the next school year. That process really has two major components. The first, as you see here, is the identification of anchor standards for a subset of grades and subjects. This is really going to help guide our work as we think about the fact that there are plenty of subjects in Massachusetts for which there are not grade level standards. When we're releasing exemplars and saying this is a great assessment for eighth grade math, we benefit from being able to point to grade specific standards, but we can't do that for all subjects. So by identifying these anchor standards, which will be a subset of the standards identified in the curriculum framework, especially for the subjects where there are grade spans as opposed to grade specific standards, we'll be able to really help districts understand what content these assessments are geared to measure. That's going to be the cue for you: maybe in your district what we're calling a sixth grade arts assessment makes more sense to use with your seventh grade art instructors, and there will be flexibility there, but at least you'll have a clear sense from us about the standards that are being measured by these given assessments. Coupled with that is the collection and evaluation of quality assessments from Massachusetts districts. We're in the process of bringing on some support in the form of curriculum and assessment experts who will be reaching out to all of you to collect your best examples of assessments that you use. These could be a combination of commercial assessments and assessments that your teams have developed in house, and this is where we really want to cull the finest examples of assessment practices that are in place in the Commonwealth and make them available to all districts. You can see here there's a link to a survey that we're really hoping you'll take advantage of. This survey is an opportunity for you to tell us about up to five of your highest quality assessments, and we'll use those as leads as we send our experts out to actually make the collections and vet them through a process that we'll be developing, so please take advantage of this survey. It's got two purposes: one, just as I said, to collect information about your highest quality assessments; the other is an opportunity for you to tell us the grades and subjects for which you think you'll need the most help from us in identifying exemplar DDMs or appropriate DDMs. Both of those pieces of information are really going to be crucial to us as we continue to strategize the ways we can support districts.
Samantha: I just want to underscore that, because it's really critical that we're able to be responsive to your needs. What we want to produce are exemplars that you can be piloting next year. In order to do that in a way that's going to be useful and valuable to you, we need to have this information about what your needs are. So participating in this survey is critical, and even if you don't necessarily have assessments that you feel you can share with us, please take a look at the survey and hop on anyway so you can at least provide that feedback about the areas of your greatest need. I also want to underscore and make sure one thing was really clear in what Ron said about the anchor standards, which is just a reminder that these are not new standards, they're not additional standards. It's a prioritized subset of the existing standards that basically tells us, of all the things we're asking our students to learn, what are the things we're going to be measuring.
Ron: In addition to the exemplar DDMs, we also have a couple of guidance documents that we're working on now to release to you, the first of which is called Technical Guide A, a supplement to the part seven guidance that Samantha mentioned earlier. Tech Guide A is really going to be focused on measuring growth and selecting appropriate measures, so it will be complementary to the remaining sessions of this webinar series, which we'll talk about in a moment, and will hone in on these same ideas of figuring out the best way to identify appropriate measures matched to educator roles and then take those measures, implement them, and ultimately derive student impact ratings. That segues to Tech Guide B, which is really going to be focused on determining student impact ratings of high, moderate, or low. We had a question come in a bit ago about whether we will be able to provide guidance on roster verification. It's that Technical Guide B, which is slated for publication in August, where that type of guidance will be most appropriate, as it pertains to how you then take the data from the assessments you implement and derive an impact rating; that's where the concepts of teacher attribution and roster verification really come into play. I want to segue back for a moment to the webinar web page, just so we can preview a couple of the upcoming sessions that we're really excited about.
Samantha: I also want to say thank you to everybody that's sending questions in. We are going to spend a couple of minutes answering those questions at the end, so please feel free to continue submitting questions as we're moving through this.
Ron: I just want to highlight the next two upcoming sessions. This is where the work will really transition from more of an overview, which is what we're giving you today, to more substantive work around building your knowledge of assessment and good assessment practices. Session two, which is slated for Thursday, April 4th from 4:00 to 5:30, is all about the basics of assessment. This is where you're going to really explore the key concepts of assessment and how they can be used to measure student growth. We're also going to talk there about the different types of assessment items that could be used in this process, and you'll learn some basics about key technical qualities such as alignment, reliability, and validity. We want to draw your attention to this session in particular because, as you may have noticed, most of the sessions are sixty minutes in length, but for this one you will need to budget a little additional time. We have ninety minutes slotted for the session on Thursday, April 4th, so it will run from 4:00 to 5:30; it will be a ninety minute session. Part three will transition into assessment options, and this is where you look beyond the different item types and really look at the different types of assessments as a whole that could potentially be used as District-Determined Measures. Spliced in, as we saw earlier, will be the technical assistance sessions, which again will be your opportunity for some face to face support from experts from ESE as well as some of our partners, to really help you work with your colleagues to dig into the work of these webinar series sessions, because each of them will have some discrete deliverables and work products that you'll be working on. The technical assistance sessions will be that opportunity to come together and further refine the things you've been working on. I'll transition now back to the PowerPoint. There at the bottom of the screen is the link to the DDM webinar series web page, which I highlighted earlier. That is clickable in the slides you were able to download before the session, and that is where you can go for all of your webinar series needs, from the materials to, ultimately, the archived recordings. In addition to the supports we've laid out, there are some additional ESE initiatives currently running that we wanted to draw your attention to. Some of the districts on the line today may be participating in some of these initiatives, so we want to make sure you're clear on how they might dovetail with the DDM initiative. The first of them is Edwin Teaching and Learning. As many of you probably know, we're currently piloting a new platform called Edwin, and one of the components of Edwin is this teaching and learning system. This is a new tool that can be used to develop assessments that districts could use as District-Determined Measures. As you can see in the third bullet here, the functionality of this system will include the ability to create and administer assessments using released MCAS items, some third party items, and items the districts can actually build and enter into the system themselves. So this will be a great resource for the districts involved in this pilot, and then ultimately, as the system rolls out to a broader array of districts, to more folks. That's one initiative we wanted to make sure you were aware of.
Samantha: There was a question earlier that I want to quickly address at this point, which is about ways to share District-Determined Measures across districts, and that person actually mentioned Edwin as one possibility. We're very, very interested in supporting districts in sharing District-Determined Measures both regionally and statewide. Edwin may be one platform to do that, but we're not going to rely on it exclusively, knowing that right now it's being rolled out to districts in phases. We don't want to rely exclusively on something that not everybody can participate in, but the idea of sharing DDMs and making sure that there are times to bring people together and give people access to exemplars and sample site districts is something we're committed to. There was a very related question about sharing assessments using item banks from other vendors and services, such as NWEA. That's something we're going to need to explore further, as to how we can take advantage of the work districts have done and share that out.
Ron: Another initiative we want to highlight for you is one that you may have heard of by another name. It recently had a name change: it was formerly known as educator training and data use, or ETDU, but has recently been renamed Data In Action. This is a professional development series that is in the works now and will be piloted throughout next year. One of the courses they'll be offering is a course titled Assessment Literacy. This course is really going to be geared to support district work in identifying and implementing DDMs, and it is going to be a much more intensive opportunity than this webinar series, in that the course on assessment literacy under the Data In Action program will be fifteen hours of intense work around assessment literacy. There will be a pilot for this course in the fall, and there is actually still an opportunity for districts to sign on if you are interested. You will have to check with your district to see if data use is one of the projects identified in your Race to the Top plan in order to be eligible to participate in the pilot, so do check with your district. If you are interested in learning more about the Data In Action program, feel free to send me an email and I will certainly pass it along to the Data In Action team so they can follow up with you directly. Ultimately the idea is to make this professional development series available to all districts, but again, next year will be a pilot year, available to the districts participating in Race to the Top that have allocated some of their grant to data use as a prioritized project.
Samantha: Just to reiterate, for both Edwin and the educator training and data use program, Data In Action, there are specific parameters for who can participate. For Edwin, right now that is constrained to Race to the Top districts, and with Data In Action it's Race to the Top districts, and within that, those who have signed up for that specific program, which is why we're going to continue to offer other supports and resources that can be available to all districts.
Ron: Before we transition to the question and answer portion of the session, we're going to take a little bit of time to preview the district team activity that we mentioned at the beginning of this session. This was one of the sheets available for you to download when you logged in today. I'm going to open up the document now just to orient you to it, and we do encourage you to use this document with your team as an exercise to really start to gauge where you are on the continuum of understanding and knowledge of assessment literacy. Here is the tool itself. Again, this is intended to support districts in understanding where their educators fit overall on a continuum of assessment literacy. Just to orient you to how to use it, it's broken up into four main sections: here you see general statistics/data knowledge and skills, and it goes on to general assessment structures, general assessment design and reporting, and finally linking assessment and instruction. Those are the four main areas that the tool addresses, and essentially you're going to assess yourselves or your educators along a three point continuum. The categories are the same for all four components: beginning assessment literacy, developing assessment literacy, and secure assessment literacy. So you'll read the elements underneath these three categories and begin to gauge where your educators are along this continuum. There's an opportunity for you to enter some information and brainstorm about your current district assessment practices in each given area and where your needs are, and to allow those needs to inform potential next steps, which you'd enter here. For each of the four categories the setup is the same. Really, this is to facilitate a brainstorming exercise where you can begin to hone in on where your educators need more support around assessment literacy concepts, and our hope is that by participating in the series, district teams will develop some knowledge, understanding, and tools that can help them turnkey this information back to their educators and move them along this continuum. I'm going to transition now back to the PowerPoint, and I think we're ready to take some questions. Before we do, I'm going to preview a little bit about Webinar 2. Again, this is a ninety minute session scheduled for next Thursday. You're going to learn more about the basics of assessment, including concepts such as alignment, reliability, and validity, and some basic dos and don'ts of creating assessments. The registration link is here, so after we conclude, feel free to open the slides and click this link, and you'll be taken right to the registration page where you can enter your information. Also, on the two pager that we made available when you logged in, you'll find the registration links for session two as well as all the remaining sessions, including links for those folks who may need special accommodations, so do take advantage. We had some questions come in, and we're going to try to work our way through addressing some of them now. If you have additional questions, please feel free to use that chat box; we have team members standing by to check those questions. If for any reason we are unable to get to your question today, please do follow up with me directly by email. My name and email appear on the screen here. Do follow up, and we'll do our best to get back to you right away with an answer.
Samantha: One question was about the role that District-Determined Measures play in progress toward student learning goals. It is definitely true that District-Determined Measures could play a role in student learning goals. Student learning goals, as part of the summative performance rating, are a much more informal use of student learning, growth, and achievement; that could be something like looking at student quizzes, feedback given on student work, homework assignments, as well as course assessments and potentially District-Determined Measures. A District-Determined Measure could also be used to set a student learning goal. For the most part that would be something that might hone in a little bit more on specific areas or specific skills, so the student learning goal might be to ensure that a specific skill or content area is being learned as measured by a District-Determined Measure, and this may be more applicable for some of those grades and subjects for which there really are not a lot of other great measures in place right now.
Ron: We also had a question come in about whether or not we'd be collecting information about the grades and subjects for which only one of the two measures has been identified, and the answer is absolutely yes. We do want to know where districts have identified at least one measure to begin implementing or piloting during the next school year. So the template that Samantha mentioned earlier will be flexible enough to allow respondents to enter information about one or multiple assessments.
Samantha: We had a question about the idea of a sample timeline for the ideal administration of District-Determined Measures. For example, when exactly should a measure be administered if it's going to provide a baseline, or if there's going to be a pre and post test? Would it be administered in September and again in May? I think that is something we're going to touch on a bit more, and the answer would vary by whether something is a yearlong course or a semester long course, so there's definitely going to be flexibility there and there's going to have to be judgment used at the district level. I think it's a little bit of a tricky question because some measures are going to lend themselves really well to something like a September/May pre and post, and others are not. Something might be more of an accumulation over the course of a year, such as a portfolio. So I would say we're probably not going to have a sample timeline laid out on a calendar; that's not a resource we have planned right now, but I think we are going to be addressing the question of what some of the different options are and what our guidance might be on the variety of options you have for thinking about timing. Really, though, the timeline has to be considered in the context of the type of measure.
Ron: We had two more logistics based questions come in. The first was about the timing of the webinar itself, the hour from 4:00 to 5:00. We're trying to accommodate a wide range of stakeholders and be receptive to everybody's needs, but again I want to stress that the webinars will be available as archived recordings, so if the time is not convenient for you or your district, by all means take the opportunity to view the archived recording at your convenience. We suspect that many districts are going to take that approach and convene their teams to view the webinars during already scheduled meetings. The second, more logistical question was where to go as new resources emerge. The DDM web page that I highlighted earlier, which again is on the left hand navigation from our educator evaluation home page, will be the one stop shop for all of our DDM resources. The links to everything will be there as they become available, and we'll also be blasting out information about available resources through our monthly e-newsletter and directly to our educator evaluation contacts in each district.
Samantha: One last question, because we do want to end on time. We had a question about the use of MCAS. I'm going to generalize this question a little bit and say it's a question about the use of MCAS for grades that are outside of the tested grades and subjects, and the specific example here was using the ELA MCAS in additional grades because of the literacy emphasis in the Common Core. That is definitely a strategy that districts could choose to use. For example, school wide MCAS scores are likely to be part of some administrators' impact ratings, and someone like a literacy coach or reading coach might use ELA MCAS, so that's certainly an option. I think it's something that districts would want to weigh, but there's nothing in the regulations or guidance that precludes using the MCAS for grades or goals that have a less direct connection to the MCAS tested grades and subjects.
Ron: We had a follow up question to an earlier one that was submitted about the role of District-Determined Measures as it pertains not just to the student impact rating, which is what we focused our discussion on today, but also to the summative performance rating. I just want to clarify that, as it pertains to the summative performance rating, District-Determined Measures may fall into the category of evidence called multiple measures of student learning, growth, and achievement, which is one category of evidence that an evaluator will use to derive a summative performance rating. They would be considered alongside classroom assessments and other types of formative and interim assessments that aren't specifically used as District-Determined Measures but are used by educators in furtherance of their progress toward their student learning goals. So you could find that a teacher ends up meeting their student learning goals but still receives an impact rating of low; those are not two mutually exclusive outcomes. The evaluator would have to consider all of the multiple measures, including those classroom assessments and whatever other evidence is available, in determining that goal attainment level.
Samantha: Right. It's a really interesting question. An impact rating of moderate is defined as one year of growth in one year's time. So the point being made is, should that be calibrated so that a rating saying you met your goal means the same thing as a moderate impact rating? Now, nowhere in our guidance or regulations do we define specific parameters for the different categories of goal progress, and I think a way to think about it is that an educator's goal may not directly map onto something like those students making a year's progress in a year's time, so I don't think that... There could be nuances as to what exactly the focus of that educator's goal is that wouldn't directly map onto that. I think it's a good general way of thinking about it, but there's not necessarily going to be a one to one correspondence, because of the details of what that person's goal actually is. It might not be specifically about progress on that measure in the same way the impact rating is.
Ron: At this point we are a few minutes past the hour, so we're going to conclude today's session. Again, if you have any other questions as you move forward with this process, do reach out to me; my email address is there on the slide for you. As you close out of the environment you're going to be prompted to take the feedback survey, for which there's a link here as well. So please do take a minute and let us know how we did. We really are interested in making sure that this series is as useful to you as possible, so your candid feedback is greatly appreciated. Thank you again for your time and attention today. I hope you found this beneficial and that you leave us excited to participate in the balance of the series. Thank you so much for your time.