Ron: So again thank you all for joining us for part four of our assessment literacy District-Determined
Measures webinar series. Today we're going to be spending some time talking about ways
districts can start to determine the best approach to identifying District-Determined
Measures for the various grades, subjects, and courses for which that work is underway
now. Just a few logistics, I know many of you are repeat listeners to our series, so
this is old hat for you, but for those of you who may be joining us for the first time,
you will see a chat box in the lower right hand corner of your screen that you can use
to send us questions. What we'll do is as we go through the materials we'll try to answer
questions as they come in and those that we don't get to we will reserve time at the end
of the presentation to take additional questions at that point. These webinars will be recorded.
Many of you have provided feedback to us that it's been helpful to have them uploaded
on our website and available on our YouTube channel. The link there on the screen is where
you can access all of the previous sessions as well as this recording which will take
about a week to get archived and posted on the site. So you should look for that late
next week to download and share with your team. All the materials that you need for
today were available on your way in, including the slide deck that we're walking through
as well as an Excel tool that we'll dive into at the end of the session today. Those were
the two materials available and they'll also be posted on the website, so you'll be able
to access from there as well. Just to orient everybody where we are in the series, this
is part four as I mentioned earlier. Part five will be happening on August 15th and
that's where we're going to dive into some considerations for measuring student growth.
You're going to notice that there's been a minor change in the schedule. We are actually
going to push part six into October and that will mean that we're adding a January date
in order to accommodate part eight, just to give some more spacing in between the sessions.
Part six had previously been slated for late August, which I know is quite a busy time
for folks in schools and districts, so we're just spacing out the series a bit more. When
it was initially designed it was under the previous implementation timeline when districts
were actually going to begin implementing DDMs in a subset of grades and subjects beginning
in September, so we wanted to try to pack in as many sessions as possible before that
time. Since we have this next year, the pilot year as you all know, there's a bit more flexibility
to stretch it out. So as always this series is designed for district teams that are actually
engaged in the work of identifying and selecting District-Determined Measures. Today's session
we're going to talk about the different approaches a district might take to filling gaps and
we will weigh some of the comparative benefits and challenges that present with respect to
each of those approaches. Before we dive into the content today, I just want to spend a
minute talking about the technical assistance and networking sessions that we held on July
11th. We held three sessions: in the East in Norwell, in the central part of the
state in Fitchburg, and in the West in Holyoke, all on the same day, July
11th from 9:00 to 12:00. These were really meetings designed to provide opportunities
for districts to work with one another and also get a sense of how different districts
are thinking about tackling this work. As an introductory exercise, as participants walked
into the room, we asked them to use [Inaudible]. For those of you who were in attendance, these
charts on the screen here will look familiar. You were asked to use sticky dots to
sort of track where you are in both your stakeholder engagement which you see here and then your
implementation progress which I'll show you on the next slide. These results weren't terribly
surprising. You can see there's a whole range with respect to how much districts have engaged
stakeholders at this point, ranging from a single person representing the whole team
all the way to districts that have engaged representatives at virtually every level of stakeholder
and begun to network with colleagues from other districts. Here, as I mentioned,
is a sense of implementation progress. We were really excited by the results of these sessions.
We've had some follow-up conversations with superintendents and other district leaders
who were in attendance, and they said they really did leave with a better sense of how
their colleagues are thinking about this project, as well as some leads for who they can turn
to for some additional support. Based on that we will be running two more sets of these
sessions, you saw that on the organization slide at the beginning, the next set being
in September, so that will be another time to come together. What will be so timely at
that point is really discussing and diving into district pilot plans, so the plans for the
13-14 school year. So as we mentioned, today's focus is really on the approaches to filling
gaps. By now, if you've been tracking along with the webinar series, you
know that as of this point the main task for districts is really to begin to inventory
assessments that are being used in the district or that the district has interest in using,
really to get a sense of what's already in place or what might be scalable to a district
wide measure. Of course in doing that exercise there are going to be gaps that arise, so
grades and subjects or courses for which a District-Determined Measure hasn't yet been
identified. So we're going to talk today through some of the different approaches you can take.
We're using these labels, build, borrow or buy. I just want to make sure that everybody
has a sense of what we mean by these concepts because we're going to be referring to them
throughout the session today. When we talk about building an assessment, we're really
talking about starting from scratch and it's developing each of the individual components
of the assessment and those of you who have had a chance to begin to digest technical
guide A, which we released in the spring, have a sense that this is the content from
section four of that guide where we talk about all the different components of an assessment,
so that's really build, build means the district is taking the leadership role and actually
developing those pieces. Borrow by contrast is really selecting an assessment as is from
somewhere else, which could be another district, it could be another state resource, but borrow
is really locating an assessment that is more or less intact and using it for your district
purposes. Very closely related to the concept of borrow is obviously buy and that's where
you're actually purchasing a commercial product which again would essentially provide all
the components of an assessment that you'd expect to see and you'd be more or less using
it as is. Borrow and buy are really the notions that are teased out in section
three of technical guide A where we talk about how to actually appraise the quality of an
assessment. So we're talking about these three terms, but I just want to make sure that it's
really clear that you're not going to be asked ever to report which of these three options
you're taking, these are really just to provide some context and provide some structure for
today's webinar. So it's not important that you categorize each of your decisions in terms
of whether it was a built, bought, or borrowed District-Determined Measure. These are really
just to help facilitate the conversation today. You'll understand as we go through that obviously
each of these approaches has some significant benefits and significant challenges. Likewise
the district is probably not going to take a single approach for all of the District-Determined
Measures that you are going to be identifying; in fact what we expect to happen and what
we think is quite likely to happen is that districts are going to be using a hybrid approach.
What that could look like is essentially if you think about this continuum of build all
the way to the left and borrow or buy all the way to the right, the hybrid approach
is a combination of developing pieces and also taking pieces from existing assessments
and using them as is. As we think through some of the elements of these approaches,
and as a way to combat some of the challenges that we raise today, it's going to seem
logical, and this is why we expect districts will be doing this, that some districts
will choose to take pieces of an assessment and modify them or make adjustments or
improvements in order to make them appropriate for their local context.
One example of sort of a hybrid approach: I know many districts are looking forward
to when Edwin Teaching and Learning is available and being able to use some of the items that
are included in the item banks that will be loaded into ETL and actually create some assessments
through that system. So that would be an example of this hybrid where you're starting from
some existing items but you're cobbling them together in a way that makes sense for your
district, so you can ensure things like content alignment as well as some other pieces of
the assessments to really make it work for your local context. For today's purposes what
we're going to do is we're going to walk through the approaches, this idea of buy, build, borrow
or some hybrid thereof and we're going to use these six discussion points to frame the
conversation. So we'll be walking through each of these and really homing in on how
they look given the approach that you might have taken. In order to support me in this
work I'm going to introduce you to Melissa Mowry who's joining us. Melissa is a senior
researcher from American Institutes for Research. We are really thrilled to have her with us
today. Melissa brings forty-one years of experience in the education sector, including a lot of
time spent as both a teacher and an administrator. So we're thrilled to have Melissa with us
and as I'll be deferring to her incredible expertise throughout the next bit of slides,
Melissa thanks for being here with us.
Melissa: Thank you.
Ron: So we're going to start by engaging educators. Of course this is absolutely crucial as we've
been talking about since the very first webinar, the importance of identifying a team at the
district level so this work is not done in isolation, but actually there's educator buy
in throughout the process of identifying and selecting measures. How educators are engaged
will be different depending on the approach that you're taking. For example, if you're
building an assessment, that engagement will very much involve bringing educators into the
fold so that they're contributing to that development process. By contrast, if you borrow
or buy an assessment, that engagement is going to be more about evaluating these existing
components of assessment to make some determinations, bringing educators into that process to help
make determinations about whether or not those pieces are suitable for use in your district.
Again that hybrid approach which you'll see permeates throughout these slides reflects
this notion that there will be instances where you may be building pieces of an assessment
and borrowing or buying others.
Melissa: Maybe if I could speak about the build benefits for engaging educators. Maybe
one of the most positive benefits arising from building a new assessment is the learning
that takes place for the educators involved in the process. During the engagement process
educators learn how to network and they form relationships, add to their bank of resources
and they learn to work as a team. One of the challenges, or a few of the challenges, that
may occur when engaging educators is that the build process has the longest timeline of
any of the methods anywhere on that continuum we just looked at. In addition, educators
must be identified and available, and their participation must be supported. All of that
really takes a great deal of time. The process of learning to write good assessments,
items, and rubrics, as well as determining scoring methods or actually scoring, takes time.
This learning is very, very valuable, but it is time consuming and labor intensive, and time
and labor are two commodities that are relatively scarce. There are benefits to the buy and borrow approach
and we're talking about that one extreme on the continuum of buying or borrowing as is.
So educators still engage with other educators, but in varying degrees depending on the methods
and needs and the assessments. Educators still learn about the networking and forming relationships,
but again not to the same degree. Another benefit is that it is likely to require less
labor and engagement than building an assessment from scratch, because the amount of time
required is probably far less. A hybrid approach, as Ron said,
is something that probably most of you are going to be involved with,
because depending on where you are on the continuum you actually are able to use kind
of the best of both worlds if you will. Two examples that might help to demonstrate the
differences that occur depending on where your district team is positioned on the continuum:
first, suppose a district team purchases an off the shelf assessment as is, but accommodations
need to be written and included in the administration manual. Well, most likely you already have
accommodations that you like, that you've used, and that might work, and it really is almost a
task of cut and paste. In another example, a district team borrows existing items
from another district or state database and needs to create new items and scoring rubrics
to cover several benchmarks or content standards. This process will require engaging staff;
training or reminding; writing and reviewing for content alignment and difficulty;
and revising and reviewing for bias and fairness. So as you can see there's one extreme
which is where you are much more closely related to the build from scratch side of the continuum
all the way across to barely having to do anything to an assessment that's already been
borrowed or purchased.
Ron: Let's turn our attention for a minute to identifying the content for an assessment.
This in some ways is the absolutely most crucial part of the District-Determined Measure identification
and selection process. We can't stress enough how crucial it is that your District-Determined
Measures are really well aligned to your local curriculum because that's going to really
impact how usable and important the data resulting from these assessments will seem to your educators,
which of course has implications for how they use the data to inform their instructional
practices. This again is something that looks quite different depending on whether you build
or borrow or buy. When you're building an assessment you're actually starting from the
initial point of looking into your curriculum frameworks, your local curriculum maps, and
actually building test specifications, so looking at identifying the specific content
that you believe should be represented in the assessment. So the build piece of this
really starts at the initial decision making point. To contrast that, when you borrow or
buy something, those decisions have been made by the test developers and the test developer
has done that work already and so the task for the district team is to then take that
work that's been done by the test developer and evaluate it to determine whether there's
alignment between the test specifications and your local curriculum.
Melissa: And because you're deciding what you need and how it has to be aligned, this
approach provides an assessment that becomes almost the best fit. You have the most comprehensive
coverage and maximum alignment to curriculum. Of course the challenge is that it requires
the most amount of labor because you must identify available and appropriate staff to
create your test specifications and blueprints, and it does require expertise. Benefits of
borrowing and buying are that the existing assessment has already been defined and the
content of that instrument exists, it's a matter of reviewing the content and maybe
conducting a crosswalk of these curriculum frameworks to determine appropriate coverage
of the content standards. A challenge: most likely you have the least content alignment
unless you can find a really, really good match. The hybrid approach, I'm thinking,
offers really the most flexibility, and here are a couple more examples that might help
you determine where you fit on the continuum, or where your district team is. In one example,
a district team has found an indirect-measures assessment used for guidance counselors in
another district; however, they'd like to add to the instrument so that it is more closely
aligned to what the guidance counselors' roles and responsibilities are in their own
district. Or a district team purchases a commercial assessment that requires additional questions
to provide greater scope on a particular content standard. So again, all of that really will
differ in varying degrees, the benefits and the challenges, depending on where you are
on the continuum.
Ron: Melissa, I think that second example you provided is a really important one to
focus on for a second. You talked a little bit about the fact that a district may find
a commercial assessment that is almost a perfect match, but there are still some gaps in the
content areas covered by the assessment. I think that's probably why we're supposing
in many instances districts will recognize the need to sort of fall somewhere towards
the middle of that continuum and actually start to tailor some pieces, particularly
if they're borrowing assessments from some of their colleagues in other districts that
may not be using the exact same curriculum.
Melissa: They may even be able to borrow a couple different assessments that can be pieced
together, but there still also may be a need to do a bit of writing or building from scratch.
Ron: Let's move on now to defining the actual assessment. As you recall when we talk about
assessment, you remember from technical guide A, we're talking about all the components
that actually come together to create the assessment, so that's not only just the instrument
itself, the data collection tool that the students will be responding or reacting to,
but it's also the administration protocol, the instructions for the assessment, as well
as the scoring methodology which is likely to include some scoring tools like answer
keys, rubrics, scoring papers, so that's really the components that need to be defined. When
you're building an assessment obviously you don't have that starting point, so you're
building all of these components from scratch. When you borrow or buy it's very likely that
most of these components already exist, so again the task for the district team is not
to develop, but to evaluate and make sure these pieces are going to work in a local
context. So this goes beyond just the items on the actual instrument; it extends also
to whether the instructions are practical for your district and whether they address all the different
pieces that your district needs a set of test instructions to address. Maybe it's not
specific enough around student accommodations or something of that nature, so there will
be this evaluation component that needs to happen, even if you are taking something that
is off the shelf. We're really trying to encourage districts to be discerning consumers when
they buy or borrow an assessment, to really make sure that they are scrutinizing the components
and making sure that they're going to work in their context. The nice thing is that districts
now have this next school year, and through the pilot process to really dig into some
of that and make sure that they're really being thorough in their decision making processes.
Melissa: And if you think back on this continuum, as you start to listen to the benefits for
each and every one of these discussion points, you start to see that on the build
side, the left side of that continuum, you're actually talking about the biggest benefit,
which would be having the greatest ability to tailor the assessment to your district's
needs. If you imagine the right side of that continuum, where you borrow or buy, you
have a lesser amount, or the least amount, of ability to tailor exactly the way your district
would like to see it, but it would be less time, less resources, less cost and you might
be able to find something that very, very closely matches what you're looking for. So
really the benefits of building, as we said before and depending on where you are on
that continuum, are that you can in fact more precisely get the kind of assessment that
you feel most comfortable with, because of its alignment with your needs, such as a scoring
method that is congruent with your district capacity, or an administrative protocol fitting
your district technology capability. For instance, do you have enough computers to go around?
So you have a little more flexibility. The greatest learning and skill development for
educators in all phases is really being able to build from scratch and involving and engaging
your educators from the beginning. A few challenges with this method of building from scratch
is that writing good items requires technical expertise, building scoring rubrics takes
time, and of course you need quite a few staff members for the various roles conducted throughout
the assessment cycle. Also, this approach requires the most hours, the most support,
and most likely the highest cost. Defining the assessment, again, on the borrow/buy approach,
which is over on the right hand side, it's obviously easier and takes less time than
creating all the components. That being said, the number of components of the assessment
you are able to find will determine the type of match to your district's needs.
Challenges associated with borrowing and buying are that obviously you have less flexibility
since you're using existing components which may or may not be an exact match for you.
The hybrid approach might be the easiest and most flexible for defining an assessment because
you are able to kind of pick and choose what works for you. Remember the closer you are
to build the greater the draw on your resources and we're talking about all of your resources.
The closer you are to borrow or buy, the less flexibility you have for tailoring the assessment,
and that's the commonality that kind of crosses all of these points of discussion.
Ron: I had the pleasure very recently to sit in with the Lower Pioneer Valley Educational
Collaborative, which, working in conjunction with West Springfield and several other districts
out West, convened a working group of educators from those districts to work on more
or less the hybrid approach that you just described, thinking through what a writing-to-text
assessment could look like for the tenth grade. So the approach that they've
chosen to take is they had a starting point, so they found a scoring rubric that had elements
they appreciated and felt were thorough and liked, and they spent a portion of their work
session actually looking through the rubric and making modifications to really tailor
it to the set of standards that they had identified as focus areas for them. It was a terrific exercise
for me to witness and for them to participate in: actually taking something that exists,
so they had a starting point, something to react to, and actually spending some time
reaching consensus on ways to improve it. Now what they've done essentially is they
started down the path of developing a scoring rubric that will ultimately be used by six
or seven districts who are then going to work on identifying prompts that are aligned to
their individual local curriculum, but they tackled a major piece, or began to tackle
it in one meeting, because they had that starting point, they had that piece to react to. Let's
turn our attention now to talking about the process of evaluating both the utility and
the quality of an assessment. Again, this is going to vary depending on the approach
taken. When we talk about building, [Inaudible] establishing quality and utility over time. We received
a bunch of questions, in particular about whether some of the material shared in
technical guide A really means that the build option
is more or less off the table because districts won't be able to produce evidence of quality
right away. So let me take this moment to just clarify that that is not the case at
all. In fact we've seen early work from districts that are taking the build route that has just
been really astounding and impressive and there's full recognition, both at the district
level and within the ESC, that for districts that are choosing to go that route, it's going
to take time to collect evidence of quality and that's okay, but that's what it would
look like at the build level where you're really building that system over time. When
it comes to borrow or buy, much of the information that you might want to know about in terms
of the utility and the quality will have been documented in the technical manuals made available
by the test publishers. Very recently we had an experience where we worked with a teacher
from Lowell who's working on one of the kindergarten assessments, Teaching Strategies GOLD, and
through just an hour long meeting we were able to work with her and process the technical
manual and actually complete an example of the assessment quality checklist and tracking
tool that we'll be highlighting at the end of the session and you may have seen as appendix
A of technical guide A, and that was a great experience because she was able to really
process this information that was publicly available on the test developer's website,
dig into it a little bit, and really get a sense of whether this assessment met her quality
expectations.
Melissa: Great, I'm delighted to hear it. As I go on, there are some benefits here too. With
building, obviously, again on that continuum, you have the greatest flexibility to select the
data to collect and the methods for evaluating utility and technical quality. One of the
benefits of borrowing and buying, even though you still have to evaluate the information,
is the difference between evaluating and creating. The challenge is that the data may
have been gathered from samples that do not match your district population, and that's what
you have to live with if you're going to buy or borrow as is, as opposed to the hybrid.
But it does sound like those that are doing this hybrid method are doing a wonderful,
wonderful job, and it gives them greater flexibility in determining what data they
want to collect, what methods they want to use for the evaluation, and so on.
Ron: The next discussion point we're going to talk about briefly is this notion of communicating
results. Of course for any of the District-Determined Measures that the district is considering,
there will be an opportunity for results to be communicated, first and foremost to the
students who have completed them. That's a great source of feedback for students, obviously,
in that they can see the scored product of their assessments. Results will also be
communicated to the individual educators: is the data that these assessments are producing
useful, is it actionable, does it help them improve their practice? So when it comes to the build model, the district
team really has to determine the different types of reports that are appropriate and
that the district wants to see put in place. Much of that work again will have been done
for you if you choose the borrow or buy approach; specifically with buy, many of the
commercial products that districts have told us they're thinking about using come
with very detailed reporting schemas that the district receives as part of the overall
package.
Melissa: One of the biggest benefits I believe from building from scratch is that the district
team is able to determine the whole who, what, when, where, why and how of the reporting,
anything from the format, color, the language that you're going to use or even multiple
languages, who your audiences are going to be, this results in the greatest flexibility
to create reports that are contextually relevant and appropriate for your district, and they
result obviously in more learning and skill development for the educators. As for challenges,
this is really labor intensive. It's probably the most costly, and you do need technical
expertise, for instance programmers if you're going to customize your reports,
or graphic artists to help with layout and design and printing and that kind of thing.
Benefits of borrowing and buying, it's just as Ron mentioned, there's quite a bit of reporting
that very often is already attached to those assessments and then of course the hybrid
is being able to combine maybe the best of both worlds where you can even add additional
reporting if in fact that's what your district team feels that you need.
Ron: There is such potential here for promoting growth and development within the educator
workforce when we talk about communicating results. Those of you on the webinar who participated
in last week's technical assistance session in Norwell, you heard from Jamie Lavillawa
(SP?), the executive director of instruction there, about how he's been thinking through the DDM
project with his educators and in fact Norwell has a year of piloting under their belt for
a wide array of educators and he's spent considerable time and investment on this notion of communicating
results and in fact has used this as an opportunity to provide wonderful professional development
for his educators. It's really gotten them thinking in new ways about the assessments
that they deliver to their students, everything from looking at vertical scaling in new ways
to really having them fully realize on their own how the different scoring mechanisms they
were using were contributing to [Inaudible] effects, and just really wonderful outcomes
that have arisen from conversations around how we communicate results and how those results
should impact the eventual improvements that you're making to your assessment tools. We're
going to transition now to talking just briefly about monitoring DDMs over time. Because
this is a project that's obviously in the development stage right now, there is certainly
going to be a significant degree of monitoring that will need to take place, specifically
during the next school year as districts go through the pilot year, but then also on an
ongoing basis, there will always be a need to maintain and potentially revise and improve
the various assessments that you're using. For assessments that the district is building
or has built, this will include gathering data to assess quality from the administrations
that have happened in the district. For commercial assessments or assessments that you're taking
from other sources, obviously monitoring is still required and still important, but you'll
have some starting point.
Melissa: The greatest flexibility in collecting and evaluating data appropriate to your assessment
comes from being able to build that in and determine, along with everyone on your district team,
what it is that you want, when you want to collect it and how often, and what
protocol will be used to supplement, support, revise, and continue to build your assessment
if in fact that's what you do. The challenge: this is really probably the most labor intensive,
again very similar to the other challenges that you see when you build from scratch,
because no data existed to use as a baseline. With borrowing and buying, sometimes you have very,
very good monitoring and maintenance guidelines and protocols and sometimes less so, but when
you buy them you have something to work from and that's what leads you into that hybrid
approach where you really have the flexibility to change depending on what you need to change
in order to make that assessment whether you bought it or borrowed it work for your district.
Ron: We did receive a question from one of our listeners about whether we could make
any recommendations of other states that have developed some exemplary District-Determined
Measures-esque assessments. We hope to be that state quite frankly. We are as I think
you all know, in the process now of beginning to collect assessments that are in use or
being developed by our districts with our partnership with WestEd, who will be helping
us evaluate those assessments and ultimately make them available as a suite of exemplar
DDMs. So that work is very much underway and we expect to be able to deliver that posted
list of exemplars to our districts in early September and we hope that will give districts
a really strong sense of options that exist for them as well as some models for where
maybe our exemplars don't quite hit the mark in terms of alignment with your local curriculum, but they give you a sense of different directions you could take to develop a myriad of different types of assessments, because that's a critical point too. I don't think
any district wants to see a litany of bubble tests being rolled out to all students in
all grades and subjects, so our exemplars will reflect sort of a balanced approach of
traditional and nontraditional assessments including portfolios and performance assessments
and the like, so that work is coming. Right now as far as what's on my radar, I do know
and can share that Colorado has done tremendous work in this area and has a database posted
on their Department of Education website that is basically a database of available assessments
that can be filtered by specific course or grade and subject, so that would be a place
to check out as well. We're now going to move into talking briefly about the assessment
quality checklist and tracking tool as a vehicle for cataloging your potential DDMs. No matter
what approach you take, whether you're building something from scratch and you're very much
in the infancy or whether you have already identified some commercial assessment that
you want to use, or whether you're leveraging your colleagues in other districts to help
fill gaps in areas where they have significant expertise, no matter what approach you're
taking, it's really important that you document everything that you've got going on right
now so that there's a record of not only the assessments that you're considering, but also the relative pros and cons of those assessments. We're very much hoping that the assessment
quality checklist and tracking tool will be that tool that you can use and will be helpful
to you in documenting your work to date. So this is a variation of the tool that we introduced
in an earlier webinar; it's been refined based on your feedback and was later released
as an appendix to technical guide A, but it occurred to us that we hadn't yet featured
it on a webinar and we wanted to make sure that folks were using the current version.
So here you have the link directly to the tool and a quick snapshot. I'm going to now
share my desktop and I'll navigate to the tool just so I can orient everybody to how
it works. Here we are in the tool itself, the initial page here is just a cover page.
There are instructions on the second tab that you can take a look at. For today's purposes
what I'd like to do is spend a little bit of time in the checklist itself. The key thing
that I want to stress here is that regardless of the approach that you're taking, whether
you're building, borrowing or buying, it is quite possible that you will not be able to
at this time complete this checklist for every assessment, and that's okay. You should start by filling in the fields that you know already, and then know that for some of the fields, especially where you're building an assessment, it's going to take time to actually establish some of the evidence needed to complete the checklist, and that's okay. I also want to point out that
there are weights that have been assigned to the various categories. For example, if
we quickly just use, say, MCAS as an example: let's say fourth grade ELA, and we have MCAS, source is ESE; for the type of response you have a dropdown menu, call it an on-demand assessment; for item type, call it selected response. The first category of work that you need to complete after that initial intro material is whether or not it's aligned to the curriculum; as we said, that's critically important. So you'll see dropdown menus where you can select the extent to which it's aligned. Now you'll get a score down here. The score represents the weight that ESE supposes districts would place on these features, but these are in no way, shape, or form requirements. These are just guidelines. If the district wants to tinker with the weights and change them, that is entirely the district's prerogative. We just
felt like it would be important to give some guidance in terms of weights that we would
suppose the districts would place on these items. As you scroll down you have additional
fields relating to assessment quality. Again, you'll see right up front these notions of
both utility and feasibility. Utility meaning: is this assessment going to yield information that my teachers can use and that's going to help them improve instruction? Feasibility meaning: does this assessment actually fit within our school program needs, our costs, our technology needs, et cetera? So again, those are simple dropdown menus that you can
populate and get a score. Now, all of these cells in column B that have the red arrows
in the top right corner will provide additional information if you hover over them, so you
can get some context for what's being asked, and all of this is tied back to part two of
the webinar series or technical guide A. You'll see references to one or both in each of these
red-arrowed cells. Once you've entered all the information, let's say you can't get past this orange section: you might know that it has a table of test specifications, maybe it's something you're building and it's in process; let's say the administration protocol is moderately developed, but the tool itself, the actual test, is really well developed; it's got a scoring method; and your technical documentation is in development or even missing, something you're still working on. The key feature of the checklist is this button at the bottom. Once you click this button, it's going to automatically shift all of your entries from the checklist into the tracker, and the tracker is going to show you everything you just entered for that particular assessment in the next available row. So the district
team can continue to use the checklist for each of the different assessments you're considering
and again it's okay at this time if you have some blanks and when you open the tracker
you'll see them all cataloged there, which will then allow you to sort by grade, by subject,
however you want to sort it, so that you can start to get a real picture for the potential
DDMs that you have available to you. Conversely, it will also help you highlight any
gaps that you might have, which will allow you to direct some resources toward identifying
potential DDMs in those areas. So we very much want your feedback on this tool and we
hope you'll use it and find it helpful. If you have any questions at all about it, do
feel free to email me directly. We're going to transition now back to the slides. We're
going to close with just a few notes on the pilot itself, so as you all know districts
will be engaging in some degree of piloting of District-Determined Measures during the
next school year, the 2013-14 school year, so it's really important to have some
focus as you go into each of your pilots or each of the different DDMs that you'll be
piloting. There are all sorts of terrific benefits that can come out of a pilot
test and again, this will somewhat be framed by the decisions you've made whether you're
[Inaudible] an assessment from scratch, borrowing or buying one or you're taking that hybrid
approach. For sure one thing that will happen as a result of your pilot is that you'll have
a chance to test the administration protocol and that will help you flag any issues that
may arise. For example, you may find that your protocol whether it was developed in
house or whether you are co-opting it from an existing assessment doesn't provide enough
information about accommodations for students with disabilities. That will be something
that through the pilot process you could flag and rectify prior to full scale implementation.
Another potential benefit is that it allows you to test out the scoring protocol, the scoring process. Especially if you're choosing assessments that might involve a scoring rubric of sorts, that will allow you to really tackle nuances within that rubric and get a sense of calibration: do multiple raters score the same product in the same ways, and can you improve the language of the rubric to try to improve that calibration? So a lot of kinks can be worked out with respect to the scoring process. One district that
I've been working with is thinking about employing some double-blind scoring processes and potentially piloting some of that on a small scale during the next
school year, so that will be the type of scoring process that you could try out next year in
a no stakes environment. You can also test the reporting methods. Again, there will be
data created as a result of administering these assessments, and you'll have to make some decisions, again regardless of the approach taken, to figure out what the end-user reports should look like for both students and teachers, of course, and to make sure first and foremost that the reports are providing useful information. So something you
might consider as part of your pilot is meeting with your educators who are using the assessments
regularly after administration to see how they're able to use the reports to improve
instruction, see whether that's possible and if not what recommendations they would have
to improve the reports for the next go around. Finally, you can actually test some of the
items on the instrument and you can start to get a sense of whether the items are performing
the way you had intended them to perform and whether or not modifications might be necessary.
So, the scope of your pilot, the purpose of your pilot may vary depending on where you
are. If you're piloting for example a commercial assessment the district has been using for
a decade and you're piloting it for the new purpose of using it as a District-Determined Measure, you're coming at the pilot with that particular lens, and you need to really
start to think about as a district team, what do I want to get out of this pilot, what information
do I wish I knew about this assessment that I don't currently know? Conversely, if you're building something from scratch that's very much in its infancy and you haven't tested
out any of the components of the assessment, the pilot is a huge opportunity to really
begin to capture some information that's going to help tell you if you're heading in the
right direction as you continue to refine that work. So it's really important that you
have in mind a very clear goal at the district level for each of the pilots that you'll be
running during the next school year. I'm happy to take some questions now. We have about
ten minutes left to go before we hit our end time of 5:00. So if any of our listeners have
questions, please do feel free to type them into the chat box and I'm happy to address
them. While we wait for questions to come in, just as a reminder we will be proceeding
with part five of the webinar series which is considerations for measuring student growth.
Part five is going to be very closely aligned with technical guide B which will be coming
out around the same time. Tech guide B is going to focus on some basic steps districts
can take to begin to make determinations about student growth and it's going to be very deeply
rooted in examples from actual Massachusetts districts that have started to engage in some
of this work already. That webinar will happen on August 15th and will run from 4:00 to 5:00
p.m. The registration link is here, it's also on our DDM webinar page as well as the registration
information for all of the subsequent sessions. Just a reminder again: note that webinar six, in order to give districts more time between sessions, has been pushed to the slot that had formerly been reserved for webinar seven, and the whole series has been shifted
in that way. If you have any questions at all about the District-Determined Measures
project I do want you to email me directly. My email address is here. As you exit the
webinar this afternoon you will have an opportunity to complete a really brief feedback survey.
We hope you take that time and let us know how we're doing with the series, what you'd
like to see, ways we can improve. We do take that feedback to heart. I have really enjoyed
engaging with people around puzzling through how they're planning for the pilot next year,
so do feel free to contact me with any thoughts and ideas. We've now found a whole host of District-Determined Measure ambassadors across the state, and that's very much come from people who have just reached out and emailed us to tell us what they're doing. I also want to make you aware of a couple of opportunities: on our website now there is information about
our work with WestEd that will result in the identification of example DDMs and what you'll
find there right now is information about core course objectives that panels of Massachusetts
educators have been identifying for grades and subjects for which our curriculum frameworks
don't provide grade-specific standards. The core course objectives are going to be WestEd's cues for the exemplar assessments that they then pull for us, and there is an opportunity
for folks to provide some public comment on those core course objectives. So please check
that out. Thank you all for your time and attention today. We hope you found this information
useful and we hope you return to your districts energized to continue with this important
work. Thanks very much for your time.