Jeffrey Philpott - Using Common Rubrics to Assess GE Outcomes
Jeffrey Philpott: Hello everybody. So, as Rick mentioned, we've gone through a process of revising the core curriculum at the same time that we have implemented Canvas, and I will talk about how we are trying to do the assessment of our general education program as part of that.
First, very briefly, about Seattle University. Seattle University is in Seattle, of all places; it perches on a hill above downtown. It is a comprehensive private university affiliated with the Jesuit order of the Roman Catholic Church, with about 8,000 students, a bit more than 4,000 of whom are undergraduates; the rest are graduate students, obviously. Of those undergraduates, about 97% take the core curriculum. There are two small programs where students have their own version of the general requirements, and everyone else takes the common general requirements that I manage right now. So it is a very large and significant program.
We've done two things here over the last two years. One, we've implemented this brand new curriculum, and two, we have transitioned from Angel to Canvas. Neither of those was much fun. They were very challenging on both fronts, but we've also seen opportunities for some synergy, some connection between the two of them, as we worked together to try to make things more effective for us, particularly, from my end, how we can assess a complex, large general education program such as our core curriculum.
Let me step back, though, to where we are coming from, the old core curriculum, our GE program, because that's part of this. The old core curriculum was very much discipline-based. You can see by the codes up here: Philosophy, English, History, Political Science, Mathematics, etc. Those courses were very much owned by the individual departments, the individual disciplines that offered them, so it was really very difficult for us to coordinate the program, because one department wanted to do their thing and another department wanted to do their thing. We did not have good learning objectives. We did have learning objectives; they appeared magically out of thin air about a month after the accreditors came visiting. [laughter] Nobody knows who wrote them, nobody knows who vetted them, and there was no effort to go back and change the curriculum to match them, but suddenly we had learning objectives.
But for assessment purposes, as you might imagine, that was very difficult. It was also difficult because we didn't have clear guidelines for these courses. For example, our Mathematics requirement under the old core: the official description of that requirement was, and I quote, "Math 110 or above." [laughter] So what we're supposed to be teaching in that course, what the learning objectives are, we had no idea whatsoever. Assessment was really a mess. To add one more piece, many of these courses did double duty; that is, they were first and foremost courses for the major, into which other students were brought to make the courses more viable, and those students could get general education credit. So many of the courses really weren't GE courses to begin with; they were courses designed for the major that were then put into the GE program. So we went through a significant process. Anybody been through a major revision of your GE program on your campus? Hey, isn't that a blast? We spent five years doing that, and we've come up with this. This is the new core curriculum. It is smaller than the old core curriculum, and I won't go through all the details, though we've made some major changes.
First, each of these courses has a clear set of guidelines: there is at least one paragraph of description, plus a set of common pedagogical elements, plus a set of learning objectives for each and every course, all of which have been vetted by the faculty through a two-year process of developing them. Notice also that the courses all have a common code, UCOR, for university core. We're trying to take those courses out of the province of the individual departments and have a more unified and coherent curricular program across the entire spread of courses students go through. We built these around learning objectives, though I'm not going to go through all of the details of those.
We have four categories of learning objectives. There are four major learning objectives, and then we have some sub-points under those, broken down into knowledge, skills, and values in each area, giving us approximately 30 specific sub-points underneath these four learning objectives that we're trying to teach to and trying to assess.
And assessment, from my perspective, is critical to the success of this program. I believe very strongly that the primary purpose of assessment is for us to be able to improve the curriculum. We have to find the places where it doesn't work. We have to find the places where there is opportunity for improvement, so we're trying to build an assessment system that is designed first and foremost to provide feedback to the faculty and to the university on what is and is not working, what students are accomplishing and what they are not accomplishing, so we can move from there. Of course the university is also very interested in this for accreditation purposes, so we're trying to please both masters as we do this.
I put together a committee last year to look at how to do this and come back with recommendations. This was at the same time we were implementing Canvas on campus, and they came back and recommended that we use the rubric function inside Canvas to do a significant part of this assessment. So I started chugging along down that path, trying to figure out how to make that work. It's complex, as you might imagine. So how are we trying to do this? Well, first, as Rick mentioned, this is a large, complex program. We really do have more than 200 faculty members teaching in these courses. We run upwards of 600 sections a year. They come from five different colleges and schools and a whole bunch of different departments, and like many universities we are siloed. So getting the folks in engineering to collaborate with the folks in nursing to collaborate with the folks in our small experimental college, that has been a challenge, to make all of this happen and to work across those lines.
One of the things that's significant here is that if you look at some of these courses, we have courses that are taught by faculty from multiple disciplines. I will give you some examples in a moment. For example, in our humanities requirements, these are, again, courses that have common learning objectives, common pedagogy, and common descriptions, but they have radically different topics, so we have faculty teaching these courses in the Humanities from Literature, Art History, History, Rhetoric, Philosophy, Theology, etc. You can go down the list. Even in a place like Composition or Mathematics or the Quantitative Reasoning requirement, those are not simply taught by one department; they are taught by a variety of different departments.
Now, let me add one more layer that makes this all the more complex, because it makes assessment a really difficult job. Zeroing in on a particular example in one of these categories: the UCOR 1800 course, which sits in the Natural Sciences. Not only do we have different disciplines involved, we have radically different topics for each of these sections. So across all of the different sections, here are some examples of sections of the Natural Sciences course. We have a faculty member from Geology teaching a course on how the Olympic Mountains were formed. We have a faculty member from Biology teaching a course on how predators have affected evolution in the history of the planet. We have a course on the genetics of disease out of Biology. We have courses on electricity out of Electrical Engineering, sound and music out of Physics, fats in our diet out of Chemistry, etc.
So that puts us in a situation where we have common core learning objectives, or outcomes, for all of these courses, but then of course the faculty member teaching the course on the physics of music has different specific learning objectives for that course than the faculty member teaching the course on the formation of the Olympic Mountains through glaciation and volcanoes. So we have both section-specific learning objectives and the common learning objectives, and what we're trying to figure out is how to assess all of those across the entire curriculum. So how are we doing that? We started by developing rubrics,
drawing on some of the work our core assessment committee had already done developing rubrics for the old core, which was pretty hard to assess even then, but those were the foundations. We also used some of the work from AAC&U's VALUE rubric project; if you are not familiar with it, I highly recommend it, they've done a whole lot of work developing interesting rubrics. We did not adopt any of those rubrics in whole, but we took their language, shared it with faculty members, and had faculty members work with it to develop a common rubric that works for us. We're still in the process of doing that. We are on version 2 of these rubrics now. We've developed them, we've vetted them with faculty, we've gone back and revised, and we'll do at least one more round of that process as we go along.
So here's an example of that. This is the critical thinking rubric; one of our learning objectives, or a sub-point of our learning objectives, is critical thinking, so we've taken some of the language from the VALUE rubric and put in a descriptor there. We've done this for each of the approximately 30 specific learning objectives that are part of the core curriculum learning objectives. So we have a set of descriptors out there, and right now we're trying to use a common scale, a common rubric, inside Canvas, so that for each of the learning objectives the faculty members are using the same scale, as opposed to writing specific characteristics into each of the cells. Now, we're trying that; it may or may not work in the long run for us. You can see the advantages and disadvantages both ways.
In developing these descriptors we tried to do two things very carefully. One, we tried to identify what we consider minimal mastery: at what level would we be embarrassed to have a student walk across the stage? [laughter] But we wanted to set that fairly low on the scale, because we also wanted to capture what we aspire to, what we think of as aspirational achievement. So we purposely wrote these descriptors toward a very high level, recognizing that a large number of our students may not make it all the way to level 4 on the scale, but that this gives us room to continually push students to achieve, to push them toward mastery, as they move through the process. So for those approximately 30 learning objectives, we developed one of these descriptors for each.
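To make the structure concrete, here is a minimal sketch, with hypothetical labels and descriptor language rather than Seattle University's actual rubric text, of how one core criterion on the common four-point scale might be represented, with the top level written as aspirational achievement and a lower level marking minimal mastery. A course's rubric package is then just the subset of the roughly 30 criteria that course is responsible for.

```python
# Hypothetical sketch of one core assessment criterion on the common
# four-point scale; the labels and descriptor are illustrative only.
critical_thinking = {
    "learning_objective": "Critical thinking",
    "descriptor": "Evaluates evidence and reasoning to reach a well-supported conclusion.",
    "ratings": [
        {"points": 4, "label": "Aspirational achievement"},
        {"points": 3, "label": "Proficient"},
        {"points": 2, "label": "Minimal mastery"},
        {"points": 1, "label": "Emerging"},
    ],
}

# A rubric "package" for a single core course is simply the subset of the
# ~30 criteria that course is responsible for assessing (four criteria, in
# the arts course example discussed below).
course_package = [critical_thinking]  # plus the other criteria for that course
```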
Now, next. I know you can't read this, so let's just look at the x's for now. The grid has the learning objectives along the left-hand axis and the courses along the horizontal axis. Every place there is an x is a place where we have identified that this is a course in which we can assess this particular learning objective. You'll notice that some of those rows have more than one x; we're trying to assess these both at the freshman level and at the junior or senior level. This is a four-year curriculum, not simply a first- and second-year curriculum. So each course, as you go down the grid, has a certain set of learning objectives that it is responsible for assessing.
So what we did then was develop packages of rubrics for the particular learning objectives each course is responsible for. We took the individual learning objectives, put them together into sets, and populated them at the account level in Canvas, so that the faculty member has a package ready to download with all of the individual learning objectives to be assessed in an individual course. Here is an example; I know it's hard to read and you can't see the scale. This is for our freedom of expression course, our arts course if you will, and there are four learning objectives that we're trying to assess in these courses, so all of those rubrics are already in the system, ready to go, so that the faculty member can download them and use them for assessment.
How is that supposed to happen? Here's the cycle we are working on, and we are still in the pilot phase of this piece; we're just trying to make it work. First, we're asking faculty members to identify an assignment they are already doing in the course that would demonstrate these learning objectives. We're not asking them to develop new assignments, and we're not asking them to use assignments just for assessment purposes; again, there are advantages and disadvantages both ways, but we're asking them to use an assignment that is already an integral part of the course. So: identify an assignment, then import from the core's account the rubric package appropriate for that individual course, then go in and add their section-specific grading criteria onto that same rubric. So we have one rubric that carries both the core assessment criteria and a set of criteria for grading in that particular course. Then they grade using SpeedGrader in Canvas, or however they grade, and we are then able to collect that information; I'll talk about that in a minute.
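For the collection step, here is a rough sketch, not Seattle University's actual pipeline, of how completed rubric assessments could be pulled out of Canvas through its REST Submissions API. The host, token, and course and assignment IDs are placeholders, and the include[]=rubric_assessment parameter and response fields should be verified against the Canvas API documentation for your own instance.

```python
import requests

BASE = "https://example.instructure.com/api/v1"      # placeholder Canvas host
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}  # placeholder API token

def rubric_assessments(course_id, assignment_id):
    """Yield the rubric assessment attached to each submission of an assignment.

    Assumes the Canvas Submissions API returns a rubric_assessment object when
    called with include[]=rubric_assessment; verify field names against your
    instance's documentation before relying on this.
    """
    url = f"{BASE}/courses/{course_id}/assignments/{assignment_id}/submissions"
    params = {"include[]": "rubric_assessment", "per_page": 100}
    while url:
        resp = requests.get(url, headers=HEADERS, params=params)
        resp.raise_for_status()
        for submission in resp.json():
            if submission.get("rubric_assessment"):
                yield submission["rubric_assessment"]
        url = resp.links.get("next", {}).get("url")  # follow Link-header pagination
        params = None  # the "next" URL already carries the query string
```

Each assessment comes back keyed by rubric criterion, so ratings on the core criteria can be tallied separately from the section-specific grading criteria.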
So here's an example, and I know you can't read it all, but I've folded two pieces of it together. The top part is the assessment piece, and if you can see that there are no numbers in the far right-hand column, that's because we have purposely set this up so that faculty members do not use these criteria for grading. We learned early on that either everybody has to use them for grading in the same way or nobody uses them for grading. So we have defaulted to nobody using them for grading; they don't count toward the grade in the course at all for the faculty.
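As a point of reference, the behavior described here, ratings recorded but not counted toward the grade, is controlled by how the rubric is associated with the assignment. The sketch below shows the kind of parameters involved when that is done through the Canvas Rubrics API rather than the checkbox in the UI; the parameter names reflect my reading of the public API documentation, and the IDs are hypothetical, so verify them before use.

```python
# Hedged sketch: attaching a rubric to an assignment so that faculty can fill
# it in (in SpeedGrader) without the rubric total feeding the assignment grade.
# Parameter names follow the public Canvas Rubrics API documentation as I
# understand it; the IDs are hypothetical.
rubric_association_params = {
    "rubric_association[rubric_id]": 1234,            # hypothetical core rubric id
    "rubric_association[association_type]": "Assignment",
    "rubric_association[association_id]": 5678,       # hypothetical assignment id
    "rubric_association[use_for_grading]": False,     # ratings recorded, grade untouched
    "rubric_association[hide_score_total]": True,     # keep the rubric total off the grade
}
```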
And then the second piece is just the lower half. This is for an assignment in one of my own courses. You'll see a set of criteria designed for grading an individual assignment; they are very specific to what that course is about and what that assignment is about. So we are asking faculty members to grade using both sets of criteria at the same time. So far faculty members are finding this very quick; it is not taking them much time at all, and we're able to pull the information out from that.
So how do we do that? We have all of these individual sections out there doing assessment in what we're calling the pilot phase of this process. They're doing this assessment work, they're grading, they're filling out the rubric for the assessment as well as the grading rubric at the same time, and then we collect those and sort them out by learning objective, so that we are compiling information from a variety of different courses around the learning objectives we are using for assessment purposes. Then we take those data, across a wide variety of sections and a wide variety of students, and have teams of faculty members look at them and say, "What does this tell us about how our students are doing on this particular learning objective?" We are being very careful not, at this stage, to tag or track the data by section or by the faculty member involved. We have to do this in a way that faculty members help us find the warts. As soon as there is any hint to a faculty member that this is part of their evaluation, that how well their students are doing is part of their success at the university, their rank and tenure, their renewal, whatever it happens to be, we will have the Lake Wobegon effect and every class will be producing wonderful students on these rubrics, so we've got to find a way to separate that out.
Now, the data are going to go into the data warehouse in Institutional Research, and at some point we may start to look, not at faculty, we're not going to do that, but at individual students, to try to get a sense of how our transfer students do differently than our native freshmen, how our students in this program compare to students in that program, etc. So we are able to aggregate the data in various ways based on individual students, but we are really trying not to tag the data at all to the individual section or the individual faculty member as we do this.
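Downstream, the aggregation described here might look something like the following sketch, assuming the tier-1 assessments have been exported to a flat table with hypothetical column names. The key move is dropping section and instructor identifiers before anything is summarized, so results can only be read by learning objective or by student characteristics.

```python
import pandas as pd

# Hypothetical export of tier-1 rubric assessments; column names are illustrative.
assessments = pd.DataFrame({
    "learning_objective": ["Critical thinking", "Critical thinking", "Quantitative reasoning"],
    "rating": [3, 4, 2],                       # 1-4 on the common scale
    "section_id": [101, 102, 101],             # never reported
    "instructor_id": [7, 8, 7],                # never reported
    "student_group": ["transfer", "native freshman", "transfer"],
})

# Strip anything that could tie a result back to a section or a faculty member.
deidentified = assessments.drop(columns=["section_id", "instructor_id"])

# Program-wide counts: how many students landed at each level of each objective.
by_objective = (
    deidentified.groupby(["learning_objective", "rating"]).size().unstack(fill_value=0)
)

# The same data can still be cut by student characteristics (e.g. transfer
# students versus native freshmen) without referencing sections or faculty.
by_group = deidentified.groupby(["student_group", "learning_objective"])["rating"].mean()

print(by_objective)
print(by_group)
```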
There are some challenges. I think this works; we've collected good data so far, and we're learning something from those data. It's still a little cumbersome, and faculty are struggling with it in some ways. One of the ways they're struggling is that it feels backwards in terms of process: they are starting with the rubrics for assessment, which aren't the rubrics being used for grading. So it's the assessment tail wagging the pedagogical dog, in the sense that we're asking them to lead off with the assessment rubrics instead of leading off with how they would normally grade the assignment. Related to that, if they are going to use the core rubrics, those are set by default to not count toward the grade; if they want to use those core rubrics as part of grading the assignment, they have to duplicate them manually. They can't simply reset the number of points the rubric is associated with, because you can't edit them once they're used at the account level, is my understanding. Third, it's a little clunky right now to cut and paste rubric rows when you bring the whole set of rubrics in. So those are some of the places where we're getting resistance.
Rick and I and some other folks have had some really good conversations about that. Instructure is very interested in this and in figuring out some ways to make it work, so I am very hopeful that as we move forward we're going to have some options. As I have shared with Rick, my dream here is that if we can somehow attach multiple rubrics to a single assignment, I will come bow down in front of the panda on stage. If we can make that happen, it is going to make a whole lot of our faculty members happy. Now, that may not be the right solution; I'm certainly not the person who'd be able to build it, but what we have here is, I think, a technically very functional solution. In terms of faculty resistance, you know, our faculty are sometimes very resistant to assessment in general, resistant to anything that is change-oriented; we discovered that in the last few years. Finding ways to make this as easy, smooth, and transparent for the faculty as possible is what's going to be the key to long-term success for this kind of program.
So we've got this curriculum under way, and so far it has been a great success. We have migrated hundreds of courses from across the entire university from Angel over to Canvas, and by and large the faculty members are very happy with Canvas. Certainly from my perspective, in my administrative role dealing with the assessment end of it, I am very, very happy with Canvas and its tools and processes for doing this kind of assessment. We're so far ahead of where we were even a year and a half ago in terms of assessment plans, and Canvas is one of the things that's really making that possible for us as we go forward. So I think I've got a few minutes left, so let's stop here and see if there are questions. I may not be the right person to answer all the technical questions because I'm not the technical guy behind this, so I may call on a couple of colleagues here in the room if that's the kind of question we get.
Audience: How did you get it to have zero points? We've been playing around with the rubric trying to do exactly what you do, and we can't get it to not have points. We haven't been able to do what you just showed us. Could you tell us how to do that?
Jeffrey Philpott: I'd have to look, any idea? It was easy, I remember that. What's really easy is that there's some place you just check so it doesn't count for grades.
Audience: I think one of the nuances you may be experiencing is that we used a similar architecture, and I think the issue, if I can pass this on, is that when you set an outcome to zero points you still have to assign points to a mastery level. In other words, am I explaining this right? Someone jump in. In other words, if I want "meets expectations" to be the mastery level at three points or something like that, and I set those to zero points, it will change the rating scale to be in alphabetical order, and therefore it's counterintuitive; I've had faculty members looking for "exceeds expectations" and not finding it where they expect, right? So you can't do that; you still have to assign points to the performance level in each of those criteria.
Jeffrey Philpott: And we want the points at each performance level, because those are the data we're collecting. I mean, we want to know how many students are at a one, how many at a two, how many at a three, and how many at a four. We just don't want to force the faculty member to use those points as part of calculating the grade, so we do have points attached to those levels, but they are not used to calculate the grade on the assignment.
Audience: I think our problem is that students get confused; they wonder, "I got three points, so how come it wasn't added into my grade?" It's something our faculty tell us they have to keep explaining.
Jeffrey Philpott: We've gotten that same pushback; the comment was that students were confused by that. We've had exactly the same issue: because these criteria are on the rubric that the students see, and they are the first things students see on the rubric, it's hard to tell the students, well, don't pay any attention to those, that's for assessment purposes. So we are giving the faculty who are part of the pilot project some talking points, to say these are part of the overall goals of the curriculum and they are not grading your individual assignment, but it's still confusing students.
Audience: I have a question about the assignment, the rubric package. I can imagine situations where there is not one single assignment that would be able to evaluate all of those goals, or rather where they would be evaluated over multiple assignments. Is it an all-or-nothing thing, or if I download this rubric packet do I then have to fill in something even though it's not applicable?
Jeffrey Philpott: No, it is possible to leave one or more of the rows blank, or just fill in a zero and not count that. What we've done is build the packages around the assignments that are supposed to be in each of those sections, so even though you have a section in Geology and a section in Biology, they are all supposed to have a similar kind of assignment from which, in theory at least, we should be able to assess all of the different learning objectives in that packet. Now, that means we've got to keep the faculty members following the guidelines and doing those kinds of assignments, and we're still not very far down that path.
Audience: What’s your plan for sharing the data across the different units?
Jeffrey Philpott: We will produce a report annually for all of the faculty in the core, so we'll share it with the university, with all the colleges teaching the core, with all of the departments teaching the core, and with all of the faculty who are teaching the core. That's something we haven't done well in the past. It will be an annual report, and the first one will be this coming fall, sharing with all faculty all the data that we've collected over the year, as well as some preliminary conclusions put together by a committee of faculty members who work with me. We'll then follow that up with an open session for all faculty members to come discuss the meaning of those results and look at implications, to try to figure out what next steps we need.
Audience: I was just wondering if you could summarize what you like and don't like so far about the reports, if you've gotten any yet, the reports you get at the admin level after doing this exercise.
Jeffrey Philpott: The question is what we like about the reports. What I am not using at my level is the standard Canvas reports; the data are being dumped out by a couple of our folks, who summarize them in a spreadsheet, and I work with that in Excel, so I can't really answer the question. We're crunching numbers in a spreadsheet. It's a big program but it's a small office, so [laughter] I'm it.
Audience: This is more about your reorganization of general education. One of the things we're struggling with, and we're trying to do almost exactly the same thing you did with Canvas, etc., is how do you build a curriculum map for general education when there are so many different tracks? Or do you even try, because you've got so many different directions students are going?
Jeffrey Philpott: First, we have eliminated the old tracks within the gen ed program; we used to have several tracks within the old core curriculum. I am a firm believer that the gen ed should be the gen ed should be the gen ed, even though there are some different courses. This document here is based on our curriculum map. We had two committees, the design committee and the implementation committee. The design committee left us with a set of learning objectives and a set of courses that didn't go quite as far as I wanted them to go, so what I did with the implementation committee in our first half was integrate those into our curriculum map. We spent about two months putting together a fairly detailed curriculum map showing, for each course, what learning objectives were supposed to be taught in that course, across the whole curriculum. Then we vetted that with the faculty in a set of workshops. We haven't gone back and used that document very much, but for me it was critical that we have the curriculum map before we wrote the guidelines for each course, so we knew where to assign learning objectives.
It’s not a very sexy, glamorous,
interesting thing for all the faculty but really it’s the backbone and the structure
of the curriculum.
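As a rough illustration of what such a curriculum map amounts to as data (the course codes and objective names below are hypothetical), each core course is mapped to the objectives it owns, and the map can be inverted to recover the grid of x's shown earlier.

```python
# Hypothetical, simplified curriculum map: each core course is assigned the
# learning objectives it is responsible for teaching and assessing.
curriculum_map = {
    "UCOR 1800": ["Scientific reasoning", "Quantitative reasoning"],
    "UCOR 1600": ["Critical thinking", "Aesthetic engagement"],
}

# Invert the map to list, for each objective, every course where it can be
# assessed (the rows of x's in the grid shown earlier).
courses_by_objective = {}
for course, objectives in curriculum_map.items():
    for objective in objectives:
        courses_by_objective.setdefault(objective, []).append(course)
```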
Audience: Two questions. A quick one: I saw at the very beginning that you have four modules. Do your students choose two from these, or do they just have at it?
Jeffrey Philpott: They do the whole thing. They do all four modules.
Audience: All four modules? All of the courses in them?
Jeffrey Philpott: Well, actually, that's not quite true. In this module here, this box here, students do two of these three, depending on what their major is; they do the two that are from outside their major area. So if someone is an English literature major, they will do the social sciences and the natural sciences; someone who is a Chemistry major does the humanities and the social sciences, etc. Otherwise they do everything.
Audience: Okay, so 14 courses, and they do them all?
Jeffrey Philpott: Yep.
Audience: My real question is how has this information gotten back to the faculty, as you've done these assessments, for the improvement of learning?
Jeffrey Philpott: It has not yet.
Audience: You’re not there yet?
Jeffrey Philpott: So that's the report we'll submit to faculty in the fall on what we have been doing this year. We've just been starting this assessment this year, because last year was the first year of the new curriculum. We've done assessment at a pilot level in several quarters this year, using a couple of different methods; that will get summarized in the report back to faculty and will then be the foundation of the faculty workshop in the fall.
Audience: I'll be interested next year.
Jeffrey Philpott: Me too.
Audience: You said that right now faculty are assessing their own students on these outcomes. Do you have any plans to have those assessments done by outside faculty?
Jeffrey Philpott: We have a two-tier assessment process, so what I'm describing here is what we are calling tier 1. Tier 2 is that we collect samples of student work and then have interdisciplinary teams of faculty read that work and score it against the same rubrics. We do that over the course of the summer, and we expect faculty to take part on those teams. We do norming sessions for those faculty. Faculty are not looking at student work from their own courses or disciplines; they are looking at student work from other disciplines as well, because here we are trying to evaluate just the core learning objectives, not the specific knowledge issues.
Audience: Are you doing the assessment of those samples through Canvas as well?
Jeffrey Philpott: Not yet [indiscernible] [0:27:42] so far.
Audience: Just going back to that question about norming, you said that was going to be something in tier 2. In tier 1, your faculty are collecting assessments and measuring their students' performance against these core objectives. What kind of support are those faculty members given, whether by you or within their departments, to norm so that students in History 101 in this section and that section are being assessed similarly?
Jeffrey Philpott: So far our focus is on training them in the process. The people who are helping implement this in Canvas and I team up to do a training session for them, and we're also going to have to do some more in-depth work here as well, which we've done at our old assessment conference.
Thank you very much everybody.
[Applause]