>> Good afternoon and welcome to the B. Thomas Golisano
College of Computing and Information Sciences.
My name is Andrew Sears, I'm the Dean of the College
and this is the Dean's Lecture Series.
The series started 10 years ago and it was designed
to bring people from industry and academia to campus
to share their thoughts and insights with our students,
our faculty and members of the community and today is the last
of our lectures this year.
The 47th overall over the series' 10-year span.
At this time I'd just like to check, do we have anybody
in the room that needs an interpreter?
In that case our interpreter can take a break.
>> Okay, great.
Well I'd like to thank our interpreter Jennifer Guess
for her services today.
And please join me in thanking her for her services.
>> Applause.
>> Okay, so we got that set up.
Okay. So it's my pleasure
to introduce our speaker for the day.
Mary Czerwinski.
I was just sitting there thinking of when we first met
and I think it might actually be in the category
of I don't really want to figure out how many years ago it was.
But it's been a while but we don't count the years.
But Mary is an ACM distinguished scientist
and her research spans a variety
of areas including emotion tracking,
information worker task management and a number
of other topics over the years.
And her PhD is in cognitive psychology from the University
of Indiana -- Indiana University Bloomington.
I'll get it right there.
But today -- now I'll read from the script
so I will get it right.
She'll be talking about emotion tracking
for health and awareness.
And we'll be discussing a novel system that she
and her team developed at Microsoft.
So without further delay turn it over to Mary
and please give her a nice warm RIT welcome.
>> Applause.
>> [Inaudible]
>> Let's see if I can get this on.
Hi. It's a pleasure being here.
Thank you for that warm welcome.
We met in the early 1990's
and Andrew was a graduate student doing his dissertation
with Ben Shneiderman, so yeah I remember that well.
It's great to be here, I got to see snow,
I felt like doing angels.
I was so excited, I hadn't seen snow all year
so thank you for that as well.
Can you turn on the screen please?
Or do I have to do it again?
Okay all right.
Great. So obviously this is what I'm going
to be talking about today.
I have a whole host of colleagues,
my team is very multidisciplinary.
We have everyone from designers to interns
to machine learning specialists
to hardware engineers working on these projects.
So in all cases they did the lion's share
of the work, not me.
What I'd like to do today is I'll set forward the agenda,
I'll talk to you about why we're motivated
to look at this problem area.
We do try to pick problems that are human motivated,
we don't just do technology for technology's sake.
So that's something that sets our group apart in some ways.
I'll tell you about the first system we built
which was AffectAura and that was presented at CHI 2012
and then I'll move on to some work that's actually all still
in progress for the most part.
And these are really just beginning forays into this field
of emotion tracking for health and well-being.
I think we have a lot to learn as a community,
I'm super psyched to hear that you've got folks here
that are interested in this topic as well.
So hopefully we can all learn from each other
as we make mistakes, but also have some successes.
So why are we interested in this space?
Well, the market for wearables is just exploding.
How many of you wear a Fitbit?
Okay, one.
Oh, that's most unusual.
I've spoken to three.
Okay. Well Fitbit is a little tracker, I've got one on today,
that counts your steps and your calories.
It looks at your sleep quality and stuff like that.
The BodyMedia,
in the lower right hand corner for you guys,
was actually the first.
But since these two devices have come
out there have been a myriad of them on the scene.
And most of them are now bands,
even the Fitbit now has a band version.
Pretty wearable, pretty comfortable, not too geeky.
Some of them are kind of geeky.
And typically people pretty much hide them.
So this Shine one is the one I like the best.
You can make it look like a button or a necklace.
So they all pretty much do the same thing.
They're all looking at -- they have accelerometers
and they're looking at the distance you travel
and then calculating how many calories you burn.
And people are very, very enthusiastic about these things.
There's a whole community around them.
Apparently Fitbit is one of the hottest selling items on Amazon.
And it's got a social network associated with it
so people are starting to get
into this quantified self movement.
And we're going to take that to another level.
So basically we believe that true health has
to include emotional health.
So, you know, the things I've shown on the last slide,
those are all about health and fitness.
But we don't think you can truly be fit unless you're also
emotionally fit.
And understanding your emotional habits is actually key
to living a healthier lifestyle meaning you reduce your stress
levels, your obesity levels which are directly correlated
with stress levels, and so on.
So beyond fitness we believe there is emotional fitness.
And I actually thought when we started this a couple
of years ago that there wouldn't be a lot of product transfer.
I thought that was pretty risky because it's not tied
to Microsoft's goals in any way.
Well, I come to find out that product teams are extremely
interested in this topic.
And if you think about it today, isn't it kind of silly
that your computing devices don't know anything
about your mood or when you get frustrated
or when you need to be cheered up.
Things like that.
So now that we can do it there's a ton
of product interest in this.
And for those of you doing game design,
just think if you could see your users' faces while they were
playing the games or hear their voices and kind
of gauge their excitement levels.
It would even help you iterate your designs, right?
So we decided to first focus on stress and anxiety
because I think everyone can realize
that this is a pretty big problem for our society.
So we figured we'd tackle the biggest one first.
You know, 51 percent of obese people eat too much
because of stress.
I actually have our own data that kind of supports that.
We didn't get that high number but we got a pretty high number.
You get a three times increase in hypertension
and a 2.2 times increase
in cardiovascular mortality in high-stress jobs.
That's pretty bad.
I told people earlier, I always make the joke, I read so much
about stress now it totally stresses me out.
And it's true.
I can feel now, because I'm so aware of it I can feel my stress
and it scares me because it's so bad for you.
And here's the thing, you know,
how many of you think you know how
to handle your stress level well?
Some people do.
I mean you can see that number though most people don't feel
like they do.
You know, and I used to have a slide
that showed people drinking and people doing yoga
and people working out.
I mean they have their ways --
people have their ways of handling stress
but not all of it's healthy.
And we have some literature that shows this.
There's a company called Stay Well Corporation in the UK
and they did a really big study, looking at 46,000 employees.
Companies like Microsoft.
And controlling for all kinds of things
like smoking, alcohol use, etc., they found
that stress was actually the most costly health risk.
You know, even beyond alcoholism,
high blood pressure and stuff like that.
And the thing is, when your brain perceives stress,
you release chemicals through the body.
It's called the fight-or-flight response
and basically it's a flush of adrenaline and cortisol.
And what this does,
it immediately increases your cholesterol levels
and there does seem to be,
although it's not super well understood, a connection
between cholesterol and cancer and heart disease.
Because it's kind of like an inflammatory effect
on your body.
And so the other thing that's really bad
about chronic stress is that it actually gets in the way
of your making healthy decisions about your life.
So how often is it when you're stressed during a rush you just
grab for junk food to eat because you're hungry?
And it's because you're actually thinking
out of the dinosaur part of your brain, not the, you know,
the frontal cortex like you should for good decision making.
And actually now studies have shown that when you meditate,
do yoga, practice mindfulness, those kinds of practices,
you actually can move out of that part of your brain
into the smarter areas for making good decisions.
So that's a really interesting thing.
So our hope was that we could somehow use technology
to intervene when we detected these kinds
of stress situations, especially in chronic stress situations
and maybe intervene to help the user get out of that mode,
out of fight or flight and into something more mindful.
And hopefully something healthy.
So I just said that.
So what this brings us to is the field of affective computing.
How many people know what affective computing is?
Okay. A few of you.
Um, so Roz Picard started this field in the 1990s.
She first published a book called "Affective Computing"
and that coined the term.
And basically it's a bunch of computer scientists
and psychologists and others who like to look at things
like facial muscles, the prosody of your speech,
perhaps what you're typing or saying, and many,
many people are looking at Twitter now
for instance to detect moods.
And then they use machine learning
to do classification of these signals.
These various physiological signals.
And we've gotten really good at it.
Like the machine learning parts that I'll talk about today,
those aren't even interesting or patentable.
They're just off the shelf these days.
In fact there are whole libraries
of them free for use out there.
What's innovative about what we're trying to do is actually
on the intervention side.
That's where the human computer interaction part comes in.
So all the stuff I'll talk about today
from the affective computing side is pretty de rigueur.
So how do we get ground truth?
Well, in this affective computing field there's a model,
the circumplex model, which is very well appreciated
and well used.
I don't actually think, and I could give a whole talk
about why I don't think this is the best model.
And we have some evidence that does support it to some degree
but shows there is another dimension in there.
And we don't really know what it is.
But basically this model is, you have your valence on the X axis
and it goes from negative to positive
and you have your arousal levels on the Y axis
and it goes from low to high.
And basically what you ask users to do is put themselves
in that two by two.
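To make that concrete, here is a minimal sketch of how a self-report on this two by two could be turned into a coarse quadrant label. The normalized axis ranges and the label names are illustrative assumptions, not the exact coding scheme used in these studies.

```python
# Minimal sketch: map a circumplex self-report to a quadrant label.
# Assumes valence and arousal are normalized to [-1, 1]; the label
# names are illustrative, not the scheme used in the studies.

def circumplex_quadrant(valence: float, arousal: float) -> str:
    """Return a coarse mood label for a point on the circumplex."""
    if arousal >= 0:
        return "excited/happy" if valence >= 0 else "stressed/angry"
    return "calm/serene" if valence >= 0 else "depressed/bored"

print(circumplex_quadrant(-0.6, 0.8))  # upper left -> "stressed/angry"
print(circumplex_quadrant(0.7, -0.5))  # lower right -> "calm/serene"
```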
How many people think you could put yourself,
where your mood is right now in this two by two model?
Knowing what I just said,
negative to positive, low to high arousal.
You're all in the upper right hand right?
Because you're listening to my talk.
But I think what I was trying to point out,
if you just immediately try
to feel how you're feeling right now it's not easy right?
We're not used to reflecting on our emotional state.
And I've got some funny anecdotes about that actually,
when I start talking about the studies we did.
But this is what our ground truth is.
We ask users to tell us where they are
in this two by two model.
So the first system we built,
which I was talking about earlier,
was presented at CHI 2012 and I don't have it running
on my laptop so I'm going to show you a video.
And I think you'll probably understand how it works.
But our goal here was to build a system
that actually helped you remember how you felt day
by day in the past.
So it's not a real-time system, it's a reflective tool.
And we built this system because we did a survey
at Microsoft asking people
if they could remember how they felt 24 hours ago,
if they could remember how they felt 48 hours ago.
If they could remember how they felt a week ago, a month ago.
And what we saw was there was a rapid decline.
People felt like they could remember how they felt yesterday
but not the day before yesterday.
So super rapid decline.
And they could remember pretty well what they were doing
but not how they felt.
So then we also asked on the survey
if people wanted technology
to help them remember how they felt.
And they definitely said they did.
And then we got lots
of interesting write-in comments at the end.
This was out of about 300 people at Microsoft.
There were a lot of people on medication
and they may have self-selected to take the survey,
in fact, we don't know.
But they said, you know, they take medication
and then they can't tell their doctor
if it actually is working or not.
Is the dosage right.
So we're pretty motivated to try to do something to help them.
So let me just show you what this system is like.
>> AffectAura, a visualization of a user's emotional state
over time for reflective purposes.
How long do you think you remember your mood over time?
We have found people's moods are quite volatile
and can change rapidly during a course of the day.
And the user's tend to forget these mood swings
pretty rapidly.
We developed AffectAura to visually assist users
in keeping track of their emotions over time.
We combine multiple streams of sensor data in order
to capture users' context and model their emotional state.
Let's explain.
AffectAura lays the day out on a timeline, including where the
user is, which we obtain through GPS.
Locations are shown as icons on the timeline.
The balloon shapes' colors indicate hourly mood patterns,
positive in pink tones and negative in blue tones.
The size of the balloons indicates the user's
activity level.
We show the kinds of documents
and websites users are visiting at that time.
Smooth balloon shapes suggest
that the user is less engaged in the activity.
While bursty shapes indicate high levels
of engagement with the activity at hand.
The user can flip through dates by using the navigation controls
to either side of the timeline.
Let's take a look at Joe's AffectAura.
On Monday Joe starts the day off on a low or negative tone.
But then as he becomes more active he becomes more engaged
in the work he's doing and his mood picks up.
By the time he reaches the office, in fact,
he's in a better mood.
Let's compare that to Tuesday.
Tuesday starts out extremely busy
and he seems happy as he works.
However, after lunch it appears
as though Joe is working very hard but his mood has soured.
AffectAura allows Joe to take a look at what happened
to trigger this mood swing.
Looking over his day we can see that about 1:00 PM it appears
as though Joe has received a nasty email from a colleague.
This must have triggered a negative mood swing that seems
to have impacted the rest of his day.
AffectAura can be used to go back in time and reflect
on what patterns of behavior lead to happy or sad outcomes.
A user study showed that users found AffectAura to be useful
for reflecting on,
and remembering mood swings over time.
And what triggered them.
For future directions we intend to go mobile
and wearable in real-time.
We'd like to use fewer sensor streams, but we think we need to
add heart rate. Application areas include detecting emotional
states like happiness, stress, anxiety, frustration and
depression.
And we'd like to leverage fabric and social media
as outputs for these mood detections.
I think we can all agree
that emotions provide the spice of life.
But how well do you think you remember your emotional mood
swings over time?
>> Okay, so AffectAura was literally the first prosthetic,
automatic prosthetic for mood.
It had never been done before.
And I don't know if you saw, but we used a Kinect
for posture, lean-in and lean-back.
We used GPS, we used everything you did on your computer.
We used just a regular webcam for facial changes.
We used speech prosody detection so you can tell
from the prosody of one's speech whether
or not they're stressed out.
I think that was -- oh, and we had an Affectiva Q Sensor
on them which was collecting their EDA,
their electrodermal activity.
Basically how sweaty they were,
which can give you signs of arousal.
So when you put all those things together
and we think this is the reason AffectAura was so accurate,
which it was about 70 percent accurate.
You actually bump up the chances
of accurately classifying someone's mood.
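Since the classifiers themselves are off the shelf, the fusion step can be sketched in a few lines. The following is a hypothetical illustration using scikit-learn, with placeholder feature names standing in for the sensor streams just mentioned and random data in place of real recordings; it is not the AffectAura pipeline itself.

```python
# Sketch of multimodal mood classification with an off-the-shelf
# classifier (scikit-learn). Feature columns mirror the streams
# described in the talk (posture, facial action, prosody, EDA);
# the data here is a random placeholder, not the AffectAura dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# One row per time window: [lean_in, smile_intensity, pitch_variance, eda]
X = rng.random((200, 4))
# Ground-truth quadrant labels from the user's circumplex self-reports.
y = rng.integers(0, 4, size=200)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```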
So [inaudible] the guy that was with me was
from Roz Picard's lab,
and he said this was the best he's ever seen in terms
of affect recognition or classification.
And users really like it.
We did track them for a week,
we didn't show them their AffectAuras,
we just collected surveys in the beginning
and the end of the day.
They were wearing all the sensors.
And obviously the Kinect wasn't --
the Kinect information was missing when they went away
from the desktop because, you know, we didn't have the webcam
and the Kinect cameras.
So we were missing data from some parts of the day.
But overall we were able
to pretty accurately classify their moods.
As I was telling people last night at dinner,
for really big emotional events,
it's called the Von Restorff Effect,
users have excellent memory for things like that.
What this is really good for was
for those little micro currents throughout your day.
You know, users would look at it, we showed it to them
on Friday night when they were done with the week,
and they would say, oh, I always seem depressed
when I go to that meeting.
Or that person always really bums me out.
You know, and they would say, you know,
these aren't behavioral decisions I make,
to go to this meeting or to be with this person
and I don't have to do that, you know.
So it was the little things in life that make them happy
or made them sad, that they really appreciated being
reminded of.
And I do think it's cool to be reminded
of the happy things in your life.
If you're kind of a depressive person
that could actually help you.
Maybe classify your own mental state a little bit better.
People and email and calendar events were by far
and away the best cues for recall.
We would show them what their mood was but they had to go back
and figure out why, right?
And those three cues were clearly the best.
The logging, the file logging details actually really didn't
help very many people at all.
So we could probably get rid of that one.
And also, people recognized
that AffectAura was really helpful right now
but it would be even way more helpful later, you know,
three months from now or something like that.
I do want to say that there is some danger here.
If you haven't already figured it out,
that we would insert a false memory.
So if I show you that you were happy at 3:00 o'clock
but that didn't actually happen, the machine classified it
incorrectly, that might be disturbing.
That disturbs me that we could do that.
So I think you would want to make a system
like this extremely interactive with the user
so the user could correct it.
And maybe it could get better and better as it went.
That would be the idea anyway.
Any questions about AffectAura?
Yes?
>> I wonder about why people would want
to confirm what they were feeling historically?
>> Well, it was actually motivated by a woman in my group
who came home from work one week, on a Friday night,
said to her husband, oh, it's been such a horrible week.
And he looked at her and said, why?
He goes, you got the best paper award
at CHI, you got a promotion.
And she went, oh, yeah.
You know, people forget and they might be misclassifying their
own states.
Depressive people actually have a depressive lens on the world
and it might be real useful to remind them
that things aren't so bad.
The other motivation we had behind this,
other than helping people with their medications and stuff
like that, was actually to index our lives emotionally.
So if I am in a bad mood, I say to the system,
show me everything that makes me happy.
And the system can proffer up a bunch of wonderful events
that I can look at and it might put me in a good mood.
It's just another way of indexing information
and it might allow us to do very contextually relevant things.
So that's kind of the motivation.
Yes?
>>[Inaudible audience question]
>> Yes, so let me keep talking.
But yes, it does require a lot of computing but --
so you have to be smart about where you do that computing.
Some of it can be done on a SmartPhone,
some of it can be done on the Cloud.
All right?
Okay, let's move on.
So that was AffectAura.
We were super motivated by that.
Last summer I had four excellent interns come in
and we wanted to go mobile and wearable.
And we wanted to do this in real time.
And this woman, Erin Carol was very motivated
because she called herself an emotional eater,
she was very motivated to work
on this emotional eating problem.
Now this was an extremely difficult problem
because not only does she have to come up with a system
that could automatically detect moments when you might reach
for that donut, but she also has to learn about emotions
that are associated with emotional eating
and then pick a good intervention.
And the science
around interventions actually is very nascent.
So there was a lot of work to be done there too.
So obviously she wasn't able to do all that in three months.
But I'll tell you what she did get done,
and this work is still ongoing.
So first she did a Mechanical Turk survey of
about 300 people, or maybe it was 600 people,
I should have written that down.
And 36 percent of them said they are emotional eaters.
So you saw a number earlier,
51 percent of obese people eat for stress reasons;
we got an emotional eating response of about 36 percent.
And 82 percent of those people said they would love technology
to help them.
And this is true of every emotional eater I meet.
Actually I haven't ever heard anyone say they wouldn't want
technology that could help them.
So we've run a lot of participants
in these studies now and that seems to be pretty universal.
So her approach was, she was going
to investigate eating behaviors in order to understand
when you need to intervene.
And then she was going to look at intervention types
which that part really didn't get done.
And then she was going to develop the technology
for implicit emotion detection.
So you could intervene at the right times.
Pretty ambitious.
So what we developed was the SmartPhone application called
Emo Tree.
I like that name, and Emo Tree has leaves on the tree for days
of the week that you've been running the software.
Every day you get a new leaf.
The greener the leaf is the healthier you ate.
The bigger the leaf is the more you ate.
And these three little birds are supposed
to be three people from your social network who you've asked,
and they've agreed to opt in, to help intervene
if you want a social intervention.
And we kind of give you your overall mood for today;
here you are, you're happy.
So for this point in time this person is pretty happy.
And what we did to collect ground truth data was we gave
them the two by two circumplex model as we've discussed,
and basically they -- the circle started in the middle
and they just had to drag it
to the place where they thought they currently were.
And we had 12 participants, two males,
because there are male emotional eaters, believe it or not.
And oh, what they had to fill out was a pretty simple diary.
So they just had to say what time it was
so we assumed it was the current time but they might go in
and put it in after the fact so they could fix that.
And then they said, was it healthy?
Was it unhealthy?
Was it too much, too little or just right?
And were they hungry when they ate.
Somewhat, very, not at all, stuffed.
And then they hit submit.
We also asked them,
in the circumplex, how engaged they were
with what they were doing right now.
Because sometimes you're super engaged in a task and you eat
for emotional reasons but you're actually like not stressed
or in a negative mood.
What we found, across these 12 people, was that there were a lot
of individual differences.
They were very good about entering their mood
when we experience-sampled them.
So that was good.
They were pretty good about that,
entering their food as well.
Six out of 12 participants were predominately stressed
when they ate.
So that's what they were self-rating
when they actually ate.
They were saying they were stressed out.
Very little eating occurred in the serene calm area.
There was eating in the happy areas.
So this is one participant,
the small dots are her emotional ratings throughout the day
and the large dots are the emotion ratings associated
with eating.
And you can see the large dots are predominantly
on the negative side of the chart so whether
or not it's stressed or just depressed and bored,
that seems to be where this person was eating.
She was pretty representative.
There were people who ate when they were happy,
you can see this person did too.
A couple times or at least neutral.
We didn't ask, are you with anybody
and I think we should have because that might be
when happy eating occurs.
It's just a thought.
So in our next study we're going to ask.
And so since we saw that,
and we hadn't built the intelligent system
yet because we didn't know if this was going to work,
we didn't want to go build the whole system out.
We actually ran a second part of the study
with these same 12 participants where we just chose to intervene
if they self-rated in this upper negative quadrant.
This is the stress quadrant, stressed, angry.
So they didn't know it,
it wasn't an actual automatic system.
We were actually using their self-ratings.
But interestingly enough, there was a significant enough delay
between going up into the Cloud and coming back
that they didn't actually discover
that the system was just using their self-ratings.
They thought maybe we were using the sensors,
because, as I'll tell you in a bit,
we had sensors on them.
So ironically they didn't know this wasn't the real system.
And what our intervention was, was a little bird comes up
and says let's count to 10 and breathe slowly.
And so you have to tap the screen 10 times
and you're supposed to breathe slowly.
Now, we don't know, I mean, they could only go so fast
and they couldn't get rid of the screen.
But we don't know if they were doing deep breathing.
But three of the users, three of the women said,
it was all women, said that it was such a good technique
for them that they actually incorporated it
into other parts of their lives.
So that was great.
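The trigger logic in that second part of the study can be sketched in a few lines: intervene only when a self-rating lands in the high-arousal, negative-valence quadrant. The function and message names below are illustrative assumptions, not the actual Emo Tree code.

```python
# Sketch of the trigger rule described above: intervene only when a
# self-rating lands in the high-arousal, negative-valence (stress)
# quadrant. Names and the prompt text are illustrative, not from the app.
from typing import Optional

BREATHING_PROMPT = "Let's count to 10 and breathe slowly."

def maybe_intervene(valence: float, arousal: float) -> Optional[str]:
    """Return an intervention prompt if the rating is in the stress quadrant."""
    if valence < 0 and arousal > 0:  # upper left of the circumplex
        return BREATHING_PROMPT
    return None

print(maybe_intervene(-0.4, 0.7))  # stressed: returns the breathing prompt
print(maybe_intervene(0.5, -0.2))  # calm: returns None
```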
Other people wanted things like games, they wanted a reminder
to go take a walk, a reminder to eat something healthy.
Some people said they wanted their social network to intervene,
which we could have done but didn't do.
And so we have a study starting actually as soon as I get back
where we are now going to provide them with a menu.
We've actually got the automatic detection working
and we're going to provide them with a menu
when we intervene just in time.
And they can pick one of any
of these things including social network.
And then we'll see what the relative efficacy
of each of those options is.
Okay, so we had them wearing sensors, I said.
So we had this crazy idea
that we could build sensors into underwear.
And so we got a custom board made down in the hardware lab,
that's the little thing in the middle of the bra there.
And this worked really, really well for women
because we put conductive fabric in the sides of the bra.
Because we wanted to get heart rate and heart rate variability.
Because heart rate variability is a very good indicator
of stress and depression.
So when your heart rate variability goes down, that's bad.
So it means the intervals between the beats,
when that goes down, when it becomes really tight and steady,
that means you're in stress mode.
That means fight or flight has kicked in.
If it's all variable and all over the place, those intervals,
that means that you're actually happy or not stressed, or calm.
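A common off-the-shelf way to quantify that beat-to-beat variability is RMSSD, the root mean square of successive differences between RR intervals. Here is a small sketch with made-up interval values; the talk does not specify which HRV measure the team actually used.

```python
# Sketch: quantifying beat-to-beat variability with RMSSD (root mean
# square of successive RR-interval differences), a standard HRV measure.
# Lower RMSSD = tighter, steadier beats = more likely fight-or-flight.
# The interval values below are made up for illustration.
import math

def rmssd(rr_intervals_ms):
    """RMSSD over a list of RR intervals in milliseconds."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

relaxed = [810, 870, 795, 905, 840, 880]   # variable intervals
stressed = [702, 698, 703, 700, 699, 701]  # tight and steady

print(f"relaxed RMSSD:  {rmssd(relaxed):.1f} ms")   # high variability
print(f"stressed RMSSD: {rmssd(stressed):.1f} ms")  # low variability
```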
So we built these things into the bras.
We had the women wear them the whole time we were running this
study and even the men.
We tried to build it into their underwear but unfortunately
for boxers it's too far away from the heart.
We got a really crappy signal off them for that.
So we've got to think of something better or they have
to wear bras, one or the other, we haven't decided.
But it was kind of funny to have these being made
out in the design lab.
You wouldn't believe how many men wanted to join our project.
This is what it kind of looked like.
The custom board was in the middle, we collected EDA
on the bottom of the breast and then EKG from the side.
As I said, there's another view of it.
So we got EKG, galvanic skin response,
movement through the accelerometers and the gyroscopes
and then we had the conductive soft sensor pads.
This is another view.
Three pads and this is how she kind of wired everything up.
This was the designer kind of doing some fashion.
She even used Scotch tape.
So a little bit of everything.
So as I say, the study is going to start next week
where we actually get to look at which intervention,
we can have them personalize interventions to their liking.
We have even a bigger study that's motivated by this one
that we're going to do with hundreds of people
over the course of a month looking at interventions.
Because we don't believe a little study like this
with a small number
of participants doing automatic detection of your mood is going
to be enough to know what we should be doing
with the interventions.
We think we need hundreds of people of all different kinds
of personality types for a very long period of time to look
at the context they're in.
Because, you know, if the system detected
that I'm nervous right now do I want an intervention coming
in because of where I am?
No. Absolutely not.
Maybe it could tap me and tell me to calm down.
But the kind of intervention is really going to vary based
on where the user is and what context they're in
and maybe who they are.
So we do have the automatic detection working now.
We did get the machine learning done on that.
That's all working great.
It's pretty reliable.
Now we just need to do the user studies on the end and look
at the intervention classes.
To see if we can come up with good policies for what kinds
of interventions to deliver when.
So, any questions about that?
Yes?
>> I'm just wondering, like with the food diary.
Is it intended like whenever the user eats or just
for like snacks and stuff?
Because I can imagine that like the data can be construed
like if you filled it out like when you eat dinner.
Like if you eat dinner at a specific time like regardless
of what mood you're in.
Like it would influence
like when you would need an intervention?
>> Yes, well, obviously the time
of day would be something we would be looking
at as we collect this data.
Because we have a feeling that late at night is
when we're going to see the emotional eating,
but not always.
But sometimes during the day my guess is it's not around a meal.
Or it could be around a skipped meal.
So that's what we're presuming we're going to see.
We don't know right now.
We didn't ask enough information in the first study
about who they were with, why they ate the way they did.
So we're going to have to ask a few more questions to get
at that kind of issue.
Yeah. All the users in our study, this is the problem
with doing studies in the wild, right?
All the users in our study said they ate better
because they knew we were monitoring them so there is
that effect, you know, that effect of being observed.
That does kind of ruin your data.
But we're hoping that with longer periods
of time we're going to see more realistic behaviors
and probably realistic rejections of some
of these interventions we come up with.
Yes?
>> [Inaudible audience question]
>> No, but I think that all the time.
So we just did a 35 person study in building 99
where all of MSR is and we were basically experience sampling
them every 15 minutes.
It was random but it was pretty much that often.
And I know for a fact we made them more stressed out.
The only thing that made it better is we paid them $250.00.
So they actually usually replied.
Yes?
>> [Inaudible audience question]
>> So we didn't -- we made it as brain dead as possible.
We could have asked them
to minutely enter every single ingredient and what not
like some of these food diaries do.
But they are very tedious.
And we knew -- I didn't even tell you the funny story
about these bras.
We knew these women were already basically laying
themselves on a knife for us.
Right? Because those bras had --
you had to take them off every four hours
and charge the batteries.
So they would wear them from 8:00 to noon, take them off,
run to the bathroom with their other bra, take them off,
plug them in and then after lunch they'd put them back on,
you know, and then before dinner they had to take them off,
plug in the batteries.
They were working so hard for us that we made the diary part
of it, like so simple.
But, you know, we could have plugged into some
of the software that's out there like Fitbit and stuff
that gets more particular about what it is you ate.
>> [Inaudible audience question]
>> Oh, yeah, yeah, yeah.
Hydroxycut, all that stuff.
Yeah. And maybe a level of protein in what you ate,
stuff like that, yeah, we could get really -- yeah.
Well, you know what's interesting,
one of my friends coaches senior executives at Microsoft and just
to get to this point she's telling them what stress is
doing to their bodies and they all want to lose weight
so they all want my app.
And they are so busy, like to fill
out a little thing there is just no way.
But maybe filling out something like a hydration level, yeah,
maybe something little like that could still be doable.
That's a good idea.
Thank you for your suggestion.
Okay. So let's talk about the second application.
This one I really love.
This is near and dear to my heart.
About three years ago we built a system called Honest Signals.
And the idea of the system was, it was built into Skype
and it was to look at you when you were
in a video conference call with someone else, it was to look
at your face and your speech prosody and your movement
and your nodding and your smiling and it was supposed
to say on how much you're talking, this was supposed
to give you feedback about how you're doing
in that conference call.
And the other person is also getting feedback
about how they're doing in that conference call.
And so when we built it, I really loved it because,
you know, in a high tech culture it's like dog eat dog grr
and you know, Microsoft people can be like grr,
and the first thing
that happened was I started putting Microsofties in front
of these things and, you know, gave them a task.
Like convince the other person of X. And they would look
at the signals and they'd say things like, oh,
I see I've been talking 75 percent of the time.
I should let you talk.
And I'm thinking, oh my God.
You know, it worked.
So they saw these signals about how agreeable they were being,
how in charge of the conversation they were,
how much time was spent talking.
How much they were talking over the other person,
which was very interesting feedback.
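The talk-time signal in particular is easy to sketch: given per-speaker voice-activity segments, the share of talk time falls out of a running sum. The segment format and data below are assumptions for illustration, not the Honest Signals implementation.

```python
# Sketch: computing the "share of talk time" signal from voice-activity
# segments. Each segment is (speaker, start_sec, end_sec); the format
# and data are assumed for illustration.
from collections import defaultdict

segments = [
    ("me", 0.0, 45.0), ("you", 45.0, 52.0),
    ("me", 52.0, 110.0), ("you", 110.0, 128.0),
]

talk = defaultdict(float)
for speaker, start, end in segments:
    talk[speaker] += end - start  # accumulate seconds per speaker

total = sum(talk.values())
for speaker, secs in talk.items():
    print(f"{speaker}: {100 * secs / total:.0f}% of talk time")
```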
And so we tested this and people thought it was interesting,
they weren't really sure what to do with it
but they actually liked it for going back
after the conversation to see how they did.
Some people said, oh, that would be great
for practicing job talks and stuff like that.
Or for looking at salary negotiations, how did I do
and that kind of thing.
And it was born from a guy at MIT, Sandy Pentland,
who was using sociometers, these badges
that people would wear.
He could predict in a dorm, if everybody was wearing them,
he could predict 97 percent accurately
who was going to date whom.
And he could predict with 90 percent accuracy
if you would get the job when you were interviewed.
So we thought, can you build
that into video conferencing and we did.
And that was three years ago.
Then I was giving a talk at University of Washington
and there was this woman, Wanda Pratt,
who was in the iSchool and she does medical informatics
and she came up to me after my talk and she said,
that honest signal stuff, have you thought
about doing it with doctors?
And I said, no, it never occurred to me.
And she said, well, doctors, it's really hard to train them
in medical school to be empathetic.
And clinical empathy is actually associated
with more positive patient outcomes so they adhere
to their medicine better.
They just feel better overall, they trust their doctor more,
they have less anxiety and they actually have fewer
complications when their doctor is empathic.
So she had the idea that we could do this and it's very,
very hard to train in medical school
and it's very hard to measure.
So basically what they do, they get human coders who are really,
really good at coding for these signals and they stand there
and they watch like a half of your session with a patient
in the clinic, and they make their measurements, all on paper
and pencil and then at the end of that session you get feedback
about how empathic you were.
That's it.
That's what you get in medical school.
So given all that and given how expensive doctor's time is --
oh, and the other thing that they use
to measure empathy is self-report,
both from the physicians and from the patients.
So we were thinking, what if we could do it automatically
and we could give them feedback either in real time
or like AffectAura we could actually do it
as a reflective tool.
Either one of those.
So before I was telling you -- I was telling some of the students
at lunch time, how many people know what the Wizard
of Oz technique is?
You do. So we call it the WOZ technique.
So it's when you build a system -- I'm sorry.
You build a user interface
but there's no architecture underneath it.
There's no system really running it.
In fact it's just someone behind a curtain moving the dials.
So the user of this system thinks
that it's working intelligently
but it's actually just someone faking it behind the mirror.
So that's what we decided to do with this
because to build this system was actually going
to be quite difficult.
There was a lot of machine learning and a lot of features
that we had to process.
So that's what Rupa did.
We took the Honest Signals, which we proved worked,
and we tried to map it to something they were using
in the clinical empathy world and they also have a two
by two model called the interpersonal circumplex
and that looks at affiliation and control.
So affiliation can basically be described as how warm
or cold you're being to the patient
and control means are you being really dominant,
are you talking really loudly, leaning over them
like that in a glaring kind of way.
Or are you sitting back and letting them do all the talking
and just being submissive?
So those were the two dimensions we needed to code for.
So Rupa basically was sitting behind a one-way mirror
back here, we brought in 16 health care professionals
and we can do that at Microsoft
because we have a recruiting team that has a database
of like a million people who volunteered to come
in for studies at Microsoft and we can screen
on all kinds of attributes.
We actually got some doctors,
we got some emergency technicians, we got nurses.
So we got a bunch of health care professionals
and did what they do in medical school.
It's kind of interesting, they hire seasoned actors
and actresses who are trained to do clinical acting.
And so they're trained to be patients, basically.
And we gave her a scenario from out of the medical community,
one that would actually have been used
for training doctors.
She was supposed to be very,
very sick and of a very low socioeconomic class.
So she had all these problems, her car breaking down,
she didn't have heat, stuff like that.
So very sick.
And the idea here was the health care professional was supposed
to mentor her.
So they weren't, you know, trying to be a doctor per se
but they were supposed to be very empathetic.
Right? And so we did this
and behind the mirror Rupa was moving the software.
Okay? She is moving the interface.
And the health care professional is told, here is the display
that shows them whether or not they're being empathic
and I'll describe that in a minute.
But the health care professional is told you can look
at the display if you want to.
It will give you feedback but if you don't want to, don't,
and let us know how that goes.
So the first tool I designed is kind of weird.
Our first attempt, let me show it to you.
>> [Inaudible sound from video]
>> Okay, so that was the first effort.
We actually brought these health care professionals in
and asked them to rate that display.
And while they did say that it was helpful and informative,
she didn't label this axis, but this is a scale
of not at all to very.
They answered helpful and informative and even interesting
but you can see it was also confusing and distracting.
So that was not our design goal.
So we went to a new design and I see I'm running out of time
so I'm actually going to just jump to videos at this point
and finish up real quick.
>> [Inaudible sound from video]
>> Okay and then I'm going to show you our last design
because the health care professionals told us
that blue coloring was wrong as you can imagine.
>> [Inaudible sound from video]
Okay, so that design they actually liked a lot
and of the 16 health care professionals, 15
said they would actually use it
in a clinical setting.
One health care professional, he was a doctor, actually told us,
oh, I look at the clock all the time anyway so you can put it
at the back of the clock.
Which I thought was kind of -- anyway.
So we've done a couple other things.
I want to wrap up here.
We've done some things in this area
that have been kind of whimsical.
Some things we've tried to make useful, where we've tried
to have external devices actuated based on your emotion,
and I'm just going to show you a couple videos,
hopefully this one will work, I'm having trouble
with the embedded ones here.
I'll show you just two videos, examples,
of this other kind of work.
So the first one is Mood Wings.
>> How accurate are we
in recognizing when we are stressed?
Does knowing we are stressed help us calm down faster?
If someone told us that we were starting
to feel stressed could we use that information to deflect
or prevent an acute stress episode?
We present Mood Wings, a real time biofeedback system
in which a wearable butterfly with [inaudible]
through real actuation.
[Inaudible] that often stress [inaudible]
that most people who engage in data.
We thought that providing users with Mood Wings
in a simulated driving environment would help them
manage their stress levels more effectively.
While [inaudible] more safely,
[inaudible] they experienced higher stress levels,
physiologically and self-perceived.
Despite this users were enthusiastic
about [inaudible] expressing several alternative contexts
in which they would find it useful.
Okay, so Mood Wings is one that's interesting.
We put them in kind of a funny situation
so we make the driving simulator harder and harder and harder
to the point where as they went people were walking out in front
of you on roads, trucks were slamming on their brakes
in front of you, it was really bad.
So we never actually gave users a time to relax
by using the wings and the wings just kept flapping harder
and harder which really stressed them out
and they were already stressed out.
So they told us, you know, yes, I was aware of my stress levels,
your stuff really reminded me I was stressed out.
Please don't do that anymore.
But believe it or not, they actually drove better.
So we're doing studies now looking at --
we have a dragonfly now and a couple other things.
Looking at can we use it to calm the user down.
And that's what I'm going to leave you with, the last piece.
This was a piece of art that was meant to try to calm you down.
After it got angry.
So this was done by an architect intern.
[ music ]
She actually sewed this thing herself.
[ music ]
You would never use heat-induced Nitinol wire and I'll tell you why.
[ music ]
So this is the fabric getting angry, stressed.
[ music ]
Can you tell it's squishing?
It's very slow.
[ music ]
And now it's going to relax.
I guess she didn't show the relax part.
Okay, I'm going to stop it there but I will tell you the story
about the user study, was very funny.
So, you know, what we did was we had users sit
down for five minutes and write about a very recent
but very stress-inducing event.
And so all they did was write, write, write.
They were supposed to remember the smells,
what they were wearing, who was with them,
what the date and time was.
Everything they could remember to get back in that mood.
And then they rated themselves using the circumplex
model and they were all in the upper left quadrant,
every single one of them, it worked.
And then we had them turn around and they got to see
that fabric get really nasty [inaudible]
and it was really great, because that [inaudible] gets really hot
so you had to turn on a huge fan or it would have exploded
in the middle of the lab, right?
And so it not only gets tight and taut
but actually gets really loud and then all of a sudden starts
to shine down some red light so it looks angry.
And they're sitting in there,
all completely stressed out,
and then we relaxed it for 180 seconds
and then they all [inaudible] and moved to the right
so they calmed down by watching the fabric.
So we believe that these actuated devices actually could
be used to calm you down when you're in a stressful state
and we're going to continue to work on that
to see what really works as an intervention for users.
Whether it's something you wear on you like a ring
or whatever it could be.
So the next time you hear me come talk I will have hopefully
lots of interventions that actually work.
Thank you very much for your attention.
>> Applause.
>> Did anyone have any questions for Dr. Czerwinski?
>> Check, check.
>> [Inaudible]
>> If we're interested in learning more about this,
do you have a research blog or any papers
that we could look up?
>> Sure, absolutely.
In fact most of them are under review right now,
that's how recent this work is.
So we're submitting a bunch to Affective Computing,
Pervasive Health, Creativity and Cognition.
So we'll find out very shortly if these papers get in.
And as soon as they do I'll have them on our website for sure
and the videos as well.
Thank you.
You are good about asking questions as we went.
Yes?
>> I have a question about the design
for the biofeedback and [inaudible]
>> Okay. Anybody else?
Well, thanks a lot for your attention,
great questions all along.
I really appreciate it.
>> Applause.