Dr. Melanie Fried-Oken (Oregon Health & Science University) -- Janice and I were charged by the RERC
to talk about language and learning and cognitive science considerations in the design of AAC technologies
for children and adults.
Our goals today are about language learning and language use, and we have three things that we would like you to leave with.
First is a challenge to AAC stakeholders with issues that will affect the future of the field.
Second is to introduce concepts and questions from related research fields
that we must examine as AAC stakeholders.
And third, is to question the status quo and propose additional directions for AAC research. We have a lot of common ground.
We entered this with Janice representing research that was done in the area of language learning and children,
and I entered the task as a researcher in adults with acquired or developmental disabilities,
and we had colleagues that said, "Well, I'm not sure that the two go together --
Maybe we should have separate lectures or separate strands of these two areas of research."
And we decided we have common ground, and we're going to address them today.
So, language learning and use is a challenge for all people with complex communication needs --
for young children with developmental disabilities, as well as for adults with acquired or chronic disabilities.
So, for children we're going to address language acquisition and development, and for
adults we're going to look at language recovery, language loss, or language degeneration.
And many of the challenges that we're going to look at are similar for the different groups of people
with complex communication needs.
So what do we know? Where are we starting?
We're starting with AAC having a positive effect on communication, language, and literacy outcomes.
We know already that with communication support, we see gains in pragmatics (so turn taking, and requesting and commenting).
We see gains in: linguistic skills of receptive and expressive vocabulary; in semantics, syntax (so message complexity);
in intelligibility; and in participation.
We see that when we provide communication supports, it opens someone's social network
so that they have more communication partners,
and we know now that by offering communication supports, we do not put speech or language development at risk,
but rather we support and improve speech and language development.
And that's often a question that both families of children and adults have, isn't it?
"If we give a child a device, that's going to stop them from wanting to speak."
"If I give my sister who had a stroke a communication board to help her play bridge, she's never going to want to speak again."
And we know that.
We have data that shows that that's not the case. And we see gains with communication support across ages,
different kinds of disabilities, and in different environments.
So what we're going to do today is take some of those things we know and look at cognate fields that,
in general, need to be brought into augmentative communication.
We're going to look at three specific challenges from cognate fields. We're going to look at language development, and
quantitative and qualitative shifts across language stages.
We're going to look at working memory and the dual-task demands that our technology imposes, and we're going to
look at the demands of visual processing.
Dr. Janice Light (Penn State University) -- Melanie and I are going to be jumping up and down a little bit.
And Melanie did a really nice job, I think, of celebrating the incredible progress we have made in this field.
We have made tremendous gains in terms of providing communication access to individuals who
previously weren't able to participate and communicate.
But I think that the challenge Melanie and I are going to propose to the field this morning is that we can do better.
That we still have individuals who don't have easy access to communication and to participation, and that many individuals
are not doing as much as they could do if we approached this whole process in, perhaps, a more empirically based,
research-based manner.
So we have been doing things the same way for a long time out of habit, and I think we're
going to challenge the field to shake things up a little bit and to think about things in a new way.
So one of the tremendous challenges that we have is that language skills are always changing, regardless of who we are
working with. For children, as they learn new skills, they go through many changes, both quantitative and qualitative changes.
Some adults who have had acquired disabilities may have had a sudden loss of language skills and then experience some recovery.
Some adults who have acquired disabilities may be experiencing gradual loss of language function over time.
And in both cases, change is involved.
And because of those changes in language competence levels, either due to development or to adult language impairments,
we really are faced with needing to be able to ideally make seamless changes within AAC technologies over time
to really allow people to transition through these stages.
Those changes may involve changes in vocabulary, changes in the representation of that vocabulary,
changes in the organization of that vocabulary within AAC technologies, changes in layout, selection, output, etcetera.
So what does the research on child language development suggest to us?
Well, first of all, we know that children's language systems differ dramatically from those of adults.
And that really becomes a problem if we're designing AAC technologies that are based on the way adults think about
language rather than the way children do.
And we know as well that children's language systems change significantly over time as they develop and acquire new skills.
And those changes -- and this is the complicated part -- are not just quantitative changes in terms of being able to do more,
but in fact are also qualitative changes in terms of changing how they think about the world and how they interact with
and process information. So, then, consider, for example, the sequence of language development, where kids at birth are preintentional,
largely reflexive and not communicating intentionally,
and they progress rapidly over the first five years of their lives, becoming intentional, but not yet symbolic,
going through the early symbolic or first-word stage, gradually learning to combine concepts, and communicate much more
complex meanings,
and then moving into the metalinguistic stage of being able to talk about and analyze language and the
beginning of the development of literacy skills.
And in each of these stages, kids process information and conceptualize language in very different ways.
So how do we go about understanding some of these qualitative differences?
Well, in a recent research project, we asked ourselves how do young kids think about and represent early emerging
language concepts?
What are their conceptualizations, and how do they compare to adult conceptualizations, the ones that we have
typically used in traditional AAC symbols?
So, the study looked at 60 preschool children that came from different cultural backgrounds.
The kids were first asked to draw 10 early emerging abstract language concepts, so: draw 'want', draw 'who',
draw 'big', draw 'all done', draw 'come'.
And then the kids were asked, later, to name AAC symbols for those concepts. And we happened to use PCS
symbols for this study,
but I'm very confident that similar results would have occurred if we'd used SymbolStix or Dynasyms, or whatever
representations are traditionally out there.
Here, for example, is the PCS for 'come' which some of you will recognize. Very few of the children understood that as the symbol for 'come'.
Many of them thought that it was a pointer finger; a boo boo, of course, if you're a preschooler;
and a lot of kids tried to make sense of these abstract things like arrows that we include.
So, one child told us that it was a hand with two driveways, of course -- nothing to do with 'come'.
How do the kids think about 'come' instead? They almost all drew the same thing, which was, in fact,
a person with someone approaching them, or just seeing them, and obviously this joy in terms of their arrival.
The PCS for 'big', again, the adult representation of that and the conceptualization of it as a relative size concept --
none of the kids understood this as 'big'. They thought it was ants, sludge, colorings (scribbling on a page),
a blacktop for playing basketball, chocolate, or germs. OK? But not 'big'. The children almost all drew the same thing.
They conceptualized 'big' as a concept that denoted power or capability as represented, for example, by this little boy,
who was 5, who drew himself as "I am big. I'm five now." And you see his little brother under his foot at the right.
That's 'bigness'. Here's 'want', for example -- the traditional approach to 'want'. Very few of the kids -- one child -- understood
this. Others thought it was a teepee, cut-off hands, the Texas Chainsaw *** in effect here -- or hands and soap, but not 'want'.
And for the kids, of course, this is definitely 'want'. 'Who' -- again, an adult representation of a question about a person --
the kids did not understand that metalinguistic analysis of the concept. None of them understood that. Instead they thought it was the back
of a head, a boy eating spaghetti -- slurping it in -- a haircut (you know, you do the designs in your hair), or a seven with ears,
of course, but not 'who'. The kids almost always drew the same thing. In this case, the little girl who drew this is in green,
and you can see her pointing to this rather strange character with spiky hair in the corner,
and she says to her mom -- and her mom says, "Well, that's your new dad" -- having just gone through a recent divorce.
But that's the concept. These are just a few examples. But what we do see is that these preschoolers, these kids,
are approaching their conceptualizations of language and their representations in very different ways, that they represent very
different meanings. They depict entire scenes in their representation and they embed the concepts within
those scenes. They include complete objects or people.
They do not use parts where you have to infer the person and the intent behind them.
And typically, they are embedded in familiar people, objects or experiences and, interestingly enough to us, although we had six
very different cultural groups,
there was tremendous consistency across the representations that the kids drew, regardless of cultural background.
So, in conclusion with this part, kids obviously think about the world in ways that are quantitatively and qualitatively different than adults.
Your typical adults, like ourselves, tend to think about concepts and define them based on our semantic memories
-- sort of a dictionary definition of the concept.
Young kids learn language through their experiences, and they're much more apt to draw on those experiences and use their episodic
memories to define concepts.
So these are some of the challenges to us and how do we accommodate these changes over time?
So, I'm going to turn it over to Melanie who is going to do the adult piece and I'll be back in a couple minutes.
Dr. Melanie Fried-Oken (Oregon Health & Science University) -- So, we have similar challenges with adults.
A lot of the work that we're going to talk about now comes from our RERC partners in Nebraska.
We can ask, for adults, how do adults with acquired language impairments represent language concepts? And we'll talk about
different groups. So the first group we're going to talk about are adults with traumatic brain injury --
so, those who have been in some type of accident and have lost language. Well, surprise!
The alphabet and orthography are very overlearned in the adult population, and when we approach augmentative communication
for adults with TBI,
the research says that we should be using orthography instead of pictures and symbols for our language representation system.
So, asking that question, we have an answer. Adults who are losing language because of progressive aphasia --
a relatively new diagnosis, only about 15 years old, and those of us at adult AAC clinics are seeing a lot more adults with progressive
aphasia -- retain single-word reading during the loss of word-retrieval skills. So, we know from our research now,
that if we're going to present language in technologies for adults with progressive aphasia, that we should present single words and phrases
to them. If you're interested in seeing some of the examples of the use of single words and phrases --
you know how the RERC has wonderful webcasts on our website -- I've done a webcast on progressive aphasia and AAC,
and many of our subjects have given us permission to include their videos of conversations with and without technology.
So, you can look at those. But we know that by including single words and phrases in communication boards,
that we're seeing a significant increase in the use of specific target words during conversations with researchers
and with their spouses or familiar communication partners.
For the adult who has a chronic aphasia, we know from the Nebraska group that there are many different ways to present
information, and that personally relevant and contextualized photographs produce the best language outcomes for adults with chronic
aphasia. And what I've done here is give you examples: the top picture is Fort McHenry, where I went yesterday, in Baltimore.
And that is a personally relevant, contextualized photograph for this family. Below that is a Baltimore Oriole.
That is neither contextualized nor personally relevant -- unless you happen to be a baseball player.
The third one is the harbor out here, and that is a contextualized photo, but not personally relevant at all.
So, if we were to compare -- which has been done -- which photos produce the most appropriate
language in a functional situation for adults with chronic aphasia, we see that the top photo should be included in their AAC technologies.
So, if we move a little bit from 'What vocabulary representation should we use?' to another research question
about qualitative differences --
'How do we show the vocabulary in our AAC technologies?' -- we can ask: how do we effectively map
the internal language system of individuals with complex communication needs to the external AAC technologies?
So, I'll give a little example for adult research, and then Janice will come up and talk to you about some of the child stuff.
So, what does research tell us about adult AAC users? And this is, again, from our partners in Nebraska,
that for adults with chronic aphasia, the layout does affect performance. For locating symbols in our AAC technologies, having a
multi-level navigation system -- in the top-right picture you see the buttons that form an upside-down 'L'; that's called a navigation bar on a visual scene display --
appears to increase efficiency (faster responding) and symbol accuracy,
compared to the traditional grid of just going level by level and not having that upside-down 'L' on every screen.
So, when you have the bottom grid, you press one picture and it takes you to another level, and you can press another picture
and go to another level, and there's no representation of all the different levels on each individual screen.
The top screen has representations of all the different levels on every level, and that appears to help individuals with aphasia
for the speed of communication and symbol accuracy.
Dr. Janice Light (Penn State University) -- So, Melanie has talked a little about how
the type of display, the layout, affects performance for adults.
We find the same type of thing occurring with young children as well.
And on the right-hand side are two different examples of how we might display language
in an AAC system for children who require AAC.
The bottom one is a traditional AAC display as we have often used it -- a grid layout.
And on the top is a visual scene display, where the language is much more embedded in context.
The top one shows Lilly playing telephone with her mom.
And if she wanted to communicate about the telephone, she would touch the phone and it would retrieve the speech output 'phone'.
If she wanted to talk about 'mommy', she would touch 'mommy'.
If she wanted to say 'hello', she might touch the area around her mouth.
This comes from research in our lab at Penn State, and I'm incredibly fortunate to work with
two wonderful colleagues who are both here today, Krista Wilkinson and Kathryn Drager, so
huge thanks to them for their input on this research.
Krista and I are just in progress with a study that looks at infant performance with these different types of layout,
and we're finding that infants show much greater attention and interest in the photo visual scene displays than in traditional grid
displays. I'm suggesting that those may be better fits for very young children.
Similar work that Kathy's taking the lead on looks at toddlers' performance, and, again,
we found that they're much more accurate locating vocabulary using visual scene displays than using grid displays.
As kids get older, four and five, we start to see those emergent metalinguistic skills.
We see that at that stage kids in fact are becoming able to handle grid displays, but,
really, those grid displays are quite complex, and handling them requires many more linguistic and metalinguistic
skills. And, although they do equally well with VSDs and grids at this stage,
they still struggle with more complex types of displays like iconic encoding.
So, some of the challenges... and Melanie and I were extremely selfish when we did this session,
in that we're throwing out all the questions that we don't have answers to right now.
So how do we design AAC technologies that accommodate these qualitative and quantitative shifts that occur either with the child's
language development over time,
their language learning, or alternatively with an adult's language recovery, their loss, their degeneration, whatever way it's gone?
How do we know when to adjust features to accommodate those changes?
How do we map this internal system that you can't see, and even come to terms with what that internal language system looks like?
How do we map it onto the external AAC technologies?
And, in fact, then, can we draw on the technologies themselves to support or scaffold these transitions developmentally, or support
loss over time?
So, those are some of the questions we hope we'll be dealing with -- or be challenged by -- down the road.
Dr. Melanie Fried-Oken (Oregon Health & Science University) -- OK, now we're getting to another challenge, and this is working
memory. I'm going to tell you what working memory is. Successful use of AAC technology places significant working memory demands on the user.
So, what is working memory? Working memory is the ability to hold in mind and mentally manipulate information over short periods of
time. It's the storage and processing functions that are active at any given moment.
It involves attention, concentration, sequencing skills, and it involves motor and sensory skills.
Here's some examples of working memory for you and me: remembering a new telephone number when we're trying to find a
pen and paper to write it down;
driving and trying to follow directions that we were just given: "Go left and then take a right and then go left at the store."
So, we have to hold in memory the new information while also remembering other things and performing.
For children: remembering a sentence that the teacher says to write down,
while also remembering how to spell each word and using their best handwriting;
or measuring and combining ingredients when you've just read the recipe and you're not looking at the page any more.
So, it's a form of multi-tasking, storage and processing at the same time.
OK, working memory for people who use AAC: Learning the name of a new toy and trying to find its symbol on a grid with 10 buttons.
So, you have to hold the name in memory and keep looking; answering a question on a history test with your auditory scanning
system;
answering a question about recent medical procedures by using eye gaze to navigate through screens to find the correct button.
Does everybody see the struggle we're having here with working memory? Your bus driver's lost -- this is a true one.
Your bus driver's lost.
You need to give directions to her about where you live by finding the sequence of messages to hit on your speech-generating
device. So, all of these require significant working memory that we have not dealt with in our field at all.
In fact, here's a beautiful graph that shows that as the task gets more demanding and we require lots more resources, working memory
degrades.
All of our AAC technologies tax working memory and require considerable resources during language formulation for both
children and adults. For adults, we know that when you present a dual-task demand --
when you ask adults to follow a circle while we tell a story at the same time -- language degrades.
We know that working memory in general sees age-related changes in elders:
older adults cannot store as much, or process as much information, because of those changes. What do we know about research on working memory in children?
We know that working memory capacity increases with age until adolescence. My husband says it's all downhill after that.
We know that adult capacities are more than double that of a four-year-old child.
And for the child who is using auditory scanning, whom we're trying to teach at five years old, there are huge working memory
demands. And we know that working memory is impaired in children with developmental disabilities in general and will not
generally reach typical adult levels.
So here are some examples. I found a wonderful set of intervention strategies from the Center for Working Memory and Learning at the
University of York. On one side are the working memory interventions they propose in the classroom,
and on the other side are some questions and directions I think we need to go in with working memory.
So, they say: "Evaluate the working memory demands of the learning activity." We need to evaluate the working memory demands of the
AAC technologies;
They say: "Reduce working memory loads if necessary." We say: "Reduce the operational and cognitive demands of the AAC
technologies."
For treatment with children, you're supposed to reduce processing demands. We need to figure out how to reduce some of the
processing demands of our technologies.
"Frequently repeat information." We need to provide opportunities for repeated device operations.
"Use memory games." We need to provide some automatic processes for language generation and develop memory-relieving
strategies. We need to figure out how to organize language for the user's strengths.
So, just some really simple ideas -- examples of where to get started with working memory.
So, how do we design AAC technologies to lessen the working memory demands? And how can we optimize device learning
with working memory challenges?
Dr. Janice Light (Penn State University) -- So one of the things that potentially we can do is
to try to find ways to reduce the load generally of using AAC systems and, therefore,
increase the capacity of working memory to deal with other aspects of communication.
One area that may be very fruitful for us to consider is the area of visual cognitive processing,
and we're really indebted to Krista Wilkinson who is here today, and who has really been a pioneer in the AAC field of bringing a lot of
this research and literature to the field.
But she's argued with her colleagues that any use of AAC systems typically relies on the visual modality
and, therefore, the effectiveness will depend, at least in part, on the effectiveness and efficiency with which the information in the AAC
display can be perceived, identified and extracted
by not only the communicators but also by their partners.
Any way that we can reduce that load is going to increase the capacity that we have for other aspects of communication
and language. So, we really need to understand the visual and cognitive processing demands.
There are different ways to display information visually: a visual scene on the top; a grid on the bottom;
and many, many other possibilities that we haven't even considered in this field.
And those different types of displays can pose very different visual cognitive processing demands.
and by setting up those displays, we can either support or impede the general communication of the individual, depending on the fit
with their processing.
So, what does the visual cognitive research suggest generally?
Well, that literature suggests that individuals process naturalistic scenes very rapidly at a speed of 200 milliseconds or less.
That's what we do every second of every day that we're alive: process scenes such as the one out there.
They recognize the overall context, and also the elements, within the first glance,
and that context really helps to limit how much you need to think about in discriminating the objects in the scene.
Those scenes definitely exploit real-world experiences and, therefore, support recognition and activation of experience-based
schemas.
In contrast, performance with isolated symbols like shapes in some kinds of arrays is much worse than performance with scenes,
even though on the surface it appears that those displays are simpler in that there's less information in them -- fewer elements.
So one of the questions we've been asking in current work that's in progress is,
What is the effect of these different types of displays on the visual attention of infants and beginning communicators?
And we've had a split screen presentation with a photo VSD on one side and a grid display on the other.
The positions are counter-balanced, and we've used eye-tracking technology to measure visual attention and interest. And here are
the preliminary results.
They show that infants look first and longest at the photo VSDs -- they are definitely drawn to them compared to the traditional grid displays --
and show a strong preference, with the bar on the right showing the amount of time spent looking at the visual scene display
compared to the grid on the left.
What elements attract visual attention? Here's an example of the viewing patterns.
On the left hand side is an example of a visual scene display that would be set up for eating, for example, for a very young child.
And on the right, the light areas show where the participant is looking most, and the dark areas where they're not looking.
And I'm assuming you can see from this that it is the people and the faces,
and the main activity of eating that are drawing the attention of the individual.
Another example of that scene or activity, and again we see that the focus of visual attention is very much drawn to the people
and the duck or the play activity.
What is the effect then? Clearly, humans in scenes are very powerful attracters of attention and so,
Krista, recently in her lab, has just finished up a study that's looked specifically at the effect of people in visual scenes
on visual attention and processing.
And here's an example of what we've found.
In this case we have a visual scene of Christmas: the Christmas tree, the child, a dog in the front and a cat.
You can ignore the red lines which are just dividing up the space, but the yellow lines show the track of the individual's viewing
patterns, and the circles show how much time they spent in various areas.
And if you look at the distribution of time in the upper left-hand corner, you see the big red area,
which is the time spent looking at the people within the scene. The next, the dog, is the green -- another
animate object -- and the others take some attention but not nearly as much.
And, in terms of the fixation sequence, you see again on the right hand side that it's the people that attract attention first.
So, attention is drawn to humans: they are seen more rapidly, and looked at for longer, than any other elements.
And those results are robust across scenes, even when the humans are very small, and even when there are significantly
competing elements --
alright, I told her I would studiously ignore her regardless of what she did, and now she is unruly.
David, would you do a behavioral intervention here? We're almost done. So, what are the challenges?
How do we design AAC displays? How do we use this research on visual cognitive processing to inform what we do with displays,
and to minimize those visual cognitive processing demands and maximize the effectiveness and efficiency of performance
so that the resources can go toward the communication and participation, not towards dealing with the visual presentation of the
information?
And how can we, in a positive way, exploit those processes and preferences
to engage individuals who may otherwise be difficult to engage with technology and with partners in the communication process?
So, there are a few cautions. We've gone outside the field to draw on research from a variety of cognate areas.
Typically, that research has involved typical participants. We are beginning to tease apart some of the issues of how
individuals with complex communication needs may process information, but we are well aware of the fact that...
that may be very similar, or it may be very different in some cases.
There may be effects of different disabilities, effects from associated sensory perceptual or motor impairments,
effects of age and life experience, environment, culture, etcetera.
And, we really need to begin to get at these processes.
Dr. Melanie Fried-Oken (Oregon Health & Science University) -- So, where do we go from here? So we have given you
three challenges -- just three.
We could come up with 3,000 -- challenges that are examples of the many language and cognitive science considerations that need to be
addressed in the design of AAC technologies.
Our next steps for us to discuss as a group are, What are the most important issues to address first?
Do we take the lowest-hanging fruit, or do we go for the greatest challenges?
And, how can the research have the greatest impact for individuals with complex communication needs?
Thank you.