AEGIS has been running for just about 4 years now,
and in that time, in addition to developing the Open Accessibility Framework
– the theoretical underpinning of our work –
we have proven that framework in the desktop, web and mobile spaces.
We developed the Open Accessibility Framework by learning from the built environment.
In the built environment, you have several steps to create something that is accessible,
and several steps to use something that is accessible.
And then in AEGIS, we apply these concepts, the creation steps and the use steps, to ICT.
In the physical world, the first step in creating something that is accessible
is defining what “accessible” means.
How wide should a door be in order for a wheelchair to be able to get through it?
What is the angle of the wheelchair ramp that you need
in order to allow somebody to go up that ramp in a wheelchair?
If we have an elevator, what tones should it make?
Where should we put the Braille labels?
What additional tactile symbols should we use
to indicate the main floor in an elevator?
Similarly in the ICT world, we need to define what “accessible” means.
What is the keyboard operation technique for this user interface?
Is this a user interface that can take on a high contrast theme or a large print theme?
And what are the accessibility APIs or accessibility services
that allow assistive technologies
to interact programmatically with applications?
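In web terms, those definitional questions map directly onto code. Here is a minimal sketch (illustrative TypeScript, not AEGIS project code) of a custom checkbox that answers the keyboard question and the API question: a defined keyboard operation technique, and exposure through the ARIA accessibility API.

```typescript
// A custom checkbox built from a plain element: the ARIA role and state
// make it visible to assistive technologies; tabindex and the key
// handler define its keyboard operation technique.
function makeAccessibleCheckbox(el: HTMLElement, label: string): void {
  el.setAttribute("role", "checkbox");        // expose via the accessibility API
  el.setAttribute("aria-checked", "false");   // programmatically readable state
  el.setAttribute("aria-label", label);
  el.tabIndex = 0;                            // reachable with the Tab key

  const toggle = () => {
    const next = el.getAttribute("aria-checked") !== "true";
    el.setAttribute("aria-checked", String(next));
  };

  el.addEventListener("click", toggle);
  el.addEventListener("keydown", (e: KeyboardEvent) => {
    if (e.key === " ") { // Space toggles, the standard checkbox keyboard technique
      e.preventDefault();
      toggle();
    }
  });
}
```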
The second step in creating an accessible built world
is the stock building materials.
When somebody goes to build a building, they buy stock doors.
Well, the stock doors should implement the definition of accessibility.
They should be wide enough for a wheelchair.
When I go to Schindler Lifts to buy an elevator and put it in my building,
the elevator should be designed already from the get-go
to make tones on every floor.
When I buy a placard to put inside the elevator
marking the main floor, the symbols are already there
from the manufacturer of those placards.
In the computer world and in the ICT world, the same concepts apply.
If I am creating a user interface with menus that pull down, dialog boxes,
and user interface elements like checkboxes and sliders,
they should already be keyboard operable.
They should already implement the accessibility API.
They should support theming, so that I can ask for high contrast or large print
and they change to that automatically.
In other words, they should implement and support the definition of accessibility from step one.
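The theming part of that definition can be sketched the same way. The example below uses the modern prefers-contrast media query as a stand-in for a platform theme service; it is an illustration of the concept, not anything AEGIS itself defined.

```typescript
// Let a component follow the platform's contrast preference: when the
// user selects a high contrast theme, switch the stylesheet class
// automatically, just as a stock door is built wide enough from the start.
const contrastQuery = window.matchMedia("(prefers-contrast: more)");

function applyTheme(root: HTMLElement): void {
  root.classList.toggle("high-contrast", contrastQuery.matches);
}

applyTheme(document.body);
// React if the user changes the platform theme while the app is running.
contrastQuery.addEventListener("change", () => applyTheme(document.body));
```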
The third step looks particularly at developers.
The tools that developers use to build their applications
should make it very easy to create accessible things.
In the built environment, these are manuals and standards for construction,
these are specifications that go to builders.
These are even physical tools that help measure the wheelchair ramp,
or check the amount of force needed
to open a door, to confirm that it counts as an accessible door.
Similarly, the developer tools used by software engineers should make it easy
to drag and drop accessible components into my application.
They should allow me to simulate in advance what the themes look like:
how a user with a disability might view my application
through high contrast or large print.
These developer tools might also highlight accessibility errors and help fix them.
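The kind of check such a tool runs can be sketched in a few lines. This toy example (hypothetical, not an AEGIS tool) flags two common web errors: images without text descriptions, and click targets the keyboard cannot reach.

```typescript
// A toy accessibility checker in the spirit of such developer tools:
// walk the document and report elements that fail basic checks.
function findAccessibilityErrors(doc: Document): string[] {
  const errors: string[] = [];

  // Every image needs a text alternative for screen reader users.
  doc.querySelectorAll("img:not([alt])").forEach((img) =>
    errors.push(`Image missing alt text: ${img.outerHTML.slice(0, 60)}`)
  );

  // Clickable non-native controls must be reachable from the keyboard.
  doc.querySelectorAll("div[onclick], span[onclick]").forEach((el) => {
    if (!el.hasAttribute("tabindex")) {
      errors.push(`Click target not keyboard reachable: ${el.tagName}`);
    }
  });

  return errors;
}

console.log(findAccessibilityErrors(document));
```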
So, that’s the creation side. We also have the use side.
In the built environment, we want to make sure that our accessible building is located
near public transit.
We want to make sure that our accessible building
has a wheelchair ramp up to the entrance.
Around buildings that are accessible,
you want the crosswalks to be accessible too,
with buttons that are findable
and that perhaps emit tones or voice to help people
get to your otherwise accessible building.
In the ICT world, we want to make sure that our applications are running on an accessible platform.
Does the platform – the Operating System –
expose the accessibility API to assistive technologies?
Is there support for loading assistive technologies?
Our mobile phones and other devices have security environments
with which we need to negotiate the loading of assistive technologies,
so that the platform recognizes an assistive technology isn't just a virus.
Is there a way for the user to select
a high contrast theme that applies to the entire platform?
Are there support libraries for text-to-speech or for Braille,
or other things that assistive technologies need?
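On the web platform, one such support library is the browser's built-in speech synthesis service. Here is a minimal sketch using the standard Web Speech API, shown only as an illustration of a platform-provided text-to-speech service, not an AEGIS component.

```typescript
// Ask the platform's text-to-speech support library to speak a string.
// An assistive technology relies on exactly this kind of platform service.
function speak(text: string, lang = "en-US"): void {
  if (!("speechSynthesis" in window)) {
    console.warn("No text-to-speech support on this platform");
    return;
  }
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.lang = lang;
  window.speechSynthesis.speak(utterance);
}

speak("You have reached the main floor.");
```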
The fifth step, the second use step, is… well, we need the accessible building to be built.
We need the accessible applications to be built.
The sixth and final step is disseminating the devices that people with disabilities use
when they’re interacting with the public spaces, with the accessible buildings.
Wheelchair ramps are of little use unless people have wheelchairs.
So we need to disseminate wheelchairs. We need to disseminate hearing aids.
We need to disseminate seeing-eye dogs and canes for people who are blind,
and train the users in their use.
Similarly, in the ICT world,
we need to make sure that the assistive technologies
are disseminated to people who need them.
The screen readers are available, screen magnification systems are available,
the on-screen keyboards are available. And users are trained in how to use these.
Starting in the web space, we have made significant enhancements and improvements
to the Accessible Rich Internet Applications specification, ARIA,
and, with our open source colleagues from IBM and elsewhere,
implemented it directly in the open source Firefox web browser.
That implementation also includes exposing ARIA
on Windows through IAccessible2,
and exposing ARIA on UNIX through the AT-SPI interface in the GNOME environment.
The lead we have taken in Firefox has been
followed by Internet Explorer and by Apple Safari.
ARIA support is also in WebKit GTK in the GNOME environment,
and it is being adopted by the Opera web browser.
So the early support that we put behind ARIA, building it into Firefox,
has led the industry in the use of ARIA.
Also, we have been implementing ARIA support
in three different user interface component sets: jQuery UI,
Fluid Infusion,
and MooTools.
- Projects.
Websites.
Website 1.
N, 3 auto-complete options, Netherlands, New Zealand…
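What the screen reader announces in that demo comes from ARIA metadata on the widget. Here is a hypothetical sketch of how an auto-complete field might expose its suggestion list so that a screen reader can say "3 auto-complete options"; illustrative markup, not the project's actual component code.

```typescript
// Wire an input to a suggestion list so a screen reader can announce
// the available auto-complete options as the user types.
function showSuggestions(input: HTMLInputElement, listbox: HTMLElement,
                         options: string[]): void {
  input.setAttribute("role", "combobox");
  input.setAttribute("aria-autocomplete", "list");
  input.setAttribute("aria-expanded", options.length > 0 ? "true" : "false");
  input.setAttribute("aria-controls", listbox.id);

  listbox.setAttribute("role", "listbox");
  listbox.innerHTML = "";
  for (const option of options) {
    const item = document.createElement("li");
    item.setAttribute("role", "option"); // each suggestion is announced
    item.textContent = option;
    listbox.appendChild(item);
  }
}
```

Typing "N" in the demo would call this with, say, ["Netherlands", "New Zealand", "Norway"], and the assistive technology reads the state straight out of the ARIA attributes.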
- We have taken those user interface components,
and we have made it easy to build applications using them through an open source plugin
we developed for NetBeans,
so that developers using NetBeans can now much more easily create accessible web applications.
We have then built a number of sample accessible web applications,
including an accessible calendar, and an accessible mapping application using SVG maps
to help blind users get an understanding of a place before they travel there.
For authors of blogs, or developers of CMS systems,
we then took these component sets
and wrapped them in the appropriate wrapper for WordPress,
so that anyone using WordPress now has a set of rich, accessible components
around which they can build WordPress-based CMS sites or blogs.
We’ve done something very similar in the Java mobile space,
where we first defined the AEGIS mobile accessibility API: AMIA.
We then implemented AMIA on the Lightweight UI Toolkit (LWUIT),
on LCDUI, and on AWT for Java ME environments.
We defined and implemented theme support for the Lightweight UI Toolkit.
We added support in the LWUIT resource editor,
and in the NetBeans consumer of the resource editor's output,
making it easy for developers to create accessible LWUIT applications for Java mobile.
We did something similar for Android applications with the open source DroidDraw tool,
into which we added additional accessibility support.
We then created a number of accessible mobile applications,
including an accessible web browser
and a media player using LWUIT,
an accessible phone dialer and contact manager for LCDUI, LWUIT, and Android,
and an accessible real-time text application for the deaf, for Java mobile.
In addition to these holistic OAF implementations,
we also built a number of third-generation assistive technologies
that took advantage of the theoretical framework and the implementation of that framework.
So, in the Android environment, we built a very innovative assistive technology
for people with severe physical impairments, called Tecla.
Tecla allows someone in a power wheelchair, who has one of a variety of ways of operating that chair,
perhaps a joystick controlled by their chin, or a switch that they can operate with their shoulder…
However they operate their wheelchair, we can take the feed coming off those inputs
and drive an Android phone wirelessly through Tecla.
Through Tecla, we can enter text, we can send and receive messages and emails, we can browse the Web…
Practically anything you can do with a phone, you can do with Tecla.
We similarly built a very advanced AAC application that allows practitioners
to create concept coding supported AAC interfaces and keyboards
that can then be used by somebody on an Android device for communication.
And because we made the symbol sets that we use into fonts,
those symbols can now be used for symbol-to-symbol communication over text messages,
over email, or over any other way that you would normally communicate.
In text, you can now communicate with symbols, using CCF-Droid.
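One plausible reading of "symbols made into fonts" is that each concept maps to a character that a symbol font renders as a picture, so that ordinary text channels can carry the symbols. The sketch below is an illustrative assumption about such a mechanism, not a description of CCF-Droid's internals.

```typescript
// Hypothetical mapping from concept identifiers to code points in the
// Unicode Private Use Area; a symbol font installed on both devices
// renders these characters as pictures, so plain SMS or email carries them.
const PUA_BASE = 0xe000;

function conceptsToText(conceptIds: number[]): string {
  return conceptIds
    .map((id) => String.fromCodePoint(PUA_BASE + id))
    .join("");
}

function textToConcepts(message: string): number[] {
  return Array.from(message).map((ch) => ch.codePointAt(0)! - PUA_BASE);
}

// "Send" three symbols as an ordinary text string and decode them back.
const wire = conceptsToText([12, 47, 3]);
console.log(textToConcepts(wire)); // [12, 47, 3]
```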
We made important strides for authors by building a plugin for
OpenOffice and LibreOffice called AccessODF.
This plugin will review your document, then highlight and help you fix any accessibility errors:
things like not properly titling your document,
not using heading styles properly, or not attaching image descriptions to your images.
Once you have created an accessible document,
you can not only export it accessibly to PDF as normal,
but through the AEGIS-developed odt2Daisy plugin,
you can immediately create a digital audio talking book,
or with the odt2Braille plugin,
you can create and emboss a Braille book from your document.
We also have concept coding support built-in as another plugin to OpenOffice and LibreOffice,
allowing people with language and learning impairments to use symbol support
in creating documents.
They can either create the text by choosing symbols from a symbol keyboard,
or if they’re writing text from the keyboard using normal letters,
symbols will appear every time they finish a word
to help support them if they’re not sure what the word means.
This is also very useful for practitioners who want to create symbol materials for people.
They can type the English text and it will do an automatic symbol lookup,
so that they can get the symbols for creating a symbol communication game.
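That word-by-word support can be pictured as a dictionary lookup triggered at each word boundary. A toy sketch with a hypothetical symbol table:

```typescript
// Toy word-to-symbol lookup: when the user finishes a word (space or
// punctuation), look it up and show its symbol next to the text.
const symbolTable: Record<string, string> = {
  dog: "dog.svg",     // hypothetical symbol assets
  house: "house.svg",
};

function onTextChanged(text: string, showSymbol: (file: string) => void): void {
  const match = text.match(/(\w+)[\s.,!?]$/); // a word was just completed
  if (!match) return;
  const symbol = symbolTable[match[1].toLowerCase()];
  if (symbol) showSymbol(symbol); // support the user with the word's symbol
}

onTextChanged("the dog ", (f) => console.log(`show ${f}`)); // show dog.svg
```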
We’ve also made a number of other notable contributions to the community.
We have made a number of significant improvements
to the open source desktop for Linux and Solaris, GNOME.
We improved the underlying interprocess communication mechanism
for accessibility information in GNOME,
which is now built into GNOME 3.
It enables accessible environments based on Linux to run on much smaller devices,
with much less memory and processor capability than before.
We created an incredible magnification framework for the open source GNOME environment,
and then used this framework to build a magnifier.
So I’ll move the preferences over here.
Currently the magnifier is off.
So we're going to turn it on, and we have magnification of the bottom half of the screen with a factor of 2.
Now we’re going to make this window a movable lens,
so this is a bit like moving a magnifying glass over the screen.
This magnifier is now part of GNOME 3.
It has been shipping for over a year now,
and it is being used by low-vision users throughout Europe.
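The movable-lens behavior can be sketched with a canvas that redraws a scaled copy of the region under the pointer. This is a browser-based illustration of the idea only, nothing like the actual GNOME magnifier code.

```typescript
// A movable-lens magnifier over an image: draw a 2x scaled copy of the
// region under the mouse into a small canvas that follows the pointer.
// Assumes the canvas is absolutely positioned on top of the page.
const FACTOR = 2;
const LENS = 150; // lens size in CSS pixels

function attachLens(image: HTMLImageElement, lens: HTMLCanvasElement): void {
  lens.width = LENS;
  lens.height = LENS;
  const ctx = lens.getContext("2d")!;

  image.addEventListener("mousemove", (e: MouseEvent) => {
    const src = LENS / FACTOR; // source region shrinks as the factor grows
    // Center the source region on the cursor and scale it up into the lens.
    ctx.drawImage(
      image,
      e.offsetX - src / 2, e.offsetY - src / 2, src, src, // source rect
      0, 0, LENS, LENS                                    // destination rect
    );
    // Move the lens window so it rides along with the pointer.
    lens.style.left = `${e.pageX + 16}px`;
    lens.style.top = `${e.pageY + 16}px`;
  });
}
```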
We’ve made a number of improvements to the open source eSpeak text-to-speech library,
improving the pronunciation of Spanish, of Greek,
and of a number of other European languages.
We built an accessibility regression-testing infrastructure for GNOME
to help find places
where accessibility may have regressed from where it was before,
or to test new GNOME applications to make sure they are properly
implementing the accessibility APIs and framework.
We created a rich set of Creative Commons licensed personas,
which are being adopted in accessibility training courses
and by developers who want to understand
the use-cases and personas of various people with disabilities
in order to better create accessible applications.
And many, many more things…