Friday, September 19, 2014

Medicare Coverage of Speech Generating Devices (SGD)

The Centers for Medicare and Medicaid Services (CMS) has issued a number of different policy changes regarding coverage for speech generating devices (SGDs) that have created some confusion and raised significant concerns within the ALS community. The ALS Association’s Public Policy Department has worked on these issues since they first arose and continues to advocate to ensure that people with ALS have access to communications devices that are so vital to living with this disease. The issues are summarized below and include what action The Association is taking and what people with ALS can do to help.

Capped Rental:

Beginning on April 1, 2014, Medicare is changing how it pays for SGDs, switching to a system called “capped rental.” Since 2001, people with ALS have always had the option of renting SGDs; however, the overwhelming majority purchase them. Beginning on April 1, people no longer will have that option. Instead, they will be required to rent the device over a 13-month period. During the capped rental period, people with ALS will be contacted each month by the manufacturer to ask whether the SGD will be needed during the next month. As long as the answer is yes, the patient can keep the device. After the 13th month, these monthly questions about further use will stop and the patient will own the device. The payment system change does not apply to anyone on Medicare who currently owns an SGD. Capped rental also does not affect which devices Medicare will cover, or the evaluation and documentation required to support Medicare coverage.
The ALS Association has been actively engaged on this issue since CMS first proposed the change in the summer of 2013. We have submitted formal comments to CMS and have partnered with other organizations, industry, and Members of Congress who share our concerns to urge CMS not to make this change. Our concerns are outlined in the comments to CMS, available here: http://bit.ly/1h2e2X8. They include:
  • Access: If people have an extended hospital stay or are in hospice while they are in the rental period, Medicare will not cover the rental fees. Instead, the device could be returned to the manufacturer while the patient would have to obtain a new one from the hospital or hospice or pay the entire monthly rental fee out-of-pocket.
  • Cost: People who rent SGDs for the full 13-month rental period will pay 5% more out of pocket than if they had purchased the device up front.
Most recently, several Members of Congress sent a letter (http://bit.ly/1lX98PA) to CMS urging the agency to: 1) delay the implementation date; 2) reevaluate the data on which the decision was based (CMS relied on 1987 claims data); and 3) meet with stakeholders. A number of meetings have taken place with CMS, and several outcomes are possible. They include:
  • Full implementation on April 1
  • Delayed implementation
  • Grandfathering certain DME introduced to the market after 1987
  • Grandfathering all SGDs, since the devices are not intended for short-term use
The ALS Association continues to partner with other organizations to oppose the switch to capped rental and we will continue to strongly communicate that message to CMS. As the regulatory process moves forward, we will provide additional information and will alert the ALS community if grassroots action is needed. If CMS does not act to address our concerns, The Association will pursue other options, potentially including legislation, to ensure people with ALS have access to SGDs that play a critical role in their lives.
In the meantime, if you or someone you know experiences any difficulty accessing SGDs or other Durable Medical Equipment, such as power wheelchairs or accessories, please contact your local ALS Association Chapter immediately. Chapter contact information is available here: http://www.alsa.org/community/. If you do not have a local Chapter, please contact The ALS Association’s Public Policy Department at advocacy@alsa-national.org. The Association will actively provide assistance to anyone experiencing difficulties accessing these devices and also will share these difficulties with CMS officials and Members of Congress so that they fully understand how policy changes impact people with ALS and why they must change.

Dedicated Devices

On February 27, 2014, CMS issued a guideline titled a "coverage reminder" that addresses the types of SGDs and the features of the devices that Medicare will cover. This guideline raises several questions about the features of currently available SGDs, and about the temporary "locking" or "dedication" practice that has been in place since 2001 for computer-based devices. Under current practice, non-medical applications, such as email and word processing software, are “locked” on computer-based devices because Medicare will not cover those applications. However, people with ALS subsequently may have the manufacturer “unlock” these additional features by paying a fee, which again is not covered by Medicare.
Both the wording of the document and the manner in which it was issued (coverage reminder as opposed to amending coverage policy) make it unclear exactly what implications it has for SGD coverage now and in the future. There is some speculation that this guideline would end coverage for SGDs or disallow coverage for devices that include non-medical applications such as word processing software, regardless of whether those applications are “locked.” At this time it is just speculation and it is not clear exactly how the guideline impacts current coverage policy. However, it is clear that this guideline does not end Medicare coverage of SGDs and it does not end coverage for computer-based devices, which have been available to Medicare recipients since May 2001. It also does not change the evaluation or documentation required to support Medicare coverage.
As with “capped rental,” The Association is working with other organizations and with industry partners who share our concerns about the potential implications of the guideline. Together we will be working with CMS to clarify the meaning of the guideline and to ensure that people with ALS will continue to have access to SGDs, including computer-based devices. We also will keep the ALS community updated as the regulatory process moves forward and will alert the community if action is necessary.
In the meantime, if you or someone you know experiences any difficulty accessing SGDs, including computer-based devices, please contact your local ALS Association Chapter immediately. Chapter contact information is available here: http://www.alsa.org/community/. If you do not have a local Chapter, please contact The ALS Association’s Public Policy Department at advocacy@alsa-national.org. The Association will actively provide assistance to anyone experiencing difficulties accessing these devices and also will share these difficulties with CMS officials and Members of Congress so that they fully understand how policy changes impact people with ALS and why they must change.
If you have any questions about these issues or would like additional information, please contact the Public Policy Department at advocacy@alsa-national.org.

Tuesday, July 22, 2014

KSU BrainLab develops BCI for Google Glass, aims to improve the quality of life for locked-in people



Without doubt, Google Glass has the unique potential to form the basis for a new generation of portable brain-computer interfaces. Now Neurogadget has the honour to introduce one of the first Google Glass Explorers who has been using Glass in brain-computer interface research.
Adriane Randolph, executive director of Kennesaw State University’s BrainLab, together with her team, has developed a working prototype that takes input from an evoked brain response to trigger the four basic interface commands for Google Glass: swipe left, swipe right, swipe down, and tap to select.
While this isn’t the first time we’ve heard of Google Glass being used for BCI purposes, there are significant differences between BrainLab’s work and other similar projects, notably This Place’s MindRDR application, which uses NeuroSky’s MindWave to let users take photos and share them on Facebook just by thinking.


According to Adriane Randolph, “both the MindRDR app and our system currently use a separate bioamplification system to capture and read brainwaves and transmit feedback to an application on Google Glass. Where the MindRDR appears to be using a continuous brainwave such as alpha according to the placement of the sensor and description, we are using an evoked response called the P300. With this ‘aha’ response, we are instead able to overlay several different commands to control Glass. Thus, a user will be able to control more than taking a picture but instead access all of the functionality of Glass.”
P300
The user is presented with a string of characters from which he/she must select and attend to one. The characters flash in a randomized pattern. When the character the user desires flashes, the brain produces a neural response approximately 300 milliseconds later, called a P300. The computer detects this response and makes the selection.
In other words, “while the MindRDR allows the user to take pictures while thinking, the BrainLab has been developing with Glass to completely control the user experience of Glass with the user’s brain. That would be the main difference besides the BrainLab’s project being a long-term research-based project”, adds Josh Pate, BrainLab associate.
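
To make the P300 selection scheme described above concrete, here is a minimal sketch in Python of selection by epoch averaging. It is not the BrainLab code: the sampling rate, the 250-400 ms scoring window, the synthetic data, and the use of the four Glass commands as selection targets are all assumptions made for illustration. The idea is simply that, averaged over repeated flashes, the epochs following the attended item show a larger positive deflection around 300 ms than the others.

```python
# Minimal P300-style selection sketch (illustrative only, not the BrainLab code).
import numpy as np

FS = 250                                          # assumed EEG sampling rate (Hz)
WINDOW = slice(int(0.25 * FS), int(0.40 * FS))    # score the 250-400 ms post-flash window

def select_command(epochs_by_command):
    """epochs_by_command maps command -> array (n_flashes, n_samples),
    each row an EEG epoch time-locked to one flash of that command."""
    scores = {}
    for command, epochs in epochs_by_command.items():
        avg = epochs.mean(axis=0)             # averaging suppresses background EEG
        scores[command] = avg[WINDOW].mean()  # mean amplitude in the P300 window
    return max(scores, key=scores.get)

# Toy usage with synthetic data: "tap" gets an added positive deflection near 300 ms.
rng = np.random.default_rng(0)
commands = ["swipe_left", "swipe_right", "swipe_down", "tap"]
epochs = {c: rng.normal(0.0, 5.0, size=(15, FS)) for c in commands}
t = np.arange(FS) / FS
epochs["tap"] += 8.0 * np.exp(-((t - 0.3) ** 2) / 0.002)  # simulated P300 on "tap"
print(select_command(epochs))  # expected output: tap
```

In a real system the epochs would come from the separate bioamplification system mentioned above, and a trained classifier would normally replace the simple window average.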

Desktop to Mobile

Last summer, Randolph was selected to pilot the wearable technology device, Google Glass. She had big plans for her new accessory beyond its everyday use for checking email, taking photos and surfing the web. She intended to expand her BCI research to a mobile platform.
Within a few months, another key member of her research team was outfitted with Google Glass and their study took a new turn. The wireless platform opened new possibilities in working with those with limited physical capabilities.


Instead of nodding, swiping or talking to give commands to Google Glass, the research team developed a method for controlling a mobile device using only brain waves.
“We believe this is the first working prototype designed for the Google Glass platform. We know that selection-type commands exist using neural input, but we had to figure out how to use that in Google Glass in a way that benefits our research,” Randolph said. “We chose evoked responses which are like an ‘aha’ response that we record as surface EEGs as input signals.”


Randolph also told Neurogadget that “BrainLab shares This Place’s excitement that Google Glass holds tremendous possibilities for people living locked-in to their bodies, but who are otherwise cognitively intact. We also recognize Google’s technically accurate statement that “Google Glass cannot read your mind” from the perspective that Glass is not doing the actual EEG-recording and filtering needed to interpret brainwaves. However, as a small computer, Glass is taking the results of this separate processing and using it as input to control embedded apps. The real distinction is in how seamlessly the brainwave processing and feedback to an interface can be implemented. Certainly, in a similar vein of deflecting from Glass’ capabilities of facial recognition, it may not wish to stir up another hornet’s nest by extolling mind-reading capabilities.”


Besides being an enthusiastic Google Glass Explorer, Adriane Randolph has been researching brain-computer interfaces for twelve years and received her PhD in Computer Information Systems from Georgia State University. She has directed the KSU BrainLab since its founding in 2007 with the hope of improving the quality of life for people with severe motor disabilities.
More info: http://coles.kennesaw.edu/brainlab

Thursday, May 22, 2014

Phrase archive restores lost voices

Phrase archive restores lost voices

Staff Writer 



“I bake sweet-chestnut bread,” a volunteer says into a microphone.


“I no longer understand what’s going on,” she carefully reads out next.

The volunteer, Kotobuki Hayashi, 56, is reading short lines of text popping up on a computer screen in front of her. The phrases have been taken randomly from newspapers and books. Studio staff check for any misreads.


In an hourlong session, Hayashi gets through about 150 phrases that will be used to create synthesized voices for people with amyotrophic lateral sclerosis, also known as ALS or Lou Gehrig’s disease, who can no longer speak.


“I thought it would be great if I could help those people simply by recording my voice,” Hayashi said. “Also, it’s exciting to imagine that fragments of my speech will be used to reconstruct voices.”
Hayashi is one of around 200 volunteers who have participated in a so-called voice bank project that kicked off in November last year. The goal is to reconstruct the voices of ALS sufferers by creating synthetic ones using an archive of other people’s voices.


The technology was developed by a team of researchers led by Junichi Yamagishi, an associate professor at the National Institute of Informatics who specializes in speech synthesis.


ALS is a progressive neurological disease that attacks the nervous system and paralyzes muscles. As it develops, patients lose the ability to speak, and in some cases can lose their voices within six months of diagnosis, experts say. One widely known ALS sufferer is physicist Stephen Hawking.


According to the Japan Intractable Diseases Information Center, there are some 9,000 people with ALS in Japan. Many communicate by typing into a personal computer or tablet PC by using whatever muscles they still have, and having a synthesized voice read it out loud. But the voice sounds impersonal and robotic.


“Patients very much needed to communicate in their own natural voice. But no such system existed that could provide a personalized synthesized voice for them,” Yamagishi said in a recent interview in Tokyo with The Japan Times.


Yamagishi and his team set out to re-create patients’ own voices, initially in trials at Britain’s Edinburgh University in 2011.


The project has been running for three years and has seen about 600 volunteers take part, with 10 patients using the software. It is considered to be in its evaluation phase.


In Japan, the project is still in its initial phase and Yamagishi needs to collect as many voices as possible. The recording is done at rented studios in Tokyo, Osaka and Nagoya.
He anticipates it will take one or two years to develop a synthesizer but thinks it could help people with ALS and possibly other disorders.


Yamagishi’s system analyzes the recordings, processing them by using statistical models of the components of speech, and produces a basic voice model for each age group, sex and dialect. This model then serves as the framework for synthesizing the patient’s voice.


“It’s like transplanting part of the volunteers’ voices,” he said. “We find donors whose background matches the patients’ voices, such as in terms of age and home town. We then transplant elements of the donors’ voices, such as the speed with which they move their tongues,” to reconstruct the patients’ voices.
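
As a rough illustration of the donor-matching step Yamagishi describes, the sketch below ranks voice-bank volunteers by how closely their recorded metadata match a patient. It is a hedged, assumption-laden example rather than the project's software; the field names, weights, and example speakers are invented.

```python
# Illustrative donor matching by metadata (not the voice bank's actual code).
from dataclasses import dataclass

@dataclass
class Speaker:
    name: str
    age: int
    sex: str
    region: str   # stands in for "home town" / dialect

def match_score(donor: Speaker, patient: Speaker) -> float:
    score = 0.0
    score += 2.0 if donor.region == patient.region else 0.0       # dialect weighted highest
    score += 1.0 if donor.sex == patient.sex else 0.0
    score += max(0.0, 1.0 - abs(donor.age - patient.age) / 20.0)  # closer in age scores higher
    return score

def best_donors(donors, patient, k=3):
    return sorted(donors, key=lambda d: match_score(d, patient), reverse=True)[:k]

# Toy usage with invented speakers.
patient = Speaker("patient", 56, "F", "Kansai")
donors = [Speaker("d1", 54, "F", "Kansai"),
          Speaker("d2", 30, "M", "Kanto"),
          Speaker("d3", 60, "F", "Kanto")]
print([d.name for d in best_donors(donors, patient)])  # d1 ranks first
```

The selected donors' recordings would then feed the statistical voice model described above, which is adapted toward the patient's own short sample.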


Some companies in Japan already conduct personalized voice synthesis by cutting and pasting recordings of the patients’ own voices. However, this requires hours of recording and is physically impossible for some ALS patients or for those who are already mute.


Yamagishi’s technology requires a 5-minute recording of a patient’s voice. Even if some of the words cannot be pronounced, the system can draw on examples from volunteers to guess at the patient’s original pronunciation.


It also helps to hear the voices of the patients’ siblings, Yamagishi said, since close relatives often have a similar accent or tone.


“There is no particular cure for ALS and it’s really hard for their families to see (their loved ones) develop the illness,” Yamagishi said. It’s important for families to have something to help improve the quality of the patients’ lives even a little, and the voice bank may be one of those things, he said.
“Now we are conducting a large-scale demonstration experiment . . . I want many volunteers from all the regions across Japan,” he said.

Friday, March 14, 2014

Reading Brains

By Erica Klarreich
Communications of the ACM, Vol. 57 No. 3, Pages 12-14
10.1145/2567649





A patient wears a cap studded with electrodes during a demonstration of a noninvasive brain-machine interface by the Swiss Federal Institute of Technology of Lausanne in January 2013.
Credit: Fabrice Coffrini / AFP / Getty Images



Mind reading has traditionally been the domain of mystics and science fiction writers. Increasingly, however, it is becoming the province of serious science.
A new study from the laboratory of Marcel van Gerven of Radboud University Nijmegen in the Netherlands demonstrates it is possible to figure out what people are looking at by scanning their brains. When volunteers looked at handwritten letters, a computer model was able to produce fuzzy images of the letters they were seeing, based only on the volunteers' brain activity.
The new work—which builds on an earlier mathematical model by Bertrand Thirion of the Institute for Research in Computer Science and Control in Gif-sur-Yvette, France—establishes a simple, elegant brain-decoding algorithm, says Jack Gallant, a neuroscientist at the University of California, Berkeley. Such decoding algorithms eventually could be used to create more sophisticated brain-machine interfaces, he says, to allow neurologically impaired people to manipulate computers and machinery with their thoughts.
As technology improves, Gallant predicts, it eventually will be possible to use this type of algorithm to decode thoughts and visualizations, and perhaps even dreams. "I believe that eventually, there will be something like a radar gun that you can just point at someone's head to decode their mental state," he says. "We have the mathematical framework we need, for the most part, so the only major limitation is how well we can measure brain activity."


A Simple Model

In the new study, slated to appear in an upcoming issue of the journal Neuroimage, volunteers looked at handwritten copies of the letters B, R, A, I, N, and S, while a functional magnetic resonance imaging (fMRI) machine measured the responses of their primary visual cortex (V1), the brain region that does the initial, low-level processing of visual information. The research team then used this fMRI data to train a Bayesian computer model to read the volunteers' minds when they were presented with new instances of the six letters.
"It's a very elegant study," says Thomas Naselaris, a neuroscientist at The Medical University of South Carolina in Charleston.
According to Bayes' Law, to reconstruct the handwritten image most likely to have produced a particular pattern of brain activity, it is necessary to know two things about each candidate image: the "forward model," the probability that the candidate image would produce that particular brain pattern; and the "prior," the probability of that particular image cropping up in a collection of handwritten letters. Whichever candidate image maximizes the product of these two probabilities is the most likely image for the person to have seen.
To create the forward model, the research team showed volunteers hundreds of different handwritten images of the six letters while measuring their brain activity, then used machine-learning techniques to model the most likely brain patterns that any new image would produce. To construct the prior, the team again set machine learning algorithms to work on 700 additional copies of each letter, to produce a model of the most likely arrangements of pixels when people write a letter by hand. Both models used simple linear Gaussian probability distributions, making brain decoding into a straightforward calculation, van Gerven says.
"We've shown that simple mathematical models can get good reconstructions," he says.
The research team also experimented with limiting the model's prior knowledge of the world of handwritten letters. If the model's prior information consisted only of images of the letters R, A, I, N, and S, for example, it could still produce decent reconstructions of the letter B, though not as good as when the prior included images of all six letters. The results, van Gerven says, demonstrate the decoding algorithm's ability to generalize—to reconstruct types of letters it has never "seen" before.
The human brain is, of course, the master of this kind of generalization, and this ability goes much farther than simple reconstruction of unfamiliar images. "The visual system can do something no robot can do," Naselaris says. "It can walk into a room filled with things it has never seen before and identify each thing and understand the meaning of it all."
While van Gerven's paper deals only with reconstructing the image a person has seen, other researchers have taken first steps toward deciphering the meanings a brain attaches to visual stimuli. For example, Gallant's group (including Naselaris, formerly a postdoc at Berkeley) has combined data from V1 and higher-order visual processing regions to reconstruct both the image a person has seen and the brain's interpretation of the objects in the image. More recently, in partially unpublished work, the team has done the same thing for movies, instead of still images.
"We are starting to build a repertoire of models that can predict what is going on in higher levels of the vision hierarchy, where object recognition is taking place," Naselaris says.
Other researchers are working on reading a brain's thoughts as it responds to verbal stimuli. For example, in 2010, the laboratory of Tom Mitchell at Carnegie Mellon University in Pittsburgh developed a model that could reconstruct which noun a person was reading. Van Gerven's lab is currently working on decoding the concepts volunteers consider as they listen to a children's story while inside an fMRI scanner.


Probing Thoughts

Most mind-reading research to date has focused on reconstructing the external stimuli creating a particular pattern of brain activity. A natural question is whether brain-decoding algorithms can make the leap to reconstructing a person's private thoughts and visualizations, in the absence of any specific stimulus.
The answer depends on the extent to which, for example, the brain processes mental images and real images in the same way. "The hypothesis is that perception and imagery activate the same brain regions in similar ways," van Gerven says. "There have been hints that this is largely the case, but we are not there yet."
If, Naselaris says, "highly visual processes get evoked when you are just reasoning through something—planning your day, say—then it should be possible to develop sensitive probes of internal thoughts and do something very much like mind-reading just from knowing how V1 works," he says. "But that is a big 'if.' "
Even if mind-reading turns out not to be as simple as decoding V1, Naselaris predicts that as neuroscientists develop forward models of the brain's higher-level processing regions, the decoding models will almost certainly provide a portal into people's thoughts. "I don't think there is anything that futuristic about the idea that in five to 20 years, we will be able to make pictures of what people are thinking about, or transcribe words that people are saying to themselves," he says.
What may prove more difficult, Gallant says, is digging up a person's distant memories or unconscious associations. "If I ask you the name of your first-grade teacher, you can probably remember it, but we do not understand how that is being stored or represented," he says. "For the immediate future, we will only be able to decode the active stuff you're thinking about right now."
Dream decoding is likely to prove another major challenge, Naselaris says. "There is so much we don't understand about sleep," he says. "Decoding dreams is way out in the future; that's my guess."
Part of the problem is that with dreams, "you never have ground truth," Gallant says. When it comes to building a model of waking thoughts or visions, it is always possible to ask the person what he or she is thinking, or to directly control the stimuli the person's brain is receiving, but dreams have no reality check.
The main option available to researchers, therefore, is to build models for reconstructing movies, and then treat a dream as if it were a movie playing in the person's mind. "That is not a valid model, but we use it anyway," Gallant says. "It is not going to be very accurate, but since we have no accuracy now, having lousy accuracy is better than nothing."
In May 2013, a team led by Yukiyasu Kamitani of ATR Computational Neuroscience Laboratories in Kyoto, Japan, published a study in which they used fMRI data to reconstruct the categories of visual objects people experienced during hypnagogic dreams, the ones that occur as a person drifts into sleep. "They are not real dreams, but it is a proof of concept that it should be possible to decode dreams," Gallant says.


Protecting Privacy

The dystopian future Gallant pictures, in which we could read each other's private thoughts using something like a radar gun, is not going to happen any time soon. For now, the best tool researchers have at their disposal, fMRI, is at best a blunt instrument; instead of measuring neuronal responses directly, it can only detect blood flow in the brain—which Gallant calls "the echoes of neural activity." The resulting reconstructions are vague shadows of the original stimuli.
What is more, fMRI-based mind reading is expensive, low-resolution, and the opposite of portable. It is also easily thwarted. "If I did not want my mind read, I could prevent it," Naselaris says. "It is easy to generate noisy signals in an MRI; you can just move your head, blink, think about other things, or go to sleep."
These limitations also make fMRI an ineffective tool for most kinds of brain-machine interfaces. It is conceivable fMRI could eventually be used to allow doctors to read the thoughts of patients who are not able to speak, Gallant says, but most applications of brain-machine interfaces require a much more portable technology than fMRI.


However, given the extraordinary pace at which technology moves, some more effective tool will replace fMRI before too long, Gallant predicts. When that happens, the brain decoding algorithms developed by Thirion, van Gerven, and others should plug right into the new technology, Gallant says. "The math is pretty much the same framework, no matter how we measure brain activity," he says.
Despite the potential benefit to patients who need brain-machine interfaces, Gallant is concerned by the thought of a portable mind-reading technology. "It is pretty scary, but it is going to happen," he says. "We need to come up with privacy guidelines now, before it comes online."


Further Reading

Horikawa, T., Tamaki, M., Miyawaki, Y., Kamitani, Y.
Neural Decoding of Visual Imagery During Sleep, Science Vol. 340 No. 6132, 639–642, 3 May 2013.
Kay, K. N., Naselaris, T., Prenger, R. J., Gallant, J. L.
Identifying Natural Images from Human Brain Activity, Nature 452, 352–355, March 20, 2008. http://www.ncbi.nlm.nih.gov/pubmed/18322462
Mitchell, T., Shinkareva, S., Carlson, A., Chang, K-M., Malave, V., Mason, R., Just, M.
Predicting Human Brain Activity Associated with the Meanings of Nouns, Science Vol. 320 No. 5880, 1191–1195, 30 May 2008. http://www.sciencemag.org/content/320/5880/1191
Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., Gallant, J. L.
Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies, Current Biology Vol. 21 Issue 19, 1641–1646, 11 October 2011. http://www.sciencedirect.com/science/article/pii/S0960982211009377
Schoenmakers, S., Barth, M., Heskes, T., van Gerven, M.
Linear Reconstruction of Perceived Images from Human Brain Activity, Neuroimage 83, 951–961, December 2013. http://www.ncbi.nlm.nih.gov/pubmed/23886984
Thirion, B., Duchesnay, E., Hubbard, E., Dubois, J., Poline, J. B., Lebihan, D., Dehaene, S.
Inverse Retinotopy: Inferring the Visual Content of Images from Brain Activation Patterns, Neuroimage 33, 1104–1116, December 2006. http://www.ncbi.nlm.nih.gov/pubmed/17029988


Author

Erica Klarreich is a mathematics and science journalist based in Berkeley, CA.




Wednesday, February 26, 2014

32 and 16 Years Ago - Computer Help




From: IEEE Computer - January 2014 - page 16

 

Personal computers aid the handicapped

From: IEEE Computer - January 1982

By: Ware Myers - Computer staff

 

"The personal computer provides a new kind of leverage for bringing aid to the handicapped," declared Paul L. Hazan, director of the First National Search for Applications of Personal Computing to Aid the Handicapped. The search was conducted by the Applied Physics Laboratory of the Johns Hopkins University with funding provided by the National Science Foundation and Radio Shack, a division of Tandy Corporation. The Computer Society was a program associate of the effort.

 

"Over the years a great deal of worthy research has gone on," Hazan continued. Unfortunately the end result of much of the earlier research in the field was costly special-purpose equipment that sometimes ran as much as

$70,000 to $100,000 per individual helped. Consequently, it was difficult to find funds to get the products into the marketplace. Moreover, continued maintenance of special-purpose equipment was difficult and expensive. The final payoff - the number of handicapped helped - was therefore limited.

 

Personal computer leverage. The advent of mass marketed and reasonably priced computers brings with it the potential for change in the existing situation, Hazan pointed out. He mentioned that although it has long been recognized that computers extend an individual's mental reach, in the case of the handicapped (with restricted physical capabilities), the possibility also exists to extend the physical reach of this group of users.

 

If the personal computer can be brought to bear on this problem, there are a number of built-in advantages. First, it is now low enough in cost for the handicapped themselves, or their families and friends, to afford; alternatively, in the workplace an employer can finance it for a potential employee. This makes a large, centrally financed support program unnecessary.

 

Secondly, the infrastructure for the application of personal computers already exists. There is a nationwide - even worldwide - network of dealers, maintenance, and training, and arrangements for the distribution of programs are growing.

 

Finally, personal computers to aid the handicapped constitute a significant business opportunity, both for the makers and marketers of personal computers and for those who construct and sell peripherals and input/output devices.

Given 20 million handicapped in the United States (a conservative estimate), Hazan calculates that if only two percent of them acquire a personal computer, they would create a potential market of 400,000 buyers - a figure in the same ball park as the total number of personal computers sold to date.

Assuming that the average price for the units is $2000, including peripherals, input/output devices, and programs, the actual dollar value of this market is $800,000,000. "Enough for industry to pay attention," Hazan noted. And the two percent market is just a guess. No doubt it will ultimately be much more. The point is that while the personal computer may be just a hobby for the able-bodied, with the proper applications it can become a necessity for people with a variety of disabilities. 

 

The search for applications. The First National Search, announced in November 1980, was an effort to bring grassroots initiatives to bear on the task of finding a variety of methods to apply the personal computer to the needs of the handicapped. It was highlighted by a national competition for ideas, devices, methods, and computer programs to help handicapped people overcome difficulties in learning, working, and successfully adapting to home and community settings.

 

In the spring, orientation workshops were held at major rehabilitation centers throughout the United States to bring together potential "inventors," handicapped people, and professionals in the educational, technical, and rehabilitation fields. Over 900 entries were received by the June 30, 1981 deadline.

 

In August regional exhibits were held in ten cities - Boston, New York, Baltimore, Atlanta, Chicago, Houston, Kansas City, Denver, San Francisco, and Seattle. Awards were made to over 100 regional winners. From the pool of regional winners a national panel of judges selected 30 entrants to exhibit their work in Washington, DC. Of these, 28 made it to the Great Hall of the National Academy of Sciences on October 31 and November 1, attracting substantial numbers of the handicapped and those who work with them, as well as three or four television news crews. One of the reporters, himself blind, represented National Public Radio. 

 

The next day the winners were honored at a banquet in the Mayflower Hotel. This banquet was also attended by government and industry representatives with an interest in the subject. At the dinner the three top-place winners and seven honorable mention recipients were named (see photos and box).

 

During the following two days the 28 winners explained their developments at a workshop held at the Applied Physics Laboratory, near Washington. Proceedings of this conference containing almost 100 papers - all the regional and national winners - are available from the Computer Society.

 

What next? The Applied Physics Laboratory has a National Science Foundation grant to study the feasibility of setting up a data base to hold application programs for the handicapped. The search turned up a number of excellent programs and some means of making them available to handicapped users is needed. If the idea is feasible and funding becomes available, a potential user could dial up the data base, select programs of interest from a menu, view a demonstration of the program he selects, and ultimately download it into his own equipment.

 

The First National Search is now history and the word first implies a second. There seems to be general agreement that the making of inventions is too time-consuming for an annual search to be practical. The receipt of inquiries from 19 countries also suggests that something more than "national" is needed. Hazan expects another search to follow, but there is much work to be done and funds to be raised before it can be launched.

 

Photo Caption:

Lewis F. Kornfeld (left), retired president of Radio Shack, presents the first prize of $10,000 to Harry Levitt of the City University of New York for his Portable Telecommunicator for the Deaf. Levitt programmed a TRS-80 pocket computer to send and receive messages over the telephone via a TRS interface, enabling the deaf to communicate with each other or with their normal-hearing friends.

 

The other award winners were

Second Prize ($3000): Mark Friedman, Mark Dzmura, Gary Kiliany, and Drew Anderson - Eye Tracker

 

Third Prize ($1500): Robin L. Hight - Lip Reader Trainer

 

Honorable Mention Awards ($500):

Joseph T. Cohn - Augmentative Communication Devices
Randy W. Dipner - Micro-Braille System
Sandra J. Jackson - Programs for Learning Disabled
David L. Jaffe - Ultrasonic Head Control for Wheelchair
Raymond Kurzweil - Reading Machine for Blind
Paul F. Schwejda - Firmware Card and Training Disk
Robert E. Stepp III - Braille Word Processor

 


(IEEE membership required)

For Those Unable To Talk, A Machine That Speaks Their Voice


From KPLU

             

Carl Moore, a former helicopter mechanic, was diagnosed with ALS 20 years ago. He has had unusual longevity for someone with ALS but expects someday to rely on his wheelchair and speech-generating device.

It's hard to imagine a more devastating diagnosis than ALS, also called Lou Gehrig's disease. For most people, it means their nervous system is going to deteriorate until their body is completely immobile. That also means they'll lose their ability to speak.


So Carl Moore of Kent, Wash., worked with a speech pathologist to record his own voice to use later — when he can no longer talk on his own.


Most ALS patients live only a few years after diagnosis, but Moore, a former helicopter mechanic, is the exception — he was diagnosed 20 years ago. At the beginning, he lost use of his hands, and it wasn't until years later that he found that the symptoms were affecting his speech.


Carl Moore shows some of the phrases he's recorded in his own voice and stored on his speech-generating device.


"You can hear my three-shots-of-tequila speech," he says. "And it does get worse as I get tired."
So several years ago, before that slur crept in, he recorded hundreds of messages and uploaded them to the speech device he'll someday rely on. The machine looks like a chunky tablet computer, and it would normally sound like a robot. But now, instead, it will sound like Moore.


"It's almost like preserving a piece of yourself," he says. "I've taken auditory pictures of who I am."
Moore's banked messages range from the practical ("I feel tired") to the absurd ("You know what? Your driving sucks") and somewhere in the middle ("Hey, my butt itches. Would you give it a bit of a scratch?").

Moore is kind of a snarky guy — some of his messages can't be played in decent company. It's a part of his personality that he's rescuing from the disease.


And it's not just for his own benefit. Message banking also helps his caregiver: his wife, Merilyn.
"If it's a computer voice, I think it's harsh," she says, "whereas if it's his own voice, I can feel like he's actually speaking those words."


John Costello, a speech pathologist at Boston Children's Hospital, is credited with inventing the clinical use of voice banking. He says it can make a big difference in people's quality of life.
"If you wanted to say something like, 'You're the love of my life,' having that in synthetic speech is devastating," Costello says.


One patient's wife, he says, contacted him shortly after her husband's death. "She wrote to me that the work that we did was the only bright forward movement. Everything was about loss, except the possibility of communication."


"It gives the patient something to do when they have no control over the disease," she says.
Yet for all its benefits, in Kelley's clinic, only a fraction of patients actually do it.


"The ones that don't do it can't deal with it," she says. "They don't want to think about using an electronic piece of equipment to talk. So most of them nod, smile and do nothing."


Heartbreakingly, many come back hoping to record their voices after it's too late.


Carl, on the other hand, brings a mechanic's pragmatism to the project, and he's clearly having some fun too. Besides letting him razz Merilyn for years to come, the recordings will become an archive for her.


"I see this also as a legacy, which will feel like his presence with me even after he's gone," she says.
So Merilyn wants to make sure Carl has banked the really important things — which raises a question: Where, among the witty barbs and the practical lines, were the messages of tenderness, of intimacy?


"My conversations are mostly sarcastic," he says. "She asked me before we left if I had the phrase 'I love you,' and I realized I didn't."


He says he'll make more recordings at some point — sooner rather than later. The trouble, he says, is his voice has already gone downhill.


"We'll see how it works out. I'm not comfortable with recording my voice as it is," he says.
"I think that it's important that we capture you as you are now," Merilyn says. "We love you as you are now just as much as eight years ago."


"So I will record, 'Yes, dear.' "


Later, Carl dug back through his hard drive and discovered that he had, indeed, recorded himself saying "I love you." He added it to the device that will someday speak for him.

Monday, February 24, 2014

The Eyes Have It-- Eye Gaze Technology

by David Harraway

Eye Gaze technology may provide some people with disabilities with effective access to required communication and computer control functions where other methods prove too difficult or inefficient for them.

Current commercially available systems consist of an eye tracking camera plus a software interface, which allows the person to bring their own computer hardware. Speech generating devices with integrated Eye Gaze units are also available.


Eye Gaze is an Assistive Technology (AT) area that has undergone significant advances in recent years. Two current leading systems available in Australia are Tobii PCEye Go (manufactured by Tobii Technologies, Sweden) and Inteligaze CAM30NT (from Alea Technologies, Germany). Both are sold with advanced mouse control interaction and basic popup onscreen keyboards. When coupled with additional specialized and fully integrated software, these systems can offer increased independence with required communication and computer control functions.


New mainstream eye tracking options are now available and have generated some interest online in Assistive Technology discussion forums. This interest has largely been due to the lower cost of the hardware. My Gaze is one of these and was shown at the Eurogamer Expo 2013 to positive reviews. Q4 2013 saw the arrival of the Eye Tribe tracker (theeyetribe.com). Just last month, Tobii featured the Tobii Eye X at the Las Vegas Consumer Electronics Show. Both Eye Tribe and Tobii Eye X are on the market as “developer release” and are available online for between US $100-200.
It needs to be stated clearly from the outset that the primary purpose of a developer release is to allow programmers to make their new and existing applications compatible with Eye Gaze access. These systems do not and should not be directly compared to fully integrated assistive technology solutions, as they lack the necessary features and setting adjustments that are normally required to make them work for a person with a disability. Both Tobii and Eye Tribe offer a software development kit (SDK) which allows developers to link the camera to their specific application.


Key differences between a ‘developer’ camera and a fully supported commercial AT system are noted here:
  1. Developer releases ship with minimal software. The Eye Tribe model, for example, offers an eye-controlled mouse option but does not provide the sophisticated computer interaction utilities available in Tobii’s Gaze Interaction or Alea’s Desktop 2.0.
  2. Both units ship with only limited support from the manufacturer. Eye Tribe have a user forum for developers and users to ask questions. At the time of writing there have been several questions asked and resolved.
  3. There are hardware and operating system requirements that must be satisfied. For example, the Eye Tribe tracker is USB 3 (usually a blue socket) connection only, and operating systems below Windows 7 are not supported. However, there is hope for other OS support. In 2013, Eye Tribe showed an earlier version of their system working on an Android tablet (YouTube), and their current promotional video demonstrates what appears to be Apple OSX functionality; the Eye Tribe site states that an OSX SDK will be available in April 2014.
  4. To purchase a developer kit, you must agree to develop for the product in the licensing agreement. I purchased my Eye Tribe tracker with the intent of learning to code basic Windows applications. Microsoft Visual Studio has a free version and there are tutorials online for learning the programming languages supported. The Eye Tribe developer forum has links to one or two applications already made eye gaze compatible (Fruit Ninja being one).
  5. Eye tracking cameras are known to be subject to environmental effects. Chief among these are the level and direction of light in the room. Other known factors to consider when investigating eye gaze access include: user movement (head control), visual and perceptual skills, and level of intentionality to the task.
This last sub-point is of particular interest among AT professionals and other team members, as at least some of the learning we have had in the past few years has been directed towards the possibility of using Eye Gaze systems with people who may present with conditions resulting in an Intellectual Disability. As some people with ID have significant communication challenges, the standard methods for setting up systems (instruct the person to track a calibration plot around the screen and fixate on it until it moves on) may not be relevant. However, software developers such as Sensory Software have made programs such as Look to Learn that provide a series of engaging activities which grade up from the most basic (glance past the target) to more complex (choice making and building sequences of actions). A demo version is available with sample activities to explore. As noted elsewhere on this blog, it is also possible to use Clicker 6 with Eye Gaze access, as the program includes dwell click interaction (look at a target and hold until the pre-set dwell time is achieved). Tobii’s Eye Gaze Learning Curve is an excellent model of how progression through this AT area might look, and provides resources (including video tutorials) and suggested activities.
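
Because dwell-click selection comes up repeatedly in eye gaze access, here is a minimal sketch of the idea, assuming a stream of (x, y) gaze samples: a click fires once gaze has stayed within a small radius of a target for a preset time. It is not the code used by Clicker 6, Dwell Clicker 2, or any vendor; the dwell time, radius, and sample format are assumptions.

```python
# Illustrative dwell-click detector for a stream of gaze samples (assumed values).
import math
import time

DWELL_TIME = 1.0   # seconds gaze must hold on the target
RADIUS = 40        # pixels of allowed jitter around the target

class DwellClicker:
    def __init__(self, target_xy):
        self.target = target_xy
        self.enter_time = None

    def update(self, gaze_xy, now=None):
        """Feed one gaze sample; returns True when a dwell click should fire."""
        now = time.monotonic() if now is None else now
        dx, dy = gaze_xy[0] - self.target[0], gaze_xy[1] - self.target[1]
        if math.hypot(dx, dy) <= RADIUS:
            if self.enter_time is None:
                self.enter_time = now           # gaze just arrived on the target
            if now - self.enter_time >= DWELL_TIME:
                self.enter_time = None          # reset so the click does not re-fire
                return True
        else:
            self.enter_time = None              # gaze left the target; restart the timer
        return False

# Toy usage with simulated timestamps instead of a live tracker.
clicker = DwellClicker(target_xy=(500, 300))
samples = [((502, 298), 0.0), ((505, 305), 0.5), ((498, 301), 1.1)]
print([clicker.update(xy, t) for xy, t in samples])  # [False, False, True]
```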

Review of Eye Tribe Development Kit

The kit ships with:

  • Eye Tribe eye tracker camera
  • 2m USB 3 ribbon cable
  • Small adjustable tripod stand
  • Instructions and link to download the software drivers and SDK
Software installation was simple and progressed without a hitch. It should be noted that the software does not appear to ship with an uninstaller utility.
Once the software was installed, the camera was plugged in and the software was run. On first run, the software presents a tutorial covering aspects such as positioning the eye tracker camera, along with an initial calibration. A calibration score is obtained (I received a perfect score) and then an interface screen with options is shown:
Eye Tribe interface screen with options


Options include:
  • API console (shows eye tracking events as real-time data)
  • Online Help (links to the Developer section of the Eye Tribe site)
  • Start in demo mode (shows setup screens at start up)
  • Mouse Gaze (enables eye tracking mouse function)
  • Mouse Smooth (reduces shudder effect in Eye Mouse)
A track status window displays whether the user's eyes are in the correct position (indicated by colour and the presence of the eye graphics).
The Calibration screen is also accessible from here:
Eye Tribe Calibration screen with options


This screen allows the setup of the calibration-related functions. Included options are:
  • Number of points of calibration (9, 12, or 16; more points usually gives a more accurate result, as illustrated in the sketch after this list)
  • Sample rate (how long each point is held before the next one is offered)
  • Monitor being calibrated (as in the case of multi-monitor setups or an external display)
  • Vertical and horizontal alignment
  • Area size (area in which calibration and tracking occurs)
  • Background and Point colours (preferred contrast/colours; custom colours available)
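
To show roughly what the calibration step computes, the sketch below fits a least-squares mapping from raw gaze estimates to known on-screen target positions using a simple quadratic basis; accuracy improves as more, and better tracked, calibration points are added. This is an assumption-laden illustration rather than Eye Tribe's algorithm; the 9-point grid, screen size, and distortion model are invented.

```python
# Illustrative gaze calibration: least-squares fit from raw gaze to screen coordinates.
import numpy as np

def basis(raw):
    """Quadratic polynomial features of raw (x, y) gaze estimates, shape (n, 2) -> (n, 6)."""
    x, y = raw[:, 0], raw[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_calibration(raw_points, target_points):
    """Least-squares mapping from raw gaze estimates to known target positions."""
    W, *_ = np.linalg.lstsq(basis(raw_points), target_points, rcond=None)
    return W

def apply_calibration(W, raw_points):
    return basis(raw_points) @ W

# Toy usage: a 9-point calibration grid on an assumed 1920x1080 screen.
rng = np.random.default_rng(2)
targets = np.array([[x, y] for x in (160, 960, 1760) for y in (90, 540, 990)], float)
raw = targets * 0.95 + 30 + rng.normal(0, 5, targets.shape)  # distorted, noisy measurements
W = fit_calibration(raw, targets)
err = np.linalg.norm(apply_calibration(W, raw) - targets, axis=1).mean()
print(f"mean calibration error: {err:.1f} px")  # more / better points -> lower error
```
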
In mouse move mode, I found the tracking to be consistent across both screens tested (17” widescreen and 13” widescreen). There was a mild amount of tracking shudder when I paused over targets. I used Sensory Software Dwell Clicker 2 to generate mouse click events. This worked effectively and I was able to select targets down to the usual 20 mm x 20 mm level (an established threshold of accurate tracking for eye gaze technology).


I also tested the tracking in mouse mode with both Tobii Communicator 4 Sono Key and The Grid 2 Fast Talker 2 pagesets set to dwell click and direct mouse selection. As the Eye Tribe tracker is new, it does not show up in the list of internally supported trackers for either program. At time of testing, the Eye Tribe tracker did not work with The Grid 2 on our machine.


The Eye Tribe tracker is compatible with a basic free AAC program called Gaze Talk, which was developed for COGAIN (www.cogain.org), a collaboration between EU Eye Gaze manufacturers, academics, users, and other interested parties.


In conclusion, the Eye Tribe tracker is currently of interest primarily to technical people and hobbyists, and for the potential of its business model (release to software developers) to drive innovation in making applications eye-gaze accessible. Crowd-sourced funding, and the direct relationships with developers that are now possible because of social media, offer some promise for new directions in Assistive Technology that may make it more accessible and affordable to people wishing to explore these options.




http://www.spectronicsinoz.com/blog/tools-and-resources/the-eyes-have-it/
----------------------------------
About the Writer:
David is an Occupational Therapist at ComTEC Yooralla, a Victorian statewide assistive technology advisory and information service. He assists people with disabilities and their teams to problem solve solutions in the areas of equipment for communication, computer access, mounting, and environmental control. David is also a Clinical Advisor to the Statewide Equipment Program in the area of Environmental Control Units; and has presented at local, national and international conferences. He is passionate about the potential of Assistive Technology to make a difference in the lives of people with disabilities.

Friday, February 21, 2014

Low-cost tech helps brain-injured patients speak

By Tanya Lewis

The nonprofit SpeakYourMind Foundation built a low-cost eye-tracking system to help stroke patient Maggie Worthen communicate. (YouTube screenshot)
Editor's Note: This writer was a colleague of the founder of SpeakYourMind in Brown University's BrainGate lab.


A week before Maggie Worthen was due to graduate from Smith College, she suffered a severe brain stem stroke that left her unable to move or speak. She was only 22. Maggie's doctors diagnosed her as being in a persistent vegetative state. But Maggie's mother kept looking for a way to get through to her daughter.


Maggie was able to move one eye, and over time it became clear that her mind was intact, but she was essentially trapped inside her body. Using an expensive eye-tracking system, Maggie was able to communicate rudimentarily. But the system was complicated and required a lot of adjustments to work properly, said Maggie's mother, Nancy Worthen.


"There are so many people like [Maggie]," Nancy Worthen said. "They're frustrated because their computer is broken, or doesn't have the right software."


Then, Maggie and her mom met Dan Bacher.


Simple and affordable


Bacher is the founder and executive director of the SpeakYourMind Foundation, a nonprofit in Providence, R.I., that develops low-cost technologies to restore communication to people who lack the ability due to stroke, amyotrophic lateral sclerosis (ALS, or Lou Gehrig's disease), brain injury or other problems. The nonprofit spun off from the BrainGate lab at Brown University, which is developing a brain-computer interface to allow people with paralysis to control computers or a prosthetic arm using their thoughts alone. [5 Crazy Technologies That Are Revolutionizing Biotech]


Maggie started working with SpeakYourMind in July 2013. Bacher and his team developed a prototype eye-tracking tool called "SYMeyes" consisting of a webcam mounted on a pair of what Bacher calls "hipster" glasses, with custom-made software that allows Maggie to answer yes or no questions by moving her eye. The eye-tracker system cost about $30. Comparable systems on the market run about $10,000 to $15,000, Bacher said.
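
As a rough idea of how a webcam can become a $30 yes/no eye tracker, the sketch below estimates the pupil as the darkest blob in each frame using OpenCV and maps a sustained left or right shift of that estimate to an answer. This is not the SYMeyes software; the camera index, thresholds, frame count, drift threshold, and the direction-to-answer mapping are all assumptions, and a real system would need per-user calibration and much more robust pupil detection.

```python
# Rough webcam yes/no gaze sketch (illustrative only; assumed thresholds and mapping).
import cv2
import numpy as np

def pupil_x(frame_gray):
    """Return the x-coordinate of the darkest blob (a crude pupil estimate), or None."""
    blur = cv2.GaussianBlur(frame_gray, (7, 7), 0)
    _, mask = cv2.threshold(blur, 40, 255, cv2.THRESH_BINARY_INV)  # keep dark pixels
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    return m["m10"] / m["m00"] if m["m00"] else None

cap = cv2.VideoCapture(0)          # assumed: the webcam mounted on the glasses frame
positions = []
while len(positions) < 60:         # roughly two seconds of frames
    ok, frame = cap.read()
    if not ok:
        break
    x = pupil_x(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    if x is not None:
        positions.append(x)
cap.release()

if positions:
    drift = positions[-1] - positions[0]   # net left/right movement in pixels
    print("yes" if drift > 20 else "no" if drift < -20 else "unclear")
```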


Cathy Hutchinson, 58, was a participant in a clinical trial of the BrainGate system. Cathy suffered a stroke 16 years ago that, like Maggie's, left her paralyzed and unable to speak. Cathy made headlines in 2012 when she used BrainGate to control a robotic arm to pick up and drink from a bottle.


Bacher developed a system that allowed Cathy to spell words by controlling a computer cursor on a virtual keyboard, using signals from the BrainGate implant. Now, he has built a device that allows her to control the cursor by raising an eyebrow. The virtual keyboard also suggests word completions to speed up typing. [Photos: Quadriplegic Woman Uses Mind-Controlled Prosthesis]
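
The word-completion feature mentioned above can be illustrated with a tiny sketch: suggest the most frequent dictionary words that begin with whatever the user has typed so far. This is not the SpeakYourMind software; the word list and frequencies are invented placeholders.

```python
# Illustrative prefix-based word completion (invented word list and frequencies).
WORD_FREQ = {
    "hello": 120, "help": 300, "helmet": 15,
    "water": 250, "want": 400, "was": 900,
}

def complete(prefix, k=3):
    """Return up to k completions for `prefix`, most frequent first."""
    matches = [w for w in WORD_FREQ if w.startswith(prefix.lower())]
    return sorted(matches, key=WORD_FREQ.get, reverse=True)[:k]

print(complete("he"))  # ['help', 'hello', 'helmet']
print(complete("wa"))  # ['was', 'want', 'water']
```

A production system would use a far larger lexicon and typically reorder suggestions using the user's own message history.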


"I started building prototypes and solutions while full-time at Brown," Bacher said. "The experience of successfully building a couple of these prototypes made me realize that if I built a bunch, I could really help a lot of people," he told Live Science.


Custom-built solutions


Bacher assembled a team of volunteers and students to develop low-cost, personalized eye-tracking and head-tracking technologies, using basic components available at most electronics stores, a laptop and custom software.


The key insight, Bacher said, is personalization. "It could be taking stuff off-the-shelf or building something completely from scratch; it depends on a person's abilities or needs," he added.


Another SpeakYourMind participant, Aaron Loder, 52, has ALS, or Lou Gehrig's disease, a progressive disease that causes degeneration of nerve cells in the brain and spinal cord. After Aaron was diagnosed with ALS, he attended his high-school reunion and remained active on Facebook. But over time he dropped off the map, his classmates said.


Aaron's classmate Maureen Delaney went to visit him in the rehab hospital where he was living, and what she found shocked her. Aaron was on a respirator with a feeding tube, and completely unable to communicate. He didn't have any family to advocate for him either, Delaney said.


Aaron "wants to be able to communicate with the outside world," Delaney told Live Science. "He misses people."


In October 2013, Delaney read an article about SpeakYourMind in the local newspaper, and got in touch with Bacher. Now, SpeakYourMind is developing a version of the eye-tracking glasses to allow Aaron to communicate and control a computer so that he can use Facebook to connect with his friends again.


SpeakYourMind's work aims to help not only Maggie, Cathy and Aaron, but anyone who has difficulty communicating, whether it's because of a brain injury or illness, or even disorders such as autism.


The nonprofit is supported mostly by donations and is currently pursuing a crowd-funding campaign on the website Indiegogo, which ends at 2:59 p.m. ET Monday (Feb. 17). So far, the campaign has raised more than $22,000.


As for Bacher, "My personal goal," he said, "is to help as many people as possible."
Copyright 2014 LiveScience, a TechMediaNetwork company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

iOS, Android, MS Surface Pro 2 Comparisons

The following webinar is being offered by ATIA (Assistive Technology Industry Association) and may be helpful to people trying to understand the differences between tablets. 


Detailed Listing

Webinar Type: Live Broadcast
Webinar Code: AT14-WEB06-LB
Webinar Title: iOS, Android, MS Surface Pro 2 Comparisons
Speaker(s):
Therese Willkomm, Director of ATinNH, University of New Hampshire
 
Live Webinar Date/Time: Wednesday, February 26, 2014 3:30 - 5:00 PM (Note: All times are Eastern Time Zone)
Webinar Fee:  $49.00

Overview: Not all tablets are the same. This webinar will discuss the differences between an iPad tablet with iOS 7; a Galaxy Note 10.1 using the Android operating system; and the Surface Pro 2 using the Windows 8.1 operating system. The review and demonstrations will focus on accessibility features as well as available apps to support individuals with disabilities.
Learning Objectives:
Participants will be able to describe at least three different accessibility features for each of the three different operating systems (Android, iOS, and Windows) on the three different tablets.
Participants will be able to identify and list at least five different apps for each of the three operating systems that can benefit an individual with a disability.
Participants will be able to discuss the core differences in the operating systems on the three tablets related to cognitive impairments, physical impairments, and sensory impairments.
Participants will be able to identify at least five different access methods that can be used with all three devices.

Speaker Bio/s: Therese Willkomm, PhD, ATP is the Director of New Hampshire’s State Assistive Technology Program with the Institute on Disability at the University of New Hampshire (UNH) and has a half-time clinical faculty appointment in the Department of Occupational Therapy as the Coordinator of the Graduate Certificate Program in Assistive Technology at UNH. She holds a Ph.D. in Rehabilitation Science and Technology and has over 25 years of experience in providing and managing assistive technology services for individuals with disabilities. She is known nationally and internationally as “The MacGyver” of Assistive Technology, and more recently as an expert in apps and iPad adaptations. Dr. Willkomm has presented over 500 assistive technology workshops in 38 states, seven foreign countries and three U.S. Territories; conducted 22 national assistive technology webinars; created over 600 assistive technology inventions; created and distributed nationally over 1,000 assistive technology empowerment kits; developed and posted over 350 “How-To” assistive technology video clips on YouTube; and authored 22 publications including her most recent book, “Assistive Technology Solutions in Minutes – Book 2 – Ordinary Items Extraordinary Solutions.”

Strand: Physical Access/Mobility - Computer Access/Positioning
Target Audience:  Accessibility Professionals, ADA administrators, AT Professionals, Educators, Family members, Higher Ed personnel, Individuals with Disabilities, OTs, PTs, SLPs, Rehab Engineers, SPED teachers, Voc Rehab Counselors
Archive End Date (if applicable): 02/26/2016