Friday, March 15, 2013

Galaxy S4 launch: Samsung pins hopes on eye-tracking in battle with Apple

 

Korean company's new smartphone, positioned as rival to Apple's iPhone, will also be able to translate nine languages
The Galaxy S4 launched by Samsung in New York has a 'smart scroll and smart pause' feature that monitors users' eye movements. Photograph: Adrees Latif/Reuters
 
Korea's Samsung turned to song and dance on Thursday as it took a shot at ousting Apple as king of the smartphone.

In a packed Radio City Music Hall in Manhattan, Samsung unveiled the Galaxy S4, the latest iteration of its best-selling smartphone, and set out its challenge to Apple on the US giant's own turf.
In a widely anticipated move, the company showed off the new phone's most eye-catching feature, the pioneering "smart scroll and smart pause". The front-facing camera on the handset monitors users' eye movements and responds accordingly: tilting the phone while looking at it scrolls web pages, and a playing video pauses automatically when the user looks away.
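
Samsung has not published how the feature works, but the observable behaviour suggests a simple control loop: poll a front-camera gaze detector and pause or resume playback when the gaze state changes. Here is a minimal illustrative sketch in Python; the `player` and `detector` objects are hypothetical stand-ins, not any real Samsung API.

```python
# Illustrative sketch only: Samsung has not published how Smart Pause works.
# `player` and `detector` are hypothetical stand-ins, not a real Samsung API.
import time

class SmartPausePlayer:
    """Pauses video when the viewer's gaze leaves the screen, resumes on return."""

    def __init__(self, player, detector, poll_interval=0.2):
        self.player = player              # any object with play()/pause()/is_playing
        self.detector = detector          # front-camera gaze detector (hypothetical)
        self.poll_interval = poll_interval
        self.paused_by_gaze = False

    def run(self):
        while self.player.is_playing or self.paused_by_gaze:
            looking = self.detector.gaze_on_screen()   # True if eyes face the screen
            if not looking and not self.paused_by_gaze:
                self.player.pause()                    # viewer looked away: pause
                self.paused_by_gaze = True
            elif looking and self.paused_by_gaze:
                self.player.play()                     # viewer looked back: resume
                self.paused_by_gaze = False
            time.sleep(self.poll_interval)
```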

After weeks of teasing, Samsung unveiled a phone it is promoting as "moving beyond touch". With a simple wave of the hand, Minority Report-style, the phone will move a web page or a photograph.
The handset can also translate nine languages, from text to speech and vice versa or just translate text. JK Shin, Samsung's head of mobile, said: "Think about being able to communicate with your friends around the world without language barriers." He called the device "a life companion for a richer, simpler life".

Samsung said that from the end of next month the Galaxy S4 would be offered by 327 mobile operators in 125 countries.

Samsung's Galaxy S4. Photograph: Andrew Gombert/EPA


The show-like unveiling featured tap-dancing children to illustrate the Galaxy's 13-megapixel camera, which can record the user as well as his or her subject. Other mobiles offer similar functions.
Michael Gartenberg, analyst at Gartner, gave the handset a cautious welcome: "Features are impressive but a lot of them feel gratuitous. Also a lot of features available for other Android devices through 3rd party apps," he wrote on Twitter.

While detailed reviews are yet to come, analysts said it was clear from the scale of Samsung's launch that the Korean firm has Apple firmly in its sights.

Forrester analyst Charles Golvin said Samsung has no intention of playing Pepsi to Apple's Coca-Cola. "Companies like Pepsi and Avis to some extent played up being number two, it was a point of difference. Samsung wants to be number one. In fact, it's saying if you think Apple is number one, you are wrong."

Golvin said Samsung still trailed Apple in the US smartphone market. "Apple has about 35% and Samsung around 16-17%," he said. But Samsung's overall share was far higher and the market is changing.

Six years after Steve Jobs unveiled the first iPhone, smartphone makers are chasing two very different markets. "There's the upgrade market for those who already have one; those tend to be more well-heeled buyers. And there's new buyers – that tend to be more financially constrained," said Golvin.

Samsung's new Galaxy is firmly aimed at the former and the company has a tight grip on the new, less moneyed buyers too. That strategy has presented Apple with its biggest challenge in smartphones yet.

Carolina Milanesi, consumer devices analyst at Gartner, said: "We are at the point where the majority of sales in this segment come from replacement, not new users. In other words, the addressable market is starting to be saturated and now it is about keeping their refresh cycle short. The problem is, though, that technology innovation is slowing down and as we move to more innovation being delivered via software, the cycles are getting longer rather than shorter." She pointed to Nintendo's Wii U games console, launched last November. In January, Nintendo cut its sales forecast by 27%.
Colin Gillis, technology analyst at BGC Partners, said the smartphone war was turning into a two-horse race. "These guys make 120% of the profits because everyone else is losing money," he said.
But Gillis said that if Samsung was really going to beat Apple at its own game, it would need to keep innovating. "Generating buzz is a great thing to do provided your products are worthy of it. This market is exploding. They will sell 10m of these things out of the gate. But it's not a one player market, they will have to really deliver."

Harry Wang, analyst at Parks Associates, said: "Samsung is trying to do one thing: knock Apple's status in the US. They want to show that they can excite the high-end smartphone adopters just as well as Apple."
Wang said Samsung already had an advantage in the Far East, where its brand is better known than Apple's, but it is now clearly after US consumers. "They are catching up. Samsung is now a formidable brand."

He said the company had done a good job of distancing itself both from Apple and its Android competitors. Samsung spent $410m promoting Galaxy in the US last year, according to a report from Kantar Media, more than Apple, which spent $333m on iPhone ads in 2012.

Samsung used a series of ads to mock iPhone buyers for waiting in long lines and to position Galaxy as a smarter, hipper alternative. But Apple edged Samsung out in sales at the end of 2012. The iPhone 5 was launched last September and was initially marred by backlash over the decision to drop Google Maps for Apple's own, flawed, maps programme. It went on to be the most popular phone in the US during the fourth quarter of 2012. Apple sold 17.7m smartphones during the quarter while Samsung sold 16.8m mobile phones, according to Strategy Analytics' Wireless Device Strategy report. The boost made Apple the US's number one mobile phone vendor for the first time ever, with a record 34% market share.

Thursday, March 14, 2013

Ex-NFL Player Tweets With His Eyes in Fight Against Disease

 

On Sept. 25, 2006, Steve Gleason blocked probably the most meaningful punt in NFL history. His New Orleans Saints were playing their first home game back in a city that had been devastated by Hurricane Katrina 13 months earlier. The block helped propel the Saints to a poignant 23-3 victory over the Atlanta Falcons. A statue titled "Rebirth" now stands outside the Louisiana Superdome commemorating Gleason's big play.
Gleason was a 5-foot-11, rock solid 212-pound defensive back at the time, a world class athlete playing a gladiator's sport. Today he's 35, confined to a motorized wheelchair and, in his words, has to have "someone else wash my balls." That's thanks to an ongoing battle with the nerve disease amyotrophic lateral sclerosis — better known as ALS or Lou Gehrig's disease — that began two years ago.
But, reminiscent of how he helped inspire a community weakened by natural disaster in 2006, Gleason today gives hope and support to a worldwide community weakened by an incurable disease. And what he does would be impossible if not for powerful technology and the digital connectivity of social media.


Gleason in his playing days; Jim Isaac/Getty Images

A product from the company Tobii lets Gleason use his eyes to control a monitor attached to his chair. He's then able to write messages — including tweets and an email interview for this article — and browse the web despite not having enough muscle function to do so manually. He's also able to move around, speak, open doors and control household appliances thanks to high-tech tools.
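
Eye-typing systems of this kind generally work by mapping gaze coordinates onto an on-screen keyboard and registering a keypress when the gaze dwells on a key long enough. Below is a minimal Python sketch of that dwell-selection idea only; it is not Tobii's actual implementation, and the gaze-sample feed is invented.

```python
# Illustrative dwell-typing sketch; not Tobii's actual implementation.
# gaze_samples is assumed to yield (timestamp, key_under_gaze) tuples.
DWELL_SECONDS = 0.8  # how long the gaze must rest on a key to "press" it

def dwell_type(gaze_samples):
    typed = []
    current_key, dwell_start = None, None
    for t, key in gaze_samples:
        if key != current_key:
            current_key, dwell_start = key, t   # gaze moved to a new key
        elif key is not None and t - dwell_start >= DWELL_SECONDS:
            typed.append(key)                   # dwell threshold reached
            current_key, dwell_start = None, None
    return "".join(typed)

# Example: gaze rests on "h", then "i", for roughly a second each.
samples = [(0.0, "h"), (0.5, "h"), (0.9, "h"), (1.0, "i"), (1.5, "i"), (1.9, "i")]
print(dwell_type(samples))  # -> "hi"
```
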
But he's not the only one benefitting. His foundation recently constructed the Team Gleason House for Innovative Living, a $25 million, 18-bed skilled nursing facility in New Orleans that he writes is just the second of its kind worldwide and "will allow ALS patients the same technology and level of independence I have."

Gleason also posts personal tweets, signed "SG," to the @Team_Gleason Twitter account (staff members post non-signed messages) and says social media has helped give ALS patients like himself power they never had before.

"Because of the Internet and social media ALS patients are able to share their experiences and knowledge with each other," he writes. "That has played a massive role in the ALS community. We are able to communicate efficiently on topics of treatment, equipment, technology and other resources. Prior to this, ALS patients were isolated and had to rely on their doctors or medical community for advice."

Raising money and awareness, building the high-tech house — Gleason says all that is just the beginning of what he hopes to help people accomplish.

"If we continue to fuel the conversation about ALS and put the brightest people together with the people with the right resources, it can be the most significant impact on ALS in 100 years," he writes.

"Many people and groups are working toward the same goal and collectively, we can all affect the needed change."

You can learn more about Gleason's foundation on the Team Gleason website.

Assistive Technology Promotes Independence for People in Nursing Facilities

by Disability.Blog Team

By Guest Blogger Chava Kintisch, Staff Attorney and Assistive Technology Project Director, Disability Rights Network of Pennsylvania

Many people with disabilities living in nursing facilities cannot operate a manual wheelchair or communicate through speech. However, people in these facilities can gain independence through Medicaid-funded assistive technology, such as power wheelchairs and augmentative communication devices. This independence can help support their transition back into the community.

According to the Assistive Technology Act of 2004, 29 U.S.C. § 3002, an assistive technology device is “any item, piece of equipment or product system, whether acquired commercially, modified or customized, that is used to increase, maintain or improve functional capabilities of individuals with disabilities.” This includes durable medical equipment (DME) and an unlimited range of other items that people with disabilities use in their daily lives. Assistive technology services include evaluation, adaptation, training, repairs and maintenance.

Generally, a nursing facility should pay for assistive technology when a person is living in that facility and receives Medicaid. The Nursing Home Reform Act requires a nursing facility that participates in Medicaid to provide services “to attain or maintain the resident’s highest practicable physical, mental and psychosocial well-being.” Services include specialized rehabilitative services, such as physical therapy, occupational therapy and speech-language therapy. Nursing facilities should also provide services to maintain or improve abilities in daily activities, including the ability to bathe; dress; groom; transfer; ambulate; toilet; eat; hear; see and use speech, language, or other functional communication systems. All of this can require the provision of appropriate DME and other assistive technology.

About 15,000 nursing facilities nationwide participate in Medicaid and are subject to these requirements. You can use the Nursing Home Compare tool to find out if a particular nursing facility participates in Medicaid.

Your state may also have programs to encourage nursing facilities to provide appropriate DME. However, nursing facilities that participate in Medicaid should provide medically necessary DME even if these incentives don’t exist in that state. For example, Pennsylvania provides additional Medicaid payments to nursing facilities for certain medically necessary custom and expensive DME, and allows people to take the equipment with them if they leave the facility. However, regulations state that nursing facilities must pay for all medically necessary DME, even if they are not given this additional Medicaid payment.

A 2003 State Medicaid Director Letter addresses ways that states can pay for medical equipment under Medicaid to aid transition of people from nursing facilities back into the community. Check with your state Medicaid office to find out if the assistive technology can go with the person if or when he or she moves out of the facility and into the community. Besides the Pennsylvania program described in the paragraph above, Pennsylvania Medicaid will pay for all medically necessary equipment and supplies for use in the community once the person has a discharge date.

Visit the national assistive technology and transition portal to find out what strategies advocates in your state have used to promote assistive technology for people in nursing facilities. This portal was developed by protection and advocacy agencies and state Assistive Technology Act programs to provide information and models for advocacy initiatives.

In Pennsylvania, the Disability Rights Network of Pennsylvania (DRN) worked with Liberty Resources, Inc., a center for independent living, to increase the provision of evaluations and power wheelchairs at a 450-bed public nursing facility. Staff members then developed an online toolkit using their experiences from this effort. DRN also worked with Pennsylvania’s Initiative on Assistive Technology to increase access to evaluations and augmentative communication devices by nursing facility residents statewide through assessment of communication needs.

Mobility and communication are the means to freedom. Residents of nursing facilities should have access to the full range of medically necessary assistive technology to promote independence and transition to the community. Medicaid law can ensure that this happens.

Chava Kintisch is a staff attorney and Assistive Technology Project Director for the Disability Rights Network of Pennsylvania (DRN). She practices exclusively in the area of civil rights for persons with disabilities, representing individuals with disabilities in order to help them gain access to assistive technology and home and community-based services under Medicaid. She can be reached at ckintisch@drnpa.org. This project is funded by a grant to the Disability Rights Network under Protection and Advocacy for Assistive Technology.

Realtree Teams Up With OtterBox To Create Nearly Indestructible iPad Case


Just saw this posting today….new iPad case says it’s “nearly indestructible”….

I have always used the Griffin Survivor Case and can thankfully say none of the iPads in our loaner closet have ever had a cracked screen or hardware issues.  It’s nice to have another choice now…

--------------------------------

Accessory maker OtterBox recently launched a new rugged iPad case in collaboration with Realtree that will safeguard your tablet like a boss. The Defender Series for the iPad 2 and the third- and fourth-generation iPad will give you the protection you need without being too bulky or "extreme" looking.
The Realtree Camo case offers three layers of protection, including a built-in screen protector and a durable shield that doubles as a stand, so you can prop up your iPad while watching movies, playing games, and even typing.

The Defender series includes two Realtree Camo designs: the "Xtra", which is bright orange and black to complement your hunting gear, and the "AP Pink", which is two-toned gray with a pink-hued picture on it.

The case is made from high-impact polycarbonate and includes a foam insert for extra shock absorption. The silicone outer layer absorbs bumps, while the textured exterior improves grip. All buttons and ports are covered with silicone to protect against dust and debris.

You can order the Defender Series in Realtree Camo Xtra or AP Pink from OtterBox's website for $99.99. For a limited time, you can get a free DVD of Monster Bucks XX Vol. 1 or Vol. 2 when you purchase a Defender Series Realtree camo iPad case from OtterBox.


5 Trends That Will Drive The Future Of Technology

 

 
Google's Project Glass. (Image credit: AFP/Getty Images via @daylife)

Trends get a bad rap, mostly because they are often equated with fashions. Talk about trends and people immediately start imagining wafer thin models strutting down catwalks in outrageous outfits, or maybe a new shade of purple that will be long forgotten by next season.
Yet trends can be important, especially those long in the making. If lots of smart people are willing to spend years of their lives and millions (if not billions) of dollars on an idea, there’s probably something to it.
Today, we’re on the brink of a new digital paradigm, where the capabilities of our technology are beginning to outstrip our own. Computers are deciding which products to stock on shelves, performing legal discovery and even winning game shows. They will soon be driving our cars and making medical diagnoses. Here are five trends that are driving it all.

1. No-Touch Interfaces

We’ve gotten used to the idea that computers are machines that we operate with our hands. Just as we Gen Xers became comfortable with keyboards and mouses, today’s millennial generation has learned to text at blazing speed. Each new iteration of technology has required new skills to use it proficiently.
That’s why the new trend towards no-touch interfaces is so fundamentally different. From Microsoft’s Kinect to Apple’s Siri to Google’s Project Glass, we’re beginning to expect that computers adapt to us rather than the other way around.
The basic pattern recognition technology has been advancing for generations and, thanks to accelerating returns, we can expect computer interfaces to become almost indistinguishable from humans in little more than a decade.

2. Native Content

While over the past several years technology has become more local, social and mobile, the new digital battlefield will be fought in the living room, with Netflix, Amazon, Microsoft, Google, Apple and the cable companies all vying to produce a dominant model for delivering consumer entertainment.
One emerging strategy is to develop original programming in order to attract and maintain a subscriber base. Netflix recently found success with their “House of Cards” series starring Kevin Spacey and Robin Wright. Amazon and Microsoft quickly announced their own forays into original content soon after.
Interestingly, HBO, which pioneered the strategy, has been applying the trend in reverse. Their HBO GO app, which at the moment requires a cable subscription, could easily be untethered and become a direct competitor to Netflix.

3. Massively Online

In the last decade, massively multiplayer online games such as World of Warcraft became all the rage. Rather than simply play against the computer, you could play with thousands of others in real time. It can be incredibly engrossing (albeit a bit unsettling when you realize that the vicious barbarian you’ve been marauding around with is actually a 14-year-old girl).
Now other facets of life are going massively online. Khan Academy offers thousands of modules for school-age kids, Code Academy can teach a variety of programming languages to just about anybody, and the latest iteration is Massively Online Open Courses (MOOCs) that offer university-level instruction.
The massively online trend has even invaded politics, with President Obama recently reaching out to ordinary voters through Ask Me Anything on Reddit and Google Hangouts.

4. The Web of Things

Probably the most pervasive trend is the Web of Things, where just about everything we interact with becomes a computable entity. Our homes, our cars and even objects on the street will interact with our smartphones and with each other, seamlessly.
What will drive the trend in the years to come are two complementary technologies: Near Field Communication (NFC), which allows for two-way data communication with nearby devices, and ultra-low-power chips that can harvest energy from the environment, which together will put computable entities just about everywhere you can think of.
While the Web of Things is already underway, it’s difficult to see where it will lead us. Some applications, such as mobile payments and IBM’s Smarter Planet initiative, will become widespread in just a few years. Marketing will also be transformed, as consumers become able to seamlessly access digital products from advertisements in the physical world.
Still, as computing ceases to be something we do seated at a desk and becomes a natural, normal way of interacting with our environment, there’s really no telling what the impact will be.

5. Consumer Driven Supercomputing

Everybody knows the frustration of calling a customer service line and having to deal with an automated interface. They work well enough, but it takes some effort. After repeating yourself a few times, you find yourself wishing you could just punch in your answers or talk to someone at one of those offshore centers with heavy accents.
Therein lies the next great challenge of computing. While we used to wait for our desktop computers to process our commands and then lingered for what seemed like an eternity for web pages to load, we now struggle with natural language interfaces that just can’t quite work like we’d like them to.
Welcome to the next phase of computing. As I previously wrote in Forbes, companies ranging from IBM to Google to Microsoft are racing to combine natural language processing with huge Big Data systems in the cloud that we can access from anywhere.
These systems will know us better than our best friends, but will also be connected to the entire Web of Things as well as the collective sum of all human knowledge. The first of these, IBM’s Watson, cost $3 million to build, but that price should drop to about $30,000 in ten years, well within the reach of most organizations.

When Computers Disappear

When computers first appeared, they took up whole rooms and required specialized training to operate. Then they arrived in our homes and were simple enough for teenagers to become proficient in their use within a few days (although adults tended to be a little slower). Today, my three-year-old daughter plays with her iPad as naturally as she plays with her dolls.
Now, computers themselves are disappearing. They’re embedded invisibly into the Web of Things, into no-touch interfaces and into our daily lives. We long ago left behind loading disks into slots to get our computers to work and have become used to software as a service; hardware as a service is right around the corner.
That’s why technology companies are becoming increasingly consumer driven, investing in things like native content to get us on board their platforms, from which we will sign on to massively online services to entertain and educate ourselves.
The future of technology is, ironically, all too human.

AT&T's mobile personal emergency response system will provide fall detection for seniors and allow them to connect with caregivers using machine-to-machine technology

AT&T is developing a mobile personal emergency response system (MPERS) to help elderly people when they fall. The platform will use GPS functionality and machine-to-machine (M2M) technology to connect patients remotely to medical professionals at monitoring centers.
 

The company is collaborating with Valued Relationships, Inc. (VRI) and Numera on the service. VRI provides telehealth monitoring and medical alert systems to seniors, the chronically ill and patients with disabilities. Numera manufactures a Libris MPERS device that works with monitoring software to remotely manage two-way voice, automatically detect falls and track seniors' location.

"With the MPERS mobility solution AT&T is developing, older people will be able to live independently but know that they are only seconds away from assistance if the need arises," Dr. Geeta Nayyar, AT&T's chief medical information officer, said in a statement.

AT&T will offer the MPERS as a managed service as well as provide wireless connectivity and sales, marketing and customer support.

Announced on March 4, the MPERS uses two-way wireless voice communication to connect patients and caregivers. The platform is designed for elderly people with disabilities, those prone to falls and people who need emergency connectivity to caregivers while still living independently.

One out of three adults ages 65 and older falls each year, according to the Centers for Disease Control and Prevention, and 20 to 30 percent of falls result in moderate to severe injuries, including lacerations, hip fractures or head traumas.

The wireless company will market the service to nursing agencies, day care services and home health care providers.

AT&T demonstrated the MPERS platform in New Orleans at the HIMSS13 conference, organized by the Healthcare Information and Management Systems Society (HIMSS).

The MPERS device's GPS functionality will report a patient's location every 3 to 5 minutes, Eleanor Chye, assistant vice president for AT&T ForHealth, told eWEEK in an email.

"Should a person fall while wearing the sensor, built-in technology will detect it and automatically alert a monitoring call center," said Chye. "A professional from the call monitoring center will reach out to the individual through instant two-way wireless voice communication on AT&T's network."

Several wireless devices on the market can help seniors when they fall. They include iLoc Technologies' TriLoc Personal Locator Device and the Philips Lifeline GoSafe MPERS.

"When falls and acute medical events (such as heart attacks or strokes) occur, each second that passes matters," said Chye. "Individuals need to be able to immediately alert emergency services and their caregivers when these critical moments happen."

AT&T also worked with VRI to develop a remote-monitoring platform (RPM), which went to market in September 2012. The RPM allows nurses at VRI's telemonitoring facility to monitor patients' vital data, including blood pressure, weight and pulse.

"Having access to additional data through the end-to-end RPM solution, such as a patient's blood sugar levels, weight, and blood pressure, could give providers invaluable information about what triggered the fall," Chye said.

The two platforms could work together to "help provide a clearer picture of what's really going on with a patient when they are outside the four walls of a hospital," said Chye.

Brian T. Horowitz is a freelance technology and health writer as well as a copy editor. Brian has worked on the tech beat since 1996 and covered health care IT and rugged mobile computing for eWEEK since 2010. He has contributed to more than 20 publications, including Computer Shopper, Fast Company, FOXNews.com, More, NYSE Magazine, Parents, ScientificAmerican.com, USA Weekend and Womansday.com, as well as other consumer and trade publications. Brian holds a B.A. from Hofstra University in New York. Follow him on Twitter: @bthorowitz

Monday, March 11, 2013

Brain waves give movement to robotic exoskeleton

European researchers are testing a mind-controlled robotic exoskeleton that could enable fully paralysed people to walk again.
The €2.75m Mindwalker project uses an easily fitted electrode cap placed on the patient’s head to read brain signals related to movement that can be turned into commands for operating the exoskeleton.
The robotic suit itself, which is attached to the patient’s legs, is designed to mimic the way people walk more closely than other exoskeletons, which require an additional walking frame or sticks to support the user.

‘Mindwalker was proposed as a very ambitious project intended to investigate promising approaches to exploit brain signals for the purpose of controlling advanced orthosis, and to design and implement a prototype system demonstrating the potential of related technologies,’ said project coordinator Michel Ilzkovitz of the Space Applications Services in Belgium.
He added that the technology developed for Mindwalker could also have applications in stroke victim rehabilitation and in helping astronauts rebuild muscle mass after prolonged periods in space.
Once tests with able-bodied trial users are complete, the system will undergo clinical evaluation with five to 10 volunteers suffering from spinal cord injuries, which will help identify any problems and improve performance.

The researchers have developed a brain-neural-computer interface (BNCI) that converts electroencephalography (EEG) signals from the brain, or electromyography (EMG) signals from shoulder muscles, into electronic commands to control the exoskeleton.

To collect the signals, they used technology developed by Berlin-based eemagine Medical Imaging Solutions that consists of a cap covered in electrodes that amplifies and optimises signals before sending them to the neural network.

This contrasts with most other BNCI systems that either require electrodes to be placed directly into brain tissue, or take a long time to fit and use special gels to reduce the electrical resistance at the interface between the skin and the electrodes.

‘The “dry” EEG cap can be placed by the subject on their head by themselves in less than a minute, just like a swimming cap,’ said Ilzkovitz.

The BNCI signals also have to be filtered and processed before they can be used to control the exoskeleton. To achieve this, the Mindwalker researchers fed the signals into a ‘Dynamic recurrent neural network’, a processing technique capable of learning and exploiting the dynamic character of the BNCI signals.

‘This is appealing for kinematic control and allows a much more natural and fluid way of controlling an exoskeleton,’ said Ilzkovitz.
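
The project has not released its network code, so the Python sketch below is only a generic illustration of the idea of a recurrent decoder: a small Elman-style recurrent layer that maps a window of filtered EEG feature vectors to a discrete gait command. All sizes and weights here are invented placeholders; a real system would train them on recorded EEG/EMG sessions.

```python
# Minimal recurrent decoder sketch (not the Mindwalker project's code).
# Maps a window of EEG feature vectors to a gait command via a simple
# Elman-style recurrent layer. Weights are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES, N_HIDDEN, N_COMMANDS = 16, 32, 3   # e.g. stop / step-left / step-right

W_in = rng.normal(0, 0.1, (N_HIDDEN, N_FEATURES))
W_rec = rng.normal(0, 0.1, (N_HIDDEN, N_HIDDEN))
W_out = rng.normal(0, 0.1, (N_COMMANDS, N_HIDDEN))

def decode(eeg_window):
    """eeg_window: array of shape (timesteps, N_FEATURES) of filtered EEG features."""
    h = np.zeros(N_HIDDEN)
    for x in eeg_window:
        h = np.tanh(W_in @ x + W_rec @ h)   # recurrent state carries signal dynamics
    scores = W_out @ h
    return int(np.argmax(scores))           # index of the predicted gait command

window = rng.normal(size=(50, N_FEATURES))  # 50 timesteps of synthetic features
print(decode(window))
```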

The exoskeleton itself can support a 100kg adult and is powerful enough to recover balance from instability created by the user’s torso movements during walking or a gentle push from the back or side.
It is relatively light, weighing less than 30kg without batteries, and uses springs fitted inside the joints that are capable of absorbing and recovering some of the energy otherwise dissipated during walking, in order to make it more energy efficient.
Unlike most exoskeletons, which are designed to be balanced when stationary (a property that makes them heavy, slow and in need of additional support when moving), the Mindwalker uses a controlled loss of balance.
‘This approach is called “Limit-cycle walking” and has been implemented using model predictive control to predict the behaviour of the user and exoskeleton and for controlling the exoskeleton during the walk,’ said Ilzkovitz.

Space Applications Services also developed a virtual-reality training platform to allow new users to safely become accustomed to using the system before testing it out in a clinical setting.
Mindwalker was coordinated by Space Applications Services NV and received research funding under the European Union’s Seventh Framework Programme (FP7).


Read more: http://www.theengineer.co.uk/medical-and-healthcare/news/brain-waves-give-movement-to-robotic-exoskeleton/1015711.article#ixzz2NHFLK1FD

Mind Plus Machine: BCI lets you move things with a thought

Brain-computer interfaces let you move things with a thought

 
A man wears a brain-machine interface, equipped with electroencephalography (EEG) devices and near-infrared spectroscopy (NIRS) optical sensors in special headgear to measure the slight electrical currents and blood-flow changes occurring in the brain. Photo by Yoshikazu Tsuno/AFP/Getty Images
Behind a locked door in a white-walled basement in a research building in Tempe, Ariz., a monkey sits stone-still in a chair, eyes locked on a computer screen. From his head protrudes a bundle of wires; from his mouth, a plastic tube. As he stares, a green cursor on the black screen floats toward the corner of a cube. The monkey is moving it with his mind.
The monkey, a rhesus macaque named Oscar, has electrodes implanted in his motor cortex, detecting electrical impulses that indicate mental activity and translating them to the movement of the ball on the screen. The computer isn’t reading his mind, exactly—Oscar’s own brain is doing a lot of the lifting, adapting itself by trial and error to the delicate task of accurately communicating its intentions to the machine. (When Oscar succeeds in controlling the ball as instructed, the tube in his mouth rewards him with a sip of his favorite beverage, Crystal Light.) It’s not technically telekinesis, either, since that would imply that there’s something paranormal about the process. It’s called a “brain-computer interface.” And it just might represent the future of the relationship between human and machine.




 
Stephen Helms Tillery’s laboratory at Arizona State University is one of a growing number where researchers are racing to explore the breathtaking potential of BCIs and a related technology, neuroprosthetics. The promise is irresistible: from restoring sight to the blind, to helping the paralyzed walk again, to allowing people suffering from locked-in syndrome to communicate with the outside world. In the past few years, the pace of progress has been accelerating, delivering dazzling headlines seemingly by the week.
 
At Duke University in 2008, a monkey named Idoya walked on a treadmill, causing a robot in Japan to do the same. Then Miguel Nicolelis stopped the monkey’s treadmill—and the robotic legs kept walking, controlled by Idoya’s brain. At Andrew Schwartz’s lab at the University of Pittsburgh in December 2012, a quadriplegic woman named Jan Scheuermann learned to feed herself chocolate by mentally manipulating a robotic arm. Just last month, Nicolelis’ lab set up what it billed as the first brain-to-brain interface, allowing a rat in North Carolina to make a decision based on sensory data beamed via Internet from the brain of a rat in Brazil.

So far the focus has been on medical applications—restoring standard-issue human functions to people with disabilities. But it’s not hard to imagine the same technologies someday augmenting capacities. If you can make robotic legs walk with your mind, there’s no reason you can’t also make them run faster than any sprinter. If you can control a robotic arm, you can control a robotic crane. If you can play a computer game with your mind, you can, theoretically at least, fly a drone with your mind.

It’s tempting and a bit frightening to imagine that all of this is right around the corner, given how far the field has already come in a short time. Indeed, Nicolelis—the media-savvy scientist behind the “rat telepathy” experiment—is aiming to build a robotic bodysuit that would allow a paralyzed teen to take the first kick of the 2014 World Cup. Yet the same factor that has made the explosion of progress in neuroprosthetics possible could also make future advances harder to come by: the almost unfathomable complexity of the human brain.

From I, Robot to Skynet, we’ve tended to assume that the machines of the future would be guided by artificial intelligence—that our robots would have minds of their own. Over the decades, researchers have made enormous leaps in AI, and we may be entering an age of “smart objects” that can learn, adapt to, and even shape our habits and preferences. We have planes that fly themselves, and we’ll soon have cars that do the same. Google has some of the world’s top AI minds working on making our smartphones even smarter, to the point that they can anticipate our needs. But “smart” is not the same as “sentient.” We can train devices to learn specific behaviors, and even to out-think humans in certain constrained settings, like a game of Jeopardy. But we’re still nowhere close to building a machine that can pass the Turing test, the benchmark for human-like intelligence. Some experts doubt we ever will: Nicolelis, for one, argues that Ray Kurzweil’s Singularity is impossible because the human mind is not computable.
 
Philosophy aside, for the time being the smartest machines of all are those that humans can control. The challenge lies in how best to control them. From vacuum tubes to the DOS command line to the Mac to the iPhone, the history of computing has been a progression from lower to higher levels of abstraction. In other words, we’ve been moving from machines that require us to understand and directly manipulate their inner workings to machines that understand how we work and respond readily to our commands. The next step after smartphones may be voice-controlled smart glasses, which can intuit our intentions all the more readily because they see what we see and hear what we hear.
 
The logical endpoint of this progression would be computers that read our minds, computers we can control without any physical action on our part at all. That sounds impossible. After all, if the human brain is so hard to compute, how can a computer understand what’s going on inside it?

It can’t. But as it turns out, it doesn’t have to—not fully, anyway. What makes brain-computer interfaces possible is an amazing property of the brain called neuroplasticity: the ability of neurons to form new connections in response to fresh stimuli. Our brains are constantly rewiring themselves to allow us to adapt to our environment. So when researchers implant electrodes in a part of the brain that they expect to be active in moving, say, the right arm, it’s not essential that they know in advance exactly which neurons will fire at what rate. When the subject attempts to move the robotic arm and sees that it isn’t quite working as expected, the person—or rat or monkey—will try different configurations of brain activity. Eventually, with time and feedback and training, the brain will hit on a solution that makes use of the electrodes to move the arm.
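
That trial-and-error loop can be caricatured in a few lines of Python. This is a toy model, not any lab's method: a fixed linear "decoder" stands in for the electrode mapping, and random hill-climbing stands in for the brain's adaptation.

```python
# Toy closed-loop BCI model, purely illustrative: a fixed linear "decoder"
# maps a firing-rate pattern to cursor velocity, and the "brain" keeps small
# random variations that move the cursor closer to the target.
import numpy as np

rng = np.random.default_rng(1)
decoder = rng.normal(size=(2, 10))      # fixed map: 10 neurons -> 2-D velocity
rates = rng.normal(size=10)             # brain's initial firing pattern
target = np.array([1.0, 0.0])           # desired cursor direction

for trial in range(200):
    error = np.linalg.norm(decoder @ rates - target)
    tweak = rng.normal(scale=0.05, size=10)          # brain tries a small variation
    if np.linalg.norm(decoder @ (rates + tweak) - target) < error:
        rates = rates + tweak                        # keep changes that help
print(np.round(decoder @ rates, 2))                  # moves toward [1. 0.]
```
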

That’s the principle behind such rapid progress in brain-computer interface and neuroprosthetics. Researchers began looking into the possibility of reading signals directly from the brain in the 1970s, and testing on rats began in the early 1990s. The first big breakthrough for humans came in Georgia in 1997, when a scientist named Philip Kennedy used brain implants to allow a “locked in” stroke victim named Johnny Ray to spell out words by moving a cursor with his thoughts. (It took him six exhausting months of training to master the process.) In 2008, when Nicolelis got his monkey at Duke to make robotic legs run a treadmill in Japan, it might have seemed like mind-controlled exoskeletons for humans were just another step or two away. If he succeeds in his plan to have a paralyzed youngster kick a soccer ball at next year’s World Cup, some will pronounce the cyborg revolution in full swing.

Schwartz, the Pittsburgh researcher who helped Jan Scheuermann feed herself chocolate in December, is optimistic that neuroprosthetics will eventually allow paralyzed people to regain some mobility. But he says that full control over an exoskeleton would require a more sophisticated way to extract nuanced information from the brain. Getting a pair of robotic legs to walk is one thing. Getting robotic limbs to do everything human limbs can do may be exponentially more complicated. “The challenge of maintaining balance and staying upright on two feet is a difficult problem, but it can be handled by robotics without a brain. But if you need to move gracefully and with skill, turn and step over obstacles, decide if it’s slippery outside—that does require a brain. If you see someone go up and kick a soccer ball, the essential thing to ask is, ‘OK, what would happen if I moved the soccer ball two inches to the right?’” The idea that simple electrodes could detect things as complex as memory or cognition, which involve the firing of billions of neurons in patterns that scientists can’t yet comprehend, is far-fetched, Schwartz adds.

That’s not the only reason that companies like Apple and Google aren’t yet working on devices that read our minds (as far as we know). Another one is that the devices aren’t portable. And then there’s the little fact that they require brain surgery.
 
A different class of brain-scanning technology is being touted on the consumer market and in the media as a way for computers to read people’s minds without drilling into their skulls. It’s called electroencephalography, or EEG, and it involves headsets that press electrodes against the scalp. In an impressive 2010 TED Talk, Tan Le of the consumer EEG-headset company Emotiv Lifescience showed how someone can use her company’s EPOC headset to move objects on a computer screen.

Skeptics point out that these devices can detect only the crudest electrical signals from the brain itself, which is well-insulated by the skull and scalp. In many cases, consumer devices that claim to read people’s thoughts are in fact relying largely on physical signals like skin conductivity and tension of the scalp or eyebrow muscles.

Robert Oschler, a robotics enthusiast who develops apps for EEG headsets, believes the more sophisticated consumer headsets like the Emotiv EPOC may be the real deal in terms of filtering out the noise to detect brain waves. Still, he says, there are limits to what even the most advanced, medical-grade EEG devices can divine about our cognition. He’s fond of an analogy that he attributes to Gerwin Schalk, a pioneer in the field of invasive brain implants. The best EEG devices, he says, are “like going to a stadium with a bunch of microphones: You can’t hear what any individual is saying, but maybe you can tell if they’re doing the wave.” With some of the more basic consumer headsets, at this point, “it’s like being in a party in the parking lot outside the same game.”
 
It’s fairly safe to say that EEG headsets won’t be turning us into cyborgs anytime soon. But it would be a mistake to assume that we can predict today how brain-computer interface technology will evolve. Just last month, a team at Brown University unveiled a prototype of a low-power, wireless neural implant that can transmit signals to a computer over broadband. That could be a major step forward in someday making BCIs practical for everyday use. Meanwhile, researchers at Cornell last week revealed that they were able to use fMRI, a measure of brain activity, to detect which of four people a research subject was thinking about at a given time. Machines today can read our minds in only the most rudimentary ways. But such advances hint that they may be able to detect and respond to more abstract types of mental activity in the always-changing future.

Woman authors a book about her life with ALS using an iPhone and a small thumb movement

Living A Life Of Joy 'Until I Say Good-Bye'
Susan Spencer-Wendel knows how to spend a year.

She left her job as an award-winning criminal courts reporter for The Palm Beach Post and went to the Yukon to see the northern lights. Then to Cyprus, to meet family that she never knew. She and her husband, John, took their children on trips on which her daughter got to try on wedding dresses and Susan got kissed by a dolphin.

She also got a new dog, put a splendid hut in her backyard and wrote a book. It's called Until I Say Good-Bye, and it's a memoir about the year she says she wanted to devote to joy — before she's claimed by ALS, a neuromuscular disorder often referred to as Lou Gehrig's disease.

Susan was diagnosed in 2011, at the age of 44. In an interview conducted a few months ago, she tells NPR's Scott Simon that when she got the news, she felt "equally grateful for a life of perfect health and determined to face it with bravery." And a sense of humor, too: Asked how they broke the news to their children, Susan and John say, "Very carefully! Don't overwhelm them with bad news. Just a little bit at a time." John must help speak for his wife, since muscle damage caused by ALS has affected her speech.

She wrote the book — all 89,000 words of it — on an iPhone, using her right thumb, the only finger she had use of. It took three months, and she compares the task to climbing a mountain or finishing a triathlon. "First and foremost, I wrote the book for my family and friends to have, to jog their memories after I'm gone," she says. "I also realized that, as a storyteller, it's an amazing tale of twinning good and bad fortune. Themes that touch everyday people: friendship, devotion and discovery."

Not everything went as planned during Susan's year of joy. She went with her best friend to see the northern lights — but no lights were to be seen. "As you know, life isn't perfect," she laughs. "The lights did not show. Still had a wonderful time there."
For more than a decade, Susan Spencer-Wendel was a reporter for The Palm Beach Post. Courtesy of Susan Spencer-Wendel


For John, it's been an understandably difficult time. "Every day I wake up, and I feel sad. That's my first emotion," he says. "And then I roll over, and I look at Susan, and I realize that she's not allowing herself to feel that way, so I can't, and I don't." Susan adds that she has down moments but is "generally doing pretty darn well."
That upbeat attitude is reflected in the book, which is genuinely funny. She says she didn't want to write something maudlin. "If you think it's funny, then we've made it home," she says.
We checked in with Susan this week. Asked how she's doing, she wrote: "My pat answer is, as well as can be expected. My body and voice become weaker every single day, but my mind becomes mightier and more quiet. You do indeed hear more in silence."