Tag: Neural networks

A Brain Implant Brings a Quadriplegic’s Arm Back to Life

Ian Burkhart lifts a peg using his paralyzed right arm, thanks to a machine interface that can read his thoughts and execute them on his behalf.

Ian Burkhart has been a cyborg for two years now. In 2014, scientists at Ohio State’s Neurological Institute implanted a pea-sized microchip into the 24-year-old quadriplegic’s motor cortex.

Its goal: to bypass his damaged spinal cord and, with the help of a signal decoder and electrode-packed sleeve, control his right arm with his thoughts. Cue the transhumanist cheers!

Neuroengineers have been developing these so-called brain-computer interfaces for more than a decade.

They’ve used readings from brain implants to help paralyzed patients play Pong on computer screens and control robotic arms. But Burkhart is the first patient who’s been able to use his implant to control his *actual* arm.

Over the past 15 months, researchers at the Ohio State University Wexner Medical Center and engineers from Battelle, the medical group that developed the decoder software and electrode sleeve, have helped Burkhart relearn fine motor skills with weekly training sessions.

In a paper in *Nature*, they describe hooking a cable from the port screwed into Burkhart’s skull (where the chip is) to a computer that translates the brain signals into instructions for the sleeve, which stimulates his muscles into moving his wrist and fingers.

When Burkhart thinks “clench fist,” for example, the implanted electrodes record the activity in his motor cortex.

Those signals are decoded in real-time, jolting his arm muscles in all the right places so that his fingers curl inwards. But he can do more than make a fist: Using the one-of-a-kind system, he’s learned to shred a video game guitar, pour objects from a bottle, and pick up a phone.
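
In spirit, that decoding step is a pattern-matching loop: take a short window of recordings from the implanted array, reduce it to features, guess the intended movement, and drive the sleeve accordingly. Here is a minimal Python sketch of what such a loop could look like; the channel count, the power features, the linear decoder and the function names are all illustrative assumptions, not Battelle’s actual software.

```python
import numpy as np

# Hypothetical decode-and-stimulate loop. Channel counts, features and the
# linear decoder are illustrative assumptions, not Battelle's real system.

N_CHANNELS = 96  # Utah-style cortical arrays commonly have 96 electrodes
MOVEMENTS = ["rest", "clench_fist", "extend_wrist"]

def extract_features(window: np.ndarray) -> np.ndarray:
    """Reduce a (channels, samples) window of cortical activity to one
    feature per channel; here, simply the mean signal power."""
    return (window ** 2).mean(axis=1)

def decode(features: np.ndarray, weights: np.ndarray) -> str:
    """Linear decoder: score every movement class, return the best one."""
    return MOVEMENTS[int(np.argmax(weights @ features))]

def stimulate(movement: str) -> None:
    """Stand-in for driving the electrode sleeve's stimulation pattern."""
    print(f"sleeve: applying pattern for '{movement}'")

# Toy run: in reality the weights come from training sessions and the
# window streams continuously from the implant.
weights = np.random.randn(len(MOVEMENTS), N_CHANNELS)
window = np.random.randn(N_CHANNELS, 100)
stimulate(decode(extract_features(window), weights))
```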

“Card swiping is the most impressive movement right now,” says Herb Bresler, a senior researcher at Battelle. “It demonstrates fine grip as well as coarse hand movements.”

If Burkhart can swipe credit cards after a year, he might play the piano after five—that’s how long similar chips have lasted—because he and the computer have been learning from each other.

But the implant will stop collecting signals in July when it’s removed, even if the chip is still providing good data, because the clinical trial was structured for a two-year period.

In those two years, the computer trained itself on Burkhart’s thoughts, learning which signals translate to what movements, while he figured out how to make commands more clearly (often with the help of visual cues).
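
That co-training can be pictured as ordinary supervised learning: during cued sessions, windows of neural activity recorded while Burkhart imagines a prompted movement become labeled examples for a classifier. The scikit-learn sketch below uses random stand-in data purely to show the shape of the problem; the real features and decoder are not public.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy calibration sketch: each row is a feature vector from one window of
# cortical activity, each label the movement cue shown at that moment.
# Random stand-in data; real sessions would use actual recordings.
rng = np.random.default_rng(1)
n_windows, n_channels = 600, 96
X = rng.normal(size=(n_windows, n_channels))  # features per window
y = rng.integers(0, 3, n_windows)             # cued movement labels (3 classes)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))  # meaningless on random data
```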

“That’s the real achievement here. We’ve shown we know how to process the data,” says Bresler. “The chip is a limiting factor. We need to work on new ways of collecting brain signals.”

Though similar neuroprosthetics have been helpful in reducing tremors in Parkinson’s patients, they still have a ways to go.

Besides the serious, invasive surgery, there’s always a chance the body will reject an array, blocking any attempts to record and transmit brain signals while ensuring you get patted down at every airport security scanner, forever. “Something will replace this array,” says Bresler.

“Future signal collection devices will cover a larger area of the brain and be less invasive.”

Drawbacks aside, the electrode sleeve and decoding software wouldn’t be where they are today without the array driving them.

With improved collection devices, these products could eventually help stroke victims recover by reteaching their brain to use their limbs, while quadriplegics could mount similar systems on their wheelchairs.

At the very least, the neuroprosthetic experiment suggests that in the future, paralysis might not mean dependence—and that deserves a fist bump.


Facebook Researchers Use AI To Fix Blinking In Photos

Facebook seems to be going all in on building a strong AI footprint for its platform, and the social media giant has now published a new study on AI-based tools that fix selfies ruined by blinking.

There are times when we try to take the perfect selfie and, just as the whole frame turns out exactly as we wanted, a blink ruins the picture.

This is where Facebook’s new AI-based tool comes in: it can replace your closed eyes with an open pair by studying and analysing your previous pictures.

The idea of opening closed eyes in a portrait isn’t new, but the usual process involves pulling source material directly from another photo and transplanting it onto the blinking face.

Adobe has similar but much simpler software, Photoshop Elements, which includes a mode built for this purpose.

When you use Photoshop Elements, the program prompts you to pick another photo from the same session (it assumes you took more than one) in which the same person’s eyes are open.

It then uses Adobe’s AI technology, Sensei, to blend the open eyes from that image into the shot with the blink.

The Facebook AI tool is broadly similar, but there are small details that Adobe’s software can’t always get right, such as specific lighting conditions or the direction of shadows.

The Facebook Research methodology, by contrast, makes the same process of replacing closed eyes with open ones depend on a deep neural network that supplies the missing data using the context of the area around the closed eyes.


This is done by a generative adversarial network (GAN), the same kind of technology used in deepfake videos, where one person’s face is swapped onto another person’s body.

The GAN then uses data points from other images of the same person as references to fill in the missing data.

The Facebook AI tool also uses identifying marks to help generate the substitute data.

After that, a process called in-painting generates the data needed to replace the closed eyelids with actual eyes. This is where the GAN system’s hard work comes in: it needs more than one image of the person to use as a reference, so that no detail is missed.
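
To make the idea concrete, here is a heavily simplified PyTorch sketch of GAN-based eye in-painting: a generator fills only the masked eye region, conditioned on the surrounding context and a reference photo of the same person, while a discriminator pushes the result toward realism. Every layer size, the 64x64 images and the training data are invented for illustration; this is not Facebook’s published architecture.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Fills the masked region given context plus a reference photo."""
    def __init__(self):
        super().__init__()
        # Input channels: masked photo (3) + mask (1) + reference photo (3)
        self.net = nn.Sequential(
            nn.Conv2d(7, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, masked, mask, reference):
        fill = self.net(torch.cat([masked, mask, reference], dim=1))
        # Replace only the eye region; leave the rest of the photo intact.
        return masked * (1 - mask) + fill * mask

class Discriminator(nn.Module):
    """Judges whether a face crop looks real."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.LazyLinear(1),
        )

    def forward(self, img):
        return self.net(img)

# One illustrative training step on random 64x64 stand-in "photos".
G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(4, 3, 64, 64)                  # open-eye photos
mask = torch.zeros(4, 1, 64, 64)
mask[:, :, 20:32, 16:48] = 1                     # the eye region
reference = torch.rand(4, 3, 64, 64)             # other photos of the person
masked = real * (1 - mask)                       # the "blink" input

fake = G(masked, mask, reference)
loss_d = bce(D(real), torch.ones(4, 1)) + bce(D(fake.detach()), torch.zeros(4, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

loss_g = bce(D(fake), torch.ones(4, 1))          # try to fool the critic
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```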

When will Facebook introduce this tool to the masses? Will it be launched as a new feature for its social media platforms?

Questions like these abound, but the Facebook AI tool is still in its development stages, and only time will tell what kind of revolution the GAN system will bring to the world of selfies.


Google Discovers New Planet Which Proves Solar System Is Not Unique

The Kepler-90 star system has eight planets, like our own

Google has previously discovered lost tribes, missing ships and even a forgotten forest. But now it has also found two entire planets.

The technology giant used one of its algorithms to sift through thousands of signals sent back to Earth by Nasa’s Kepler space telescope.

One of the new planets was found hiding in the Kepler-90 star system, which is around 2,200 light years away from Earth.

The discovery is important because it takes the number of planets in the star system up to eight, the same as our own Solar System. It is the first time that any system has been found to have as many planets as ours.

Andrew Vanderburg, astronomer and Nasa Sagan Postdoctoral Fellow at The University of Texas, Austin, said: “The Kepler-90 star system is like a mini version of our solar system.

“You have small planets inside and big planets outside, but everything is scrunched in much closer.

“There is a lot of unexplored real estate in Kepler-90 system and it would almost be surprising if there were not more planets in the system.”

The planet Kepler-90i is a small rocky planet that orbits so close to its star that the surface temperature is a ‘scorchingly hot’ 800F (426C). It orbits its sun once every 14 days.

The Google team applied a neural network to weak signals recorded by the Kepler exoplanet-hunting telescope that had been missed by human analysts.

Kepler has already discovered more than 2,500 exoplanets, with more than 1,000 further candidates suspected.

The telescope spent four years scanning 150,000 stars looking for dips in their brightness which might suggest an orbiting planet was passing in front.
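
Google’s actual system is a neural network trained on labeled examples, but the raw signal it hunts for is easy to picture: a tiny, periodic dip in a star’s brightness. The toy numpy sketch below simulates such a light curve with invented numbers and flags the dips with a simple threshold; a real search replaces the threshold with the trained network.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(2000)                          # observation cadence (arbitrary)
flux = 1.0 + rng.normal(0, 0.0004, t.size)   # noisy baseline brightness

period, width, depth = 140, 6, 0.002         # invented transit parameters
flux[(t % period) < width] -= depth          # brightness dips each "orbit"

# Flag points sitting well below the typical brightness level.
dips = flux < np.median(flux) - 3 * np.std(flux)
print(f"{dips.sum()} in-transit points flagged out of {t.size}")
```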

Although the observation mission ended in 2013, the spacecraft recorded so much data during its four-year mission that scientists expect to be crunching it for many years to come.

The new planet Kepler-90i is about 30 per cent larger than Earth and very hot.

Christopher Shallue, senior software engineer at Google AI in Mountain View, California, who made the discovery, said the algorithm was so simple that it only took two hours to train to spot exoplanets.

In tests, the neural network correctly distinguished true planets from false positives 96 per cent of the time. The team has promised to release all of the code so that amateurs can train computers to hunt for exoplanets of their own.

“Machine learning will become increasingly important for keeping pace with all this data and will help us make more discoveries than ever before,” said Mr Shallue.

“This is a really exciting discovery and a successful proof of concept in using neural networks to find planets, even in challenging situations where signals are very weak.

“We plan to search all 150,000 stars. We hope that, using our technique, we will be able to find lots of planets, including planets like Earth.”


Google’s Neural Network Is A Multi-Tasking Pro That Can Tackle Eight Tasks At One Time

Neural networks have been trained to complete a number of different tasks including generating pickup lines, adding animation to video games, and guiding robots to grab objects.

But for the most part, these systems are limited to doing one task really well. Trying to train a neural network to do an additional task usually makes it much worse at the first one.

However, Google just created a system that tackled eight tasks at one time and managed to do all of them pretty well.

The company’s multi-tasking machine learning system, called MultiModel, was able to learn how to detect objects in images, provide captions, recognize speech, translate between four pairs of languages, and parse grammar and syntax. And it did all of that simultaneously.

The system was modeled after the human brain. Different components of a situation, like visual and sound input, are processed in different areas of the brain, but all of that information comes together so a person can comprehend it in its entirety and respond in whatever way is necessary.

Similarly, MultiModel has small sub-networks for audio, images and text that are connected to a central network, as sketched below.
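
In code, that wiring can be pictured as a set of small per-modality encoders feeding one shared trunk, with task-specific heads on the output side. The PyTorch sketch below only illustrates the routing idea; the layer choices, sizes and task list are assumptions, not Google’s actual MultiModel architecture.

```python
import torch
import torch.nn as nn

DIM = 128  # width of the shared representation

# Small modality-specific sub-networks map each input type into DIM.
encoders = nn.ModuleDict({
    "image": nn.Sequential(nn.Flatten(), nn.LazyLinear(DIM)),
    "audio": nn.Sequential(nn.Flatten(), nn.LazyLinear(DIM)),
    "text":  nn.EmbeddingBag(10_000, DIM),   # bag-of-tokens encoder
})

# One central trunk shared by every task.
shared_body = nn.Sequential(
    nn.Linear(DIM, DIM), nn.ReLU(),
    nn.Linear(DIM, DIM), nn.ReLU(),
)

# Task-specific output heads.
heads = nn.ModuleDict({
    "classify":  nn.Linear(DIM, 1000),    # image classes
    "caption":   nn.Linear(DIM, 10_000),  # caption-token logits
    "translate": nn.Linear(DIM, 10_000),  # target-language tokens
})

def forward(modality: str, task: str, x: torch.Tensor) -> torch.Tensor:
    """Route any input through its encoder, the shared trunk, and a head."""
    return heads[task](shared_body(encoders[modality](x)))

# An image and a sentence flow through the very same trunk.
img_logits = forward("image", "classify", torch.rand(2, 3, 32, 32))
txt_logits = forward("text", "translate", torch.randint(0, 10_000, (2, 7)))
print(img_logits.shape, txt_logits.shape)
```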


The network’s performance wasn’t perfect and isn’t yet on par with that of networks that handle just one of these tasks. But there were some interesting outcomes.

The separate tasks didn’t hinder one another’s performance, and in some cases they actually improved it.

In a blog post the company said, “It is not only possible to achieve good performance while training jointly on multiple tasks, but on tasks with limited quantities of data, the performance actually improves. To our surprise, this happens even if the tasks come from different domains that would appear to have little in common, e.g., an image recognition task can improve performance on a language task.”
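
Continuing the hypothetical sketch above, “training jointly” simply means interleaving batches from different tasks, so every update flows through the shared trunk; that shared pathway is how a task with little data can borrow statistical strength from a data-rich one. This loop reuses the forward(), encoders, shared_body and heads defined earlier, again on random stand-in data.

```python
import random

# One dummy pass initializes the lazy layers before building the optimizer.
_ = forward("image", "classify", torch.rand(1, 3, 32, 32))
params = (list(shared_body.parameters()) + list(heads.parameters())
          + list(encoders["image"].parameters())
          + list(encoders["text"].parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    # Interleave tasks at random; every step updates the shared trunk.
    modality, task = random.choice([("image", "classify"), ("text", "translate")])
    if modality == "image":
        x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 1000, (8,))
    else:
        x, y = torch.randint(0, 10_000, (8, 7)), torch.randint(0, 10_000, (8,))
    loss = loss_fn(forward(modality, task, x), y)
    opt.zero_grad(); loss.backward(); opt.step()
```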

MultiModel is still being developed, and Google has open-sourced it as part of its Tensor2Tensor library.
