Tag: artificial intelligence

Robots Will Know They’ve Been Blasted With a Shotgun

Light fibers in the silicone foam allow an AI system to detect how it’s being manipulated.

Soft robots could soon be everywhere: the squishy, malleable buggers might lead search and rescue missions, administer medication to specific organs, and maybe even crawl up your butt.

And now, soft robots will know how and when they’ve been bent out of shape — or shot full of holes by Arnold Schwarzenegger.

The trick is to simulate an animal’s peripheral nervous system with a network of fiber optic cables, according to research published Wednesday in the journal Science Robotics.

The Cornell University scientists behind the project hope that the tech could be used to build robots with a sense of whether they’ve been damaged.




Light Show

As the fiber optic cables, encased in a block of smart foam, bend and twist, the pattern and density of the light traveling through them change in specific ways.

But the differences in light among various movements and manipulations are too minute for a human to spot, so the researchers trained a machine learning algorithm to analyze the shifts.

The AI system was trained to track how the light traveling through the fiber optic cables changed based on how researchers bent the foam.

Once it picked up on the patterns, according to the research, the machine learning algorithm could predict the type of bend with 100 percent accuracy — it always knew whether the foam had been bent up, down, left, or right, or which way it had been twisted.

The whole experimental set-up.

From there, the system could estimate how far the foam had been bent or twisted, to within 0.06 degrees.
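To make the idea concrete, here is a minimal sketch of that two-part setup in Python, assuming the foam reports a vector of per-fiber light intensities. The fiber count, the fake data, and the random-forest models are illustrative stand-ins, not the Cornell team's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)

N_FIBERS = 12                    # hypothetical number of optical fibers in the foam
BEND_TYPES = ["up", "down", "left", "right", "twist"]

# Fake training data: each sample is one snapshot of per-fiber light intensities,
# labeled with the type of deformation and its magnitude in degrees.
X = rng.normal(size=(500, N_FIBERS))
y_type = rng.integers(0, len(BEND_TYPES), size=500)
y_magnitude = rng.uniform(0, 45, size=500)

clf = RandomForestClassifier().fit(X, y_type)        # "which way was I bent?"
reg = RandomForestRegressor().fit(X, y_magnitude)    # "and by how much?"

reading = rng.normal(size=(1, N_FIBERS))             # a new snapshot from the foam
print(BEND_TYPES[clf.predict(reading)[0]], reg.predict(reading)[0], "degrees")
```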

Baby Steps

Someday, technology like this fiber optic network might give rise to robots that could teach themselves to walk, the researchers said.

With this new form of high-tech proprioception, the sense that lets us determine where our limbs are in space without looking, futuristic robots may be able to keep track of their own shape, detect when they’ve been damaged, and better understand their surroundings.

Please like, share and tweet this article.

Pass it on: Popular Science

A Brain Implant Brings a Quadriplegic’s Arm Back to Life

Ian Burkhart lifts a peg using his paralyzed right arm, thanks to a machine interface that can read his thoughts and execute them on his behalf.

Ian Burkhart has been a cyborg for two years now. In 2014, scientists at Ohio State’s Neurological Institute implanted a pea-sized microchip into the 24-year-old quadriplegic’s motor cortex.

Its goal: to bypass his damaged spinal cord and, with the help of a signal decoder and electrode-packed sleeve, control his right arm with his thoughts. Cue the transhumanist cheers!

Neuroengineers have been developing these so-called brain-computer interfaces for more than a decade.

They’ve used readings from brain implants to help paralyzed patients play Pong on computer screens and control robotic arms. But Burkhart is the first patient who’s been able to use his implant to control his actual arm.

Over the past 15 months, researchers at the Ohio State University Wexner Medical Center and engineers from Battelle, the medical group that developed the decoder software and electrode sleeve, have helped Burkhart relearn fine motor skills with weekly training sessions.

In a paper in Nature, they describe hooking a cable from the port screwed into Burkhart’s skull (where the chip is) to a computer that translates the brain signals into instructions for the sleeve, which stimulates his muscles into moving his wrist and fingers.

When Burkhart thinks “clench fist,” for example, the implanted electrodes record the activity in his motor cortex.

Those signals are decoded in real-time, jolting his arm muscles in all the right places so that his fingers curl inwards. But he can do more than make a fist: Using the one-of-a-kind system, he’s learned to shred a video game guitar, pour objects from a bottle, and pick up a phone.
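Here is a toy sketch of that decode-and-stimulate loop, assuming the implant delivers short windows of multi-channel activity and that each decoded movement maps to a fixed set of sleeve electrodes. The channel count, movement classes, and linear decoder are assumptions made for illustration, not Battelle's actual system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

N_CHANNELS = 96                  # assumed size of the cortical array
MOVES = ["rest", "clench_fist", "open_hand", "swipe_card"]
STIM_PATTERNS = {                # which sleeve electrodes to drive for each movement
    "rest": [],
    "clench_fist": [3, 4, 9],
    "open_hand": [1, 7],
    "swipe_card": [2, 5, 11],
}

rng = np.random.default_rng(1)
# Pretend calibration data: one feature vector per short window of neural activity.
X_train = rng.normal(size=(400, N_CHANNELS))
y_train = rng.integers(0, len(MOVES), size=400)
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def decode_and_stimulate(window_features):
    """Map one window of cortical features to the list of electrodes to activate."""
    move = MOVES[decoder.predict(window_features.reshape(1, -1))[0]]
    return STIM_PATTERNS[move]

print(decode_and_stimulate(rng.normal(size=N_CHANNELS)))
```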




“Card swiping is the most impressive movement right now,” says Herb Bresler, a senior researcher at Battelle. “It demonstrates fine grip as well as coarse hand movements.”

If Burkhart can swipe credit cards after a year, he might play the piano after five—that’s how long similar chips have lasted—because he and the computer have been learning from each other.

But the implant will stop collecting signals in July when it’s removed, even if the chip is still providing good data, because the clinical trial was structured for a two-year period.

In those two years, the computer trained itself on Burkhart’s thoughts, learning which signals translate to what movements, while he figured out how to make commands more clearly (often with the help of visual cues).

“That’s the real achievement here. We’ve shown we know how to process the data,” says Bresler. “The chip is a limiting factor. We need to work on new ways of collecting brain signals.”

Though similar neuroprosthetics have been helpful in reducing tremors in Parkinson’s patients, they still have a ways to go.

Besides the serious, invasive surgery, there’s always a chance the body will reject an array, blocking any attempts to record and transmit brain signals while ensuring you get patted down at every airport security scanner, forever. “Something will replace this array,” says Bresler.

“Future signal collection devices will cover a larger area of the brain and be less invasive.”

Drawbacks aside, the electrode sleeve and decoding software wouldn’t be where they are today without the array driving them.

With improved collection devices, these products could eventually help stroke victims recover by reteaching their brain to use their limbs, while quadriplegics could mount similar systems on their wheelchairs.

At the very least, the neuroprosthetic experiment suggests that in the future, paralysis might not mean dependence—and that deserves a fist bump.

Please like, share and tweet this article.

Pass it on: Popular Science

How The Media Gets AI Alarmingly Wrong

In June of last year, five researchers at Facebook’s Artificial Intelligence Research unit published an article showing how bots can simulate negotiation-like conversations.

While for the most part the bots were able to maintain coherent dialogue, the researchers found that the software agents would occasionally generate strange sentences like: “Balls have zero to me to me to me to me to me to me to me to.”

On seeing these results, the team realized that they had failed to include a constraint that limited the bots to generating sentences within the parameters of spoken English, meaning that they developed a type of machine-English patois to communicate between themselves.

These findings were considered to be fairly interesting by other experts in the field, but not totally surprising or groundbreaking.

A month after this initial research was released, Fast Company published an article entitled “AI Is Inventing Language Humans Can’t Understand. Should We Stop It?”




The story focused almost entirely on how the bots occasionally diverged from standard English – which was not the main finding of the paper – and reported that after the researchers “realized their bots were chattering in a new language” they decided to pull the plug on the whole experiment, as if the bots were in some way out of control.

The ice of AI’s first winter only fully retreated at the beginning of this decade after a new generation of researchers started publishing papers about successful applications of a technique called “deep learning”.

While this was fundamentally a decades-old statistical technique similar to Rosenblatt’s perceptron, increases in computational power and availability of huge data sets meant that deep learning was becoming practical for tasks such as speech recognition, image recognition and language translation.
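For reference, Rosenblatt's perceptron really is small enough to fit in a few lines. The sketch below learns the logical OR function with the classic update rule; it is the decades-old building block the article alludes to, not a modern deep network.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])                      # logical OR

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(20):                             # a few passes over the data
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)
        w += lr * (yi - pred) * xi              # classic perceptron update
        b += lr * (yi - pred)

print([int(w @ xi + b > 0) for xi in X])        # -> [0, 1, 1, 1]
```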

As reports of deep learning’s “unreasonable effectiveness” circulated among researchers, enrollments at universities in machine-learning classes surged, corporations started to invest billions of dollars to find talent familiar with the newest techniques, and countless startups attempting to apply AI to transport or medicine or finance were founded.

As this resurgence got under way, AI hype in the media resumed after a long hiatus.

In 2013, John Markoff wrote a feature in the New York Times about deep learning and neural networks with the headline “Brainlike Computers, Learning From Experience”.

Not only did the title recall the media hype of 60 years earlier, so did some of the article’s assertions about what was being made possible by the new technology.

Since then, far more melodramatic and overblown articles about “AI apocalypse”, “artificial brains”, “artificial superintelligence” and “creepy Facebook bot AIs” have filled the news feed daily.

Please like, share and tweet this article.

Pass it on: Popular Science

Your Child’s New Robot Buddy Makes Reading A Lot More Fun

Children are more enthusiastic about reading aloud when they do so to a specially designed robot, psychologists have found.

A team at the University of Wisconsin-Madison built the robot, named Minnie, as a “reading buddy” to children aged 10 to 12.

Over two weeks children became more excited about books and attached to the robot. “After one interaction the kids were generally telling us that it was nice to have someone to read with,” Joseph Michaelis, who led the study, said.




“By the end of two weeks they’re talking about how the robot was funny and silly and how they’d come home looking forward to seeing it.”

The research, published in the journal Science Robotics, is the latest in the effort to design machines that may augment learning or provide companionship.

In America, a robot with artificial intelligence acts as a “social mediator” for autistic children, allowing them to communicate with the wider world.

Minnie stayed with study subjects for two weeks.

Minnie, which is 33cm (13in) high, tracked the children’s progress in reading and every few pages reacted with a programmed comment. During a frightening chapter, for instance, it could say: “Oh, wow, I’m really scared.”

It also recommended books, taking into account ability and interests, and most of the children said that it made good choices.
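The two behaviours described above, piping up every few pages and matching books to interests, can be illustrated with a few lines of rule-based Python. The comments, book list, and matching rule below are made up for the example; they are not the Wisconsin team's actual system.

```python
COMMENTS = {"scary": "Oh, wow, I'm really scared.", "funny": "Ha! That part made me laugh."}
BOOKS = [("Ghost Hollow", {"scary"}), ("Robot Jokes", {"funny"}), ("Deep Sea Atlas", {"science"})]

def react(page, chapter_mood, every=5):
    """Speak a canned line every `every` pages, matched to the chapter's mood."""
    return COMMENTS.get(chapter_mood) if page % every == 0 else None

def recommend(child_interests):
    """Pick the book whose tags overlap most with the child's interests."""
    return max(BOOKS, key=lambda book: len(book[1] & child_interests))[0]

print(react(10, "scary"))               # -> Oh, wow, I'm really scared.
print(recommend({"funny", "science"}))  # -> Robot Jokes (ties go to the first listed book)
```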

Please like, share and tweet this article.

Pass it on: Popular Science

Facebook And NYU Want To Use AI To Make MRI Exams 10 Times Faster

MRI scans may someday be available to a lot more people in need.

Facebook on Monday said it’s teaming up with NYU School of Medicine’s Department of Radiology to launch “fastMRI,” a collaborative research project that aims to use artificial intelligence to make MRI (magnetic resonance imaging) scans 10 times faster.

Doctors and radiologists use MRI scanners to produce images that show in detail a patient’s organs, blood vessels, bones, soft tissues and such, which helps doctors diagnose problems.

However, completing an MRI scan can take from 15 minutes to over an hour, according to Facebook’s blog post.

That’s challenging for children and patients in a lot of pain, who can’t lie still for a long time. It also limits how many scans the hospital can do in a day.




If the project succeeds, MRI scans could be completed in about five minutes, thus making time for more people in need to receive scans.

The idea is to actually capture less data during MRI scans, making them faster, and then use AI to “fill in views omitted from the accelerated scan,” Facebook said in its blog post. The challenge is doing this without missing any important details.

Facebook Artificial Intelligence Research, or FAIR, will work with NYU medical researchers to train artificial neural networks to recognize the structures of the human body.

The project will use image data from 10,000 clinical cases with roughly 3 million MR images of the knee, brain and liver. Patients’ names and medical information aren’t included.
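To see what "capturing less data" means in practice, here is a rough numpy sketch, assuming the scanner's raw measurements live in k-space (the Fourier domain of the image). The stand-in image and sampling mask are made up, and the zero-filled reconstruction at the end is exactly the naive baseline that fastMRI's trained networks are meant to beat.

```python
import numpy as np

image = np.random.rand(256, 256)               # stand-in for a fully sampled scan
kspace = np.fft.fftshift(np.fft.fft2(image))   # what the scanner actually measures

mask = np.zeros_like(kspace, dtype=bool)
mask[:, ::4] = True                            # keep only every 4th column of k-space
mask[:, 112:144] = True                        # plus a fully sampled centre band

undersampled = np.where(mask, kspace, 0)       # the "accelerated" acquisition
zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))

print("fraction of k-space kept:", mask.mean())   # the speed-up is roughly its inverse
# A trained model would take the undersampled data (or this blurry zero-filled
# reconstruction) as input and predict the detail that the naive version lacks.
```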

“We hope one day that, because of this project, MRI will be able to replace x-rays for many applications, also leading to decreased radiation exposure to patients,” said Michael Recht, MD, chair of the department of radiology at NYU School of Medicine, in an email statement.

“Our collaboration is one between academia and industry in which we can leverage our complementary strengths to achieve a real-world result.”

Please like, share and tweet this article.

Pass it on: Popular Science

Aliens May Actually Be Billion-Year-Old Robots

This could ruin a lot of good science fiction movies … and create interesting plots for the next generation of them, not to mention influencing how humans deal with space aliens when they first encounter each other.

A timely article by The Daily Galaxy reviews the study "Alien Minds" by Susan Schneider, in which the professor and author discusses her theory that our first meeting with an extraterrestrial will be with a billion-year-old robot. Wait, what?

“I do not believe that most advanced alien civilizations will be biological. The most sophisticated civilizations will be postbiological, forms of artificial intelligence or alien superintelligence.”

Susan Schneider is an associate professor in the Department of Philosophy and the Cognitive Science Program at the University of Connecticut.

“Alien Minds” has been presented at NASA and the 2016 IdeaFestival in Kentucky and was published in The Impact of Discovering Life Beyond Earth.




It is her response to the question: “How would intelligent aliens think? Would they have conscious experiences? Would it feel a certain way to be an alien?”

“I actually think the first discovery of life on other planets will probably involve microbial life; I am concentrating on intelligent life in my work on this topic though. I only claim that the most advanced civilizations will likely be post biological.”

Schneider’s theory is based on three components or “observations.”

In her “short window observation,” she presents the idea that a civilization or species that can conquer long-distance space travel is already very close to moving from biological to artificially-intelligent beings.

An example of this “short window” is the relatively brief 120 years it took humans to go from the first radio signals to cell phones.

Some of those species will be much older than us, which is Schneider’s “the greater age of alien civilizations” observation – one accepted by many.

And not just a few generations older but billions of years beyond us, making them far more advanced and intelligent. How much more?

Schneider’s last observation is that any species that can travel to Earth will be intelligent enough to develop robots that they can upload their brains to.

The robots would probably be silicon-based for speed of ‘thinking’ and durability, making them nearly immortal.

Please like, share and tweet this article.

Pass it on: Popular Science

Look Out, Wiki-Geeks. Now Google Trains AI To Write Wikipedia Articles

A team within Google Brain – the web giant’s crack machine-learning research lab – has taught software to generate Wikipedia-style articles by summarizing information on web pages… to varying degrees of success.

As we all know, the internet is a never-ending pile of articles, social media posts, memes, joy, hate, and blogs. It’s impossible to read and keep up with everything.

Using AI to tell pictures of dogs and cats apart is cute and all, but if such computers could condense information down into useful snippets, that would really be handy. It’s not easy, though.

A paper, out last month and just accepted for this year’s International Conference on Learning Representations (ICLR) in April, describes just how difficult text summarization really is.




A few companies have had a crack at it. Salesforce trained a recurrent neural network with reinforcement learning to take information and retell it in a nutshell, and the results weren’t bad.

However, the computer-generated sentences were simple and short; they lacked the creative flair and rhythm of text written by humans. Google Brain’s latest effort is slightly better: the sentences are longer and seem more natural.

Here’s an example for the topic: Wings over Kansas, an aviation website for pilots and hobbyists.

The paragraph on the left is a computer-generated summary of the organization, and the one on the right is taken from the Wikipedia page on the subject.

The model works by taking the top ten web pages of a given subject – excluding the Wikipedia entry – or scraping information from the links in the references section of a Wikipedia article.

Most of the selected pages are used for training, and a few are kept back to develop and test the system.
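The first, extractive stage of such a pipeline can be sketched in a few lines: rank the sentences pulled from the scraped pages by salience and keep the top few as input for an abstractive model. The toy documents and the TF-IDF scoring below are illustrative; they are not Google Brain's actual method.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

pages = [
    "Wings over Kansas is an aviation website founded in 1998 by Carl Chance.",
    "The site features news, interviews and videos for pilots and hobbyists.",
    "Aviation enthusiasts visit many websites. The weather in Kansas is changeable.",
]
sentences = [s.strip() for page in pages for s in page.split(".") if s.strip()]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
scores = np.asarray(tfidf.sum(axis=1)).ravel()      # one salience score per sentence

top = sorted(zip(scores, sentences), reverse=True)[:2]
extractive_summary = ". ".join(s for _, s in top) + "."
print(extractive_summary)   # this condensed text is what an abstractive model would rewrite
```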

Also, since it relies on the top ten web results for any particular topic, if those sites aren’t particularly credible, the resulting handiwork probably won’t be very accurate either.

You can’t trust everything you read online, of course.

Please like, share and tweet this article.

Pass it on: Popular Science

Google’s AI Sounds Like A Human On The Phone

It came as a total surprise: the most impressive demonstration at Google’s I/O conference yesterday was a phone call to book a haircut. Of course, this was a phone call with a difference.

It wasn’t made by a human, but by the Google Assistant, which did an uncannily good job of asking the right questions, pausing in the right places, and even throwing in the odd “mmhmm” for realism.

The crowd was shocked, but the most impressive thing was that the person on the receiving end of the call didn’t seem to suspect they were talking to an AI.

It’s a huge technological achievement for Google, but it also opens up a Pandora’s box of ethical and social challenges.




For example, does Google have an obligation to tell people they’re talking to a machine? Does technology that mimics humans erode our trust in what we see and hear?

And is this another example of tech privilege, where those in the know can offload boring conversations they don’t want to have to a machine, while those receiving the calls have to deal with some idiot robot?

In other words, it was a typical Google demo: equal parts wonder and worry.

Many experts working in this area agree, although how exactly you would tell someone they’re speaking to an AI is a tricky question.

If the Assistant starts its calls by saying “hello, I’m a robot” then the receiver is likely to hang up. More subtle indicators could mean limiting the realism of the AI’s voice or including a special tone during calls.

Google tells The Verge it hopes a set of social norms will organically evolve that make it clear when the caller is an AI.

Please like, share and tweet this article.

Pass it on: Popular Science

Facebook Researchers Use AI To Fix Blinking In Photos

Facebook seems to be going all in on building a strong AI-footprint for its platform and now the social media giant has published a new study which focuses on some AI-based tools that fix selfies ruined by blinking.

There are times when we try to take the perfect selfie and while the whole frame turns out exactly as we wanted, our eyes blink in between and ruin the picture.

This is where the new AI-based Facebook tool comes in, as it can replace your closed eyes with an open pair just by studying and analysing your previous pictures.

The idea of opening closed eyes in a portrait isn’t a new one; however, the usual process involves pulling the source material from another photo directly and then transplanting it onto the blinking face.

Adobe has a similar but far simpler approach in its Photoshop Elements software, which has a mode built for this purpose.




When you use Photoshop Elements, the program prompts you to pick another photo from the same session in which the same person’s eyes are open, since it assumes that you took more than one.

It then uses Adobe’s AI tech, Sensei, to try to blend the open eyes from the previous image directly into the shot with the blink.

The Facebook AI tool is somewhat similar to this, though there are small details, such as specific lighting conditions or the direction of shadows, that Adobe’s software can’t always get right.

The Facebook Research method, on the other hand, replaces closed eyes with open ones using a deep neural network that supplies the missing data based on the context of the area around the closed eyes.

This is done by a generative adversarial network (GAN), the same technology used in deepfake videos in which one person’s face is swapped onto another person’s body.

The GAN then uses data points from other images of the same person as a reference to fill in the missing data.

The Facebook AI tool also uses identifying marks to help generate the substitute data.

After that, a process called in-painting generates the data needed to replace closed eyelids with actual eyes. This is where the hard work of the GAN system comes into play: it needs more than one image of the person to use as a reference, so that it doesn’t miss any detail.
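Here is a condensed sketch of what GAN-based in-painting looks like in code, assuming the generator sees the masked face, the mask itself, and a feature vector summarising reference photos of the same person. The layer sizes, losses, and tensor shapes are generic stand-ins rather than the architecture from Facebook's paper.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # input: 3 image channels + 1 mask channel + 8 broadcast reference features
        self.net = nn.Sequential(
            nn.Conv2d(12, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, masked_img, mask, ref_feats):
        ref_map = ref_feats[:, :, None, None].expand(-1, -1, *masked_img.shape[2:])
        filled = self.net(torch.cat([masked_img, mask, ref_map], dim=1))
        # only the masked (eye) region is replaced; the rest of the photo is kept
        return masked_img * (1 - mask) + filled * mask

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, img):
        return self.net(img).mean(dim=(1, 2, 3))   # one realism score per image

G, D = Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()

# one illustrative training step on random tensors standing in for a real batch
real = torch.rand(4, 3, 64, 64)
mask = torch.zeros(4, 1, 64, 64)
mask[:, :, 20:32, 16:48] = 1.0                     # assumed eye region
ref = torch.rand(4, 8)                             # features from reference photos
fake = G(real * (1 - mask), mask, ref)

d_loss = bce(D(real), torch.ones(4)) + bce(D(fake.detach()), torch.zeros(4))
g_loss = bce(D(fake), torch.ones(4)) + (fake - real).abs().mean()  # adversarial + reconstruction
print(float(d_loss), float(g_loss))
```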

When will Facebook introduce this tool to the masses? Will it be launched as a new feature for its social media platforms?

Questions like these are many; however, the Facebook AI tool is still in its development stages, and only time will tell what kind of revolution the GAN system will bring to the world of selfies.

Please like, share and tweet this article.

Pass it on: Popular Science

MIT’s New AI Can See Through Walls

MIT has given a computer x-ray vision, but it didn’t need x-rays to do it. The system, known as RF-Pose, uses a neural network and radio signals to track people through an environment and generate wireframe models in real time.

It doesn’t even need to have a direct line of sight to know how someone is walking, sitting, or waving their arms on the other side of a wall.

Neural networks have shown up in a lot of research lately when researchers need to create a better speech synthesis model, smarter computer vision, or an AI psychopath.

To train a neural network to do any of these things, you need an extensive data set of pre-labeled items.

That usually means using humans to do the labeling, which is simple enough when you’re trying to make an AI that can identify images of cats.




RF-Pose is based on radio waves, and those are much harder for humans to label in a way that makes sense to computers.

The MIT researchers decided to collect examples of people walking with both wireless signal pings and cameras.

The camera footage was processed to generate stick figures in place of the people, and the team matched that data up with the radio waves.

That combined data is what researchers used to train the neural network. With a strong association between the stick figures and RF data, the system is able to create stick figures based on radio wave reflections.
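That cross-modal trick, using camera-derived stick figures as labels for a network that only ever sees radio data, can be sketched as follows. The pose-estimator stand-in, network sizes, and tensor shapes are placeholders, not MIT's RF-Pose model.

```python
import torch
import torch.nn as nn

N_KEYPOINTS = 14                      # stick-figure joints; an assumption

def camera_pose_estimator(frames):
    """Stand-in for an off-the-shelf vision model: (batch, 3, H, W) -> (batch, 14, 2)."""
    return torch.rand(frames.shape[0], N_KEYPOINTS, 2)

rf_net = nn.Sequential(               # maps an RF snapshot to the same keypoints
    nn.Flatten(),
    nn.Linear(2 * 64 * 64, 256), nn.ReLU(),
    nn.Linear(256, N_KEYPOINTS * 2),
)
opt = torch.optim.Adam(rf_net.parameters(), lr=1e-3)

for step in range(100):               # toy training loop on random "recordings"
    frames = torch.rand(8, 3, 128, 128)          # synchronized camera frames
    rf = torch.rand(8, 2, 64, 64)                # paired radio reflections
    labels = camera_pose_estimator(frames)       # stick figures used as supervision
    pred = rf_net(rf).view(8, N_KEYPOINTS, 2)
    loss = nn.functional.mse_loss(pred, labels)
    opt.zero_grad(); loss.backward(); opt.step()

# At deployment only rf_net is needed, which is why the system can keep working
# when a wall blocks the camera's view but not the radio waves.
```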

Interestingly, the camera can’t see through walls. So, the system was never explicitly trained in identifying people on the other side of a barrier.

It just works because the radio waves bounce off a person on the other side of a wall just like they do in the same room. This even works with multiple people crossing paths.

The team noted that all subjects in the study consented to have their movements tracked by the AI.

In the real world, there are clear privacy implications. It’s possible a future version of the technology could be configured only to track someone after they perform a specific movement to activate the system and “opt-in.”

As for applications, it’s not just about spying on you through walls. The MIT team suggests RF-Pose could be of use in the medical field where it could track and analyze the way patients with muscle and nerve disorders get around.

It could also enable motion capture in video games — like Kinect but good.

Please like, share and tweet this article.

Pass it on: Popular Science