Tag: artificial intelligence

This Robot Dog Can Recover From a Vicious Kick Using Artificial Intelligence

Researchers at ETH Zurich in Switzerland taught a four-legged robot dog a valuable life skill: how to get up again after it gets knocked down. And yes, it involved evil scientists kicking and shoving an innocent robot.

The researchers used an AI model to teach ANYmal, a doglike robot made by ANYbotics, how to right itself after being knocked onto its side or back in a variety of physical environments — as opposed to giving the robot a detailed set of instructions for only one specific environment.




But It Gets Up Again

The results were published today in a paper in Science Robotics. In simple terms, the robot tried again and again to right itself in simulation, and learned from the instances when a movement didn’t end up righting it.

It then took what it learned and applied it to the real world.
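The actual controller was trained with deep reinforcement learning inside a detailed physics simulation of ANYmal, which is well beyond a blog post. As a purely illustrative sketch of the trial-and-error idea, though, a toy random-search loop over made-up policy parameters might look like the following; the `simulate_recovery` reward function is invented for this example and stands in for the real simulator.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_recovery(policy_params: np.ndarray) -> float:
    """Hypothetical stand-in for the physics simulator: returns a reward that is
    higher the closer the (toy) robot ends up to an upright pose. The real work
    uses a full rigid-body simulation of the ANYmal robot."""
    target = np.linspace(-1.0, 1.0, policy_params.size)  # arbitrary "upright" pose
    return -float(np.sum((policy_params - target) ** 2))

# Trial-and-error in simulation: keep whatever perturbation improves the reward.
params = rng.normal(size=8)              # toy policy parameters
best_reward = simulate_recovery(params)

for trial in range(2000):
    candidate = params + 0.1 * rng.normal(size=params.size)
    reward = simulate_recovery(candidate)
    if reward > best_reward:             # learn from the attempts that worked better
        params, best_reward = candidate, reward

print(f"best simulated recovery reward: {best_reward:.4f}")
```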

It even learned how to run faster than it could before. Thanks to the neural network, ANYmal was also able to reach 1.5 metres per second, or just over three mph, within mere hours, according to a report.

Never Gonna Keep It Down

Are we inching closer to a future where robot guard dogs chase us down to exact revenge on us, as seen on Netflix’s Black Mirror? Sure looks like it.

So perhaps it’s time to stop kicking robot dogs — before we know it, they’ll start learning how to protect themselves.

Please like, share and tweet this article.

Pass it on: New Scientist

New AI Can Spot Tell-Tale Signs Of A Genetic Disorder By Scanning People’s Faces

An artificially intelligent computer program has been used to identify rare genetic diseases by studying photos of faces.

In the experiment, the AI system out-performed human experts attempting the same task.

The face analysis program, known as DeepGestalt, could in future assist the diagnosis of rare genetic syndromes, say researchers.

At the same time they warn that safeguards are needed to prevent abuse of the technology.

Easily accessible portrait photos could, for instance, enable potential employers to discriminate against individuals with ‘at risk’ facial features.

Study co-author Dr Karen Gripp, from the US company FDNA which developed the program, said: “This is a long-awaited breakthrough in medical genetics that has finally come to fruition.

“With this study, we’ve shown that adding an automated facial analysis framework, such as DeepGestalt, to the clinical workflow can help achieve earlier diagnosis and treatment, and promise an improved quality of life.”




The team trained the “deep learning” software using more than 17,000 facial images of patients with more than 200 different genetic disorders.

In subsequent tests DeepGestalt successfully included the correct syndrome in its top 10 list of suggestions 91 percent of the time.
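For readers wondering exactly what that 91 percent figure measures, here is a minimal, hypothetical sketch of how top-10 accuracy is computed from a model’s per-syndrome scores. It is not FDNA’s code, and the random scores and labels are stand-ins for real model output.

```python
import numpy as np

def top_k_accuracy(scores: np.ndarray, true_labels: np.ndarray, k: int = 10) -> float:
    """Fraction of cases where the true syndrome appears among the k
    highest-scoring suggestions. `scores` has shape (n_cases, n_syndromes)."""
    top_k = np.argsort(scores, axis=1)[:, -k:]            # indices of the k best scores per case
    hits = (top_k == true_labels[:, None]).any(axis=1)    # was the true label among them?
    return float(hits.mean())

# Toy example with random scores over 200 hypothetical syndromes.
rng = np.random.default_rng(0)
scores = rng.random((500, 200))
labels = rng.integers(0, 200, size=500)
print(f"top-10 accuracy: {top_k_accuracy(scores, labels):.2%}")
```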

The system also out-performed clinical experts in three separate trials.

Many genetic disorders are associated with distinct facial features.

Some are easily recognizable while others are harder to spot.

People with Williams syndrome, for instance, have short, upturned noses and mouths, a small jaw and a large forehead.

Well-known features associated with Down’s syndrome include almond-shaped eyes, a round, flat face, and a small nose and mouth.

Yaron Gurovich, chief technology officer at FDNA and first author of the research published in the journal Nature Medicine, said: “The increased ability to describe phenotype in a standardized way opens the door to future research and applications, and the identification of new genetic syndromes.”

Writing in the journal, the researchers drew attention to the potential risk of abuse of the technology.

They warned: “Unlike genomic data, facial images are easily accessible.

“Payers or employers could potentially analyse facial images and discriminate based on the probability of individuals having pre-existing conditions or developing medical complications.”

Please like, share and tweet this article.

Pass it on: New Scientist

Robots Will Know They’ve Been Blasted With a Shotgun

Light fibers in the silicone foam allow an AI system to detect how it’s being manipulated.

Soft robots could soon be everywhere: the squishy, malleable buggers might lead search and rescue missions, administer medication to specific organs, and maybe even crawl up your butt.

And now, soft robots will know how and when they’ve been bent out of shape — or shot full of holes by Arnold Schwarzenegger.

The trick is to simulate an animal’s peripheral nervous system with a network of fiber optic cables, according to research published Wednesday in the journal Science Robotics.

The Cornell University scientists behind the project hope that the tech could be used to build robots with a sense of whether they’ve been damaged.




Light Show

As the fiber optic cables, encased in a block of smart foam, bend and twist, the pattern and density of the light traveling through them changes in specific ways.

But the differences in light among various movements and manipulations are too minute for a human to spot, so the researchers trained a machine learning algorithm to analyze the shifts.

The AI system was trained to track how the light traveling through the fiber optic cables changed based on how researchers bent the foam.

Once it picked up on the patterns, according to the research, the machine learning algorithm could predict the type of bend with 100 percent accuracy: it always knew whether the foam had been bent up, down, left or right, or which way it had been twisted.

The whole experimental set-up.

From there, the system could guess the extent to which the foam had been bent or twisted, to within a margin of 0.06 degrees.
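The researchers’ own model isn’t reproduced here, but a minimal sketch of the general approach, using invented stand-in “fiber intensity” readings and off-the-shelf scikit-learn models rather than the actual pipeline, might look like this: one model names the kind of deformation and a second estimates its magnitude.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
DIRECTIONS = ["up", "down", "left", "right", "twist"]

def fake_reading(direction_idx: int, angle: float) -> np.ndarray:
    """Hypothetical stand-in for the fiber readings: one light-intensity value
    per fiber, with each bend direction attenuating a different subset of fibers."""
    base = rng.normal(1.0, 0.01, size=30)      # 30 optical fibers
    base[direction_idx::5] -= 0.02 * angle     # bending dims some fibers more than others
    return base

angles = rng.uniform(0, 45, size=1000)
dirs = rng.integers(0, 5, size=1000)
X = np.stack([fake_reading(d, a) for d, a in zip(dirs, angles)])

# One model classifies the type of bend, another regresses its magnitude.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, dirs)
reg = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, angles)

sample = fake_reading(2, 20.0).reshape(1, -1)
print(DIRECTIONS[int(clf.predict(sample)[0])], float(reg.predict(sample)[0]))
```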

Baby Steps

Someday, technology like this fiber optic network might give rise to robots that could teach themselves to walk, the researchers said.

With this new form of high-tech proprioception, the sense that lets us determine where our limbs are in space without looking, futuristic robots may be able to keep track of their own shape, detect when they’ve been damaged, and better understand their surroundings.

Please like, share and tweet this article.

Pass it on: Popular Science

A Brain Implant Brings a Quadriplegic’s Arm Back to Life

Ian Burkhart lifts a peg using his paralyzed right arm, thanks to a machine interface that can read his thoughts and execute them on his behalf.

Ian Burkhart has been a cyborg for two years now. In 2014, scientists at Ohio State’s Neurological Institute implanted a pea-sized microchip into the 24-year-old quadriplegic’s motor cortex.

Its goal: to bypass his damaged spinal cord and, with the help of a signal decoder and electrode-packed sleeve, control his right arm with his thoughts. Cue the transhumanist cheers!

Neuroengineers have been developing these so-called brain-computer interfaces for more than a decade.

They’ve used readings from brain implants to help paralyzed patients play Pong on computer screens and control robotic arms. But Burkhart is the first patient who’s been able to use his implant to control his *actual* arm.

Over the past 15 months, researchers at the Ohio State University Wexner Medical Center and engineers from Battelle, the medical group that developed the decoder software and electrode sleeve, have helped Burkhart relearn fine motor skills with weekly training sessions.

In a paper in *Nature*, they describe hooking a cable from the port screwed into Burkhart’s skull (where the chip is) to a computer that translates the brain signals into instructions for the sleeve, which stimulates his muscles into moving his wrist and fingers.

When Burkhart thinks “clench fist,” for example, the implanted electrodes record the activity in his motor cortex.

Those signals are decoded in real-time, jolting his arm muscles in all the right places so that his fingers curl inwards. But he can do more than make a fist: Using the one-of-a-kind system, he’s learned to shred a video game guitar, pour objects from a bottle, and pick up a phone.
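Battelle’s decoder software is proprietary, but as a hypothetical sketch of the record-decode-stimulate loop described above, with invented movement classes, feature templates and stimulation patterns, the flow might look roughly like this:

```python
import numpy as np

# Invented movement classes and per-electrode stimulation patterns; the real
# mapping between decoded intent and the sleeve is Battelle's own system.
MOVEMENTS = ["rest", "clench_fist", "open_hand", "swipe_card"]
STIM_PATTERNS = {
    "rest":        np.zeros(8),
    "clench_fist": np.array([1, 1, 1, 0, 0, 0, 0, 0], dtype=float),
    "open_hand":   np.array([0, 0, 0, 1, 1, 1, 0, 0], dtype=float),
    "swipe_card":  np.array([1, 0, 1, 0, 1, 0, 1, 0], dtype=float),
}

def decode(cortical_features: np.ndarray, templates: dict[str, np.ndarray]) -> str:
    """Toy decoder: pick the movement whose stored template is closest to the
    current window of motor-cortex features (nearest-centroid classification)."""
    return min(templates, key=lambda m: np.linalg.norm(cortical_features - templates[m]))

# Templates would be learned during training sessions; here they are random stand-ins.
rng = np.random.default_rng(0)
templates = {m: rng.normal(size=96) for m in MOVEMENTS}   # 96 recording channels

# Real-time loop: record a window of activity, decode it, drive the sleeve.
for _ in range(5):
    window = templates["clench_fist"] + 0.1 * rng.normal(size=96)  # simulated recording
    intent = decode(window, templates)
    sleeve_command = STIM_PATTERNS[intent]   # stimulation level per electrode group
    print(intent, sleeve_command)
```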




“Card swiping is the most impressive movement right now,” says Herb Bresler, a senior researcher at Battelle. “It demonstrates fine grip as well as coarse hand movements.”

If Burkhart can swipe credit cards after a year, he might play the piano after five—that’s how long similar chips have lasted—because he and the computer have been learning from each other.

But the implant will stop collecting signals in July when it’s removed, even if the chip is still providing good data, because the clinical trial was structured for a two-year period.

In those two years, the computer trained itself on Burkhart’s thoughts, learning which signals translate to what movements, while he figured out how to make commands more clearly (often with the help of visual cues).

“That’s the real achievement here. We’ve shown we know how to process the data,” says Bresler. “The chip is a limiting factor. We need to work on new ways of collecting brain signals.”

Though similar neuroprosthetics have been helpful in reducing tremors in Parkinson’s patients, they still have a ways to go.

Besides the serious, invasive surgery, there’s always a chance the body will reject an array, blocking any attempts to record and transmit brain signals while ensuring you get patted down at every airport security scanner, forever. “Something will replace this array,” says Bresler.

“Future signal collection devices will cover a larger area of the brain and be less invasive.”

Drawbacks aside, the electrode sleeve and decoding software wouldn’t be where they are today without the array driving them.

With improved collection devices, these products could eventually help stroke victims recover by reteaching their brain to use their limbs, while quadriplegics could mount similar systems on their wheelchairs.

At the very least, the neuroprosthetic experiment suggests that in the future, paralysis might not mean dependence—and that deserves a fist bump.

Please like, share and tweet this article.

Pass it on: Popular Science

How The Media Gets AI Alarmingly Wrong

In June of last year, five researchers at Facebook’s Artificial Intelligence Research unit published an article showing how bots can simulate negotiation-like conversations.

While for the most part the bots were able to maintain coherent dialogue, the researchers found that the software agents would occasionally generate strange sentences like: “Balls have zero to me to me to me to me to me to me to me to.”

On seeing these results, the team realized that they had failed to include a constraint limiting the bots to generating sentences within the parameters of spoken English, meaning that the bots had developed a type of machine-English patois to communicate between themselves.

These findings were considered to be fairly interesting by other experts in the field, but not totally surprising or groundbreaking.

A month after this initial research was released, Fast Company published an article entitled “AI Is Inventing Language Humans Can’t Understand. Should We Stop It?”




The story focused almost entirely on how the bots occasionally diverged from standard English – which was not the main finding of the paper – and reported that after the researchers “realized their bots were chattering in a new language” they decided to pull the plug on the whole experiment, as if the bots were in some way out of control.

The ice of AI’s first winter only fully retreated at the beginning of this decade after a new generation of researchers started publishing papers about successful applications of a technique called “deep learning”.

While this was fundamentally a decades-old statistical technique similar to Rosenblatt’s perceptron, increases in computational power and availability of huge data sets meant that deep learning was becoming practical for tasks such as speech recognition, image recognition and language translation.

As reports of deep learning’s “unreasonable effectiveness” circulated among researchers, enrollments at universities in machine-learning classes surged, corporations started to invest billions of dollars to find talent familiar with the newest techniques, and countless startups attempting to apply AI to transport or medicine or finance were founded.

As this resurgence got under way, AI hype in the media resumed after a long hiatus.

In 2013, John Markoff wrote a feature in the New York Times about deep learning and neural networks with the headline Brainlike Computers, Learning From Experience.

Not only did the title recall the media hype of 60 years earlier, so did some of the article’s assertions about what was being made possible by the new technology.

Since then, far more melodramatic and overblown articles about “AI apocalypse”, “artificial brains”, “artificial superintelligence” and “creepy Facebook bot AIs” have filled the news feed daily.

Please like, share and tweet this article.

Pass it on: Popular Science

Your Child’s New Robot Buddy Makes Reading A Lot More Fun

Children are more enthusiastic about reading aloud when they do so to a specially designed robot, psychologists have found.

A team at the University of Wisconsin-Madison built the robot, named Minnie, as a “reading buddy” to children aged 10 to 12.

Over two weeks, the children became more excited about books and grew attached to the robot. “After one interaction the kids were generally telling us that it was nice to have someone to read with,” Joseph Michaelis, who led the study, said.




“By the end of two weeks they’re talking about how the robot was funny and silly and how they’d come home looking forward to seeing it.”

The research, published in the journal Science Robotics, is the latest in the effort to design machines that may augment learning or provide companionship.

In America, a robot with artificial intelligence acts as a “social mediator” for autistic children, allowing them to communicate with the wider world.

Minnie stayed with study subjects for two weeks.

Minnie, which is 33cm (13in) high, tracked the children’s progress in reading and every few pages reacted with a programmed comment. During a frightening chapter, for instance, it could say: “Oh, wow, I’m really scared.”

It also recommended books, taking into account ability and interests, and most of the children said that it made good choices.
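The study doesn’t spell out Minnie’s matching logic, but a hypothetical sketch of the kind of rule it describes, using made-up book data and a made-up `recommend` helper, could look like this: filter by reading level and shared interests, then rank by how close the level match is.

```python
# Made-up catalogue standing in for whatever book data Minnie actually used.
BOOKS = [
    {"title": "Space Cadets", "level": 5.2, "topics": {"space", "adventure"}},
    {"title": "Dragon Pact",  "level": 6.8, "topics": {"fantasy"}},
    {"title": "Robot Summer", "level": 5.5, "topics": {"robots", "friendship"}},
]

def recommend(reading_level: float, interests: set[str], tolerance: float = 0.7) -> list[str]:
    """Suggest books close to the child's reading level that overlap their interests."""
    matches = [
        b for b in BOOKS
        if abs(b["level"] - reading_level) <= tolerance and b["topics"] & interests
    ]
    return [b["title"] for b in sorted(matches, key=lambda b: abs(b["level"] - reading_level))]

print(recommend(5.4, {"robots", "space"}))   # -> ['Robot Summer', 'Space Cadets']
```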

Please like, share and tweet this article.

Pass it on: Popular Science

Facebook And NYU Want To Use AI To Make MRI Exams 10 Times Faster

MRI scans may some day be available for a lot more people in need.

Facebook on Monday said it’s teaming up with NYU School of Medicine’s Department of Radiology to launch “fastMRI,” a collaborative research project that aims to use artificial intelligence to make MRI (magnetic resonance imaging) scans 10 times faster.

Doctors and radiologists use MRI scanners to produce images that show in detail a patient’s organs, blood vessels, bones, soft tissues and so on, which helps doctors diagnose problems.

However, completing an MRI scan can take anywhere from 15 minutes to over an hour, according to Facebook’s blog post.

That’s challenging for children and patients in a lot of pain, who can’t lie still for a long time. It also limits how many scans the hospital can do in a day.




If the project succeeds, MRI scans could be completed in about five minutes, thus making time for more people in need to receive scans.

The idea is to actually capture less data during MRI scans, making them faster, and then use AI to “fill in views omitted from the accelerated scan,” Facebook said in its blog post. The challenge is doing this without missing any important details.
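This isn’t the fastMRI code, but a small NumPy sketch can illustrate what “capturing less data” means: an MRI scanner samples the image’s spatial-frequency domain (k-space), so an accelerated scan keeps only a fraction of the k-space lines. The naive zero-filled reconstruction below is the baseline the project’s trained networks are meant to improve on; the image, sampling mask and fractions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "anatomy": a 2D image standing in for one MRI slice.
image = np.zeros((128, 128))
image[32:96, 48:80] = 1.0

# The scanner measures the image's spatial-frequency content (k-space).
k_space = np.fft.fftshift(np.fft.fft2(image))

# "Capture less data": keep only a quarter or so of the k-space lines
# (always keep the low-frequency centre, sample the rest at random).
mask = np.zeros(128, dtype=bool)
mask[54:74] = True                           # fully sampled centre lines
mask |= rng.random(128) < 0.15               # sparse random outer lines
undersampled = np.where(mask[:, None], k_space, 0)

# Naive zero-filled reconstruction; fastMRI's aim is for a trained network
# to fill in the omitted views far better than this.
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))
print(f"sampled lines: {mask.sum()}/128, relative reconstruction error: "
      f"{np.linalg.norm(recon - image) / np.linalg.norm(image):.3f}")
```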

Facebook Artificial Intelligence Research, or FAIR, will work with NYU medical researchers to train artificial neural networks to recognize the structures of the human body.

The project will use image data from 10,000 clinical cases, comprising roughly 3 million MR images of the knee, brain and liver. Patients’ names and medical information aren’t included.

“We hope that one day, because of this project, MRI will be able to replace X-rays for many applications, also leading to decreased radiation exposure to patients,” said Michael Recht, MD, chair of the Department of Radiology at NYU School of Medicine, in an email statement.

“Our collaboration is one between academia and industry in which we can leverage our complementary strengths to achieve a real-world result.”

Please like, share and tweet this article.

Pass it on: Popular Science

Aliens May Actually Be Billion-Year-Old Robots

This could ruin a lot of good science fiction movies … and create interesting plots for the next generation of them, not to mention influencing how humans deal with space aliens when they first encounter each other.

A timely article by The Daily Galaxy reviews the study “Alien Minds” by Susan Schneider, in which the professor and author discusses her theory that our first meeting with an extraterrestrial will be with a billion-year-old robot. Wait, what?

“I do not believe that most advanced alien civilizations will be biological. The most sophisticated civilizations will be postbiological, forms of artificial intelligence or alien superintelligence.”

Susan Schneider is an associate professor in the Department of Philosophy and the Cognitive Science Program at the University of Connecticut.

“Alien Minds” has been presented at NASA and the 2016 IdeaFestival in Kentucky and was published in The Impact of Discovering Life Beyond Earth.




It is her response to the question: “How would intelligent aliens think? Would they have conscious experiences? Would it feel a certain way to be an alien?”

“I actually think the first discovery of life on other planets will probably involve microbial life; I am concentrating on intelligent life in my work on this topic though. I only claim that the most advanced civilizations will likely be post biological.”

Schneider’s theory is based on three components or “observations.”

In her “short window observation,” she presents the idea that a civilization or species that can conquer long-distance space travel is already very close to moving from biological to artificially-intelligent beings.

An example of this “short window” is the relatively brief 120 years it took humans to go from the first radio signals to cell phones.

Some of those species will be much older than us, which is Schneider’s “the greater age of alien civilizations” observation – one accepted by many.

And not just a few generations older but billions of years beyond us, making them far more advanced and intelligent. How much more?

Schneider’s last observation is that any species that can travel to Earth will be intelligent enough to develop robots that they can upload their brains to. The robots would probably be silicon-based for speed of ‘thinking’ and durability, making them nearly immortal.

Please like, share and tweet this article.

Pass it on: Popular Science