Robots Will Know They’ve Been Blasted With a Shotgun

Light fibers in the silicone foam allow an AI system to detect how it’s being manipulated.

Soft robots could soon be everywhere: the squishy, malleable buggers might lead search and rescue missions, administer medication to specific organs, and maybe even crawl up your butt.

And now, soft robots will know how and when they’ve been bent out of shape — or shot full of holes by Arnold Schwarzenegger.

The trick is to simulate an animal’s peripheral nervous system with a network of fiber optic cables, according to research published Wednesday in the journal Science Robotics.

The Cornell University scientists behind the project hope that the tech could be used to build robots with a sense of whether they’ve been damaged.




Light Show

As the fiber optic cables, encased in a block of smart foam, bend and twist, the pattern and density of the light traveling through them changes in specific ways.

But the differences in light among various movements and manipulations are too minute for a human to spot, so the researchers trained a machine learning algorithm to analyze the shifts.

The AI system was trained to track how the light traveling through the fiber optic cables changed based on how researchers bent the foam.

Once it picked up on the patterns, according to the research, the machine learning algorithm could predict the type of bend with 100 percent accuracy — it always knew whether the foam was bent up, down, left, right, or the direction in which it had been twisted.

The whole experimental set-up.

From there, the system could estimate how far the foam had been bent or twisted to within 0.06 degrees.
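
To make the pipeline concrete, here is a minimal sketch of how such a classifier and regressor might be trained. It uses simulated light-intensity readings and off-the-shelf scikit-learn models rather than the Cornell team's data or code, so the numbers it prints are purely illustrative.

```python
# Minimal sketch (not the Cornell system): learn the bend direction and its
# magnitude from simulated fiber-optic light-intensity readings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
DIRECTIONS = ["up", "down", "left", "right", "twist_cw", "twist_ccw"]

def simulate_reading(direction_idx, magnitude):
    """Fake 30-channel intensity vector: each bend direction attenuates a
    different subset of fibers in proportion to the bend magnitude."""
    base = np.ones(30)
    base[direction_idx * 5 : direction_idx * 5 + 5] -= 0.01 * magnitude
    return base + rng.normal(0, 0.002, size=30)

# Build a synthetic dataset of (reading, direction, magnitude) triples.
X, y_dir, y_mag = [], [], []
for _ in range(3000):
    d = rng.integers(len(DIRECTIONS))
    m = rng.uniform(0, 30)            # bend/twist magnitude in degrees
    X.append(simulate_reading(d, m))
    y_dir.append(d)
    y_mag.append(m)
X, y_dir, y_mag = np.array(X), np.array(y_dir), np.array(y_mag)

X_tr, X_te, d_tr, d_te, m_tr, m_te = train_test_split(
    X, y_dir, y_mag, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, d_tr)
reg = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, m_tr)

print("direction accuracy:", clf.score(X_te, d_te))
print("mean magnitude error (deg):", np.mean(np.abs(reg.predict(X_te) - m_te)))
```

In the real system the features would come from the pattern and density of light measured through the embedded fibers, but the train-then-predict structure is the same.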

Baby Steps

Someday, technology like this fiber optic network might give rise to robots that could teach themselves to walk, the researchers said.

With this new form of high-tech proprioception, the sense that lets us determine where our limbs are in space without looking, futuristic robots may be able to keep track of their own shape, detect when they’ve been damaged, and better understand their surroundings.

Please like, share and tweet this article.

Pass it on: Popular Science

Walmart Agrees To Work With Ford On Self-Driving Grocery Deliveries

Ford is working with Postmates and Walmart on a pilot program for self-driving grocery deliveries, the companies announced on Wednesday.

“We are exploring how self-driving vehicles can deliver many everyday goods such as groceries, diapers, pet food and personal care items,” Ford said in a press release.

The grocery delivery pilot experiment will be based in Miami, where Ford’s self-driving car company, Argo, is already testing self-driving vehicles. Ford had been testing self-driving deliveries with Postmates prior to this announcement.

Like most car companies, Ford is racing to develop fully autonomous vehicle technology. But Ford has been more proactive than most of its competitors in exploring the non-technical aspects of a self-driving car service.

Last year, I got to sit in the “seat suit” of a fake self-driving car that Ford was using to test pedestrian reactions to the technology.

Ford also experimented with delivering pizzas with mock-driverless vehicles in a partnership with Domino’s.




Ford’s collaboration with Postmates over the last few months has been focused on figuring out the best way for customers to interact with a delivery vehicle.

Driverless cars won’t have a driver to carry deliveries to the customer’s door, so self-driving vehicles will need some kind of locker that customers can open to remove their merchandise.

Ford has been experimenting with multi-locker delivery vans, allowing its cars to serve multiple customers on a single trip—without worrying about one customer swiping another’s deliveries.

Ford also announced last month that Washington, DC will be the second city, after Miami, where Argo is preparing to launch a commercial service in 2021.

Ford has worked hard to cultivate relationships with local government officials, with Mayor Muriel Bowser attending last month’s announcement on DC’s waterfront.

Ford is betting that all of these preparations will help the company scale up quickly once its self-driving technology is ready.

That’s important because Ford appears to be significantly behind the market leaders: Waymo (which is aiming to launch a commercial service this year) and GM’s Cruise (aiming to launch in 2019).

But it’s also a risky strategy because if Argo’s technology isn’t ready on time, then all of Ford’s careful planning could turn out to be wasted effort.

Please like, share and tweet this article.

Pass it on: Popular Science

TSA Outlines Its Plans For Facial Recognition On Domestic Flights

The Transportation Security Administration (TSA) is determined to make facial recognition and other biometrics a regular part of the airport experience, and it now has a roadmap for that expansion.

The effort will start by teaming with Customs and Border Protection on biometric security for international travel, followed by putting the technology into use for TSA Precheck travelers to speed up their boarding process.

After that, it would both devise an “opt-in” biometric system for ordinary domestic passengers and flesh out a deeper infrastructure.




While this will include technology like fingerprint readers (primarily for trusted passengers), face identification will remain the “primary means” of verifying identities, the TSA said. As such, you can expect facial recognition to play a major role.

To some extent, the roadmap is already in progress. You can find the TSA testing fingerprint technology for Precheck users in Atlanta’s Hartsfield-Jackson airport, while Delta is poised to deploy a facial recognition terminal at the same location later in October.

The Administration’s roadmap sets far loftier goals, though. It sees facial recognition and other biometrics reducing the need for “high friction” documents like passports in addition to bolstering security.

There’s no firm timeline, however, and the roadmap only hints at addressing ethical issues like privacy in later studies.

That may prove to be one of the central obstacles to a wider implementation. How will the TSA ensure that face data isn’t misused or falls into the wrong hands, for instance?

And will it do enough to prevent false positives that would ensnare innocent people? Until the TSA addresses issues like those, its dreams of widespread biometrics might not become real.

Please like, share and tweet this article.

Pass it on: Popular Science

Let Gmail Finish Your Sentences

Google’s new machine-learning tools for its mail service can save you time and typos — as long as you are comfortable sharing your thoughts with the software.

In theory, the Smart Compose tool can speed up your message composition and cut down on typographical errors.

While “machine learning” means the software (and not a human) is scanning your work-in-progress to get information for the predictive text function, you are sharing information with Google when you use its products.
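
For a rough sense of what predictive text means under the hood, here is a toy bigram model that suggests a likely next word from a tiny corpus. Google's actual system is a large neural network trained on vast amounts of email text, so this sketch only illustrates the general idea of predicting a continuation from what you have typed so far.

```python
# Toy illustration of predictive text (not Google's Smart Compose model):
# a bigram frequency table that suggests the most likely next word.
from collections import defaultdict, Counter

corpus = (
    "thanks for the update let me know if you have any questions "
    "let me know if that works for you thanks for your help"
).split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def suggest(text):
    """Return the most frequent continuation of the last typed word, if any."""
    last = text.split()[-1].lower()
    candidates = following.get(last)
    return candidates.most_common(1)[0][0] if candidates else None

print(suggest("Let me know if you"))   # likely prints 'have'
```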

If you have already updated to the new version of Gmail, you can try out Smart Compose by going to the General tab in Settings and selecting the check box next to “Enable experimental access.”

Next, click Save Changes at the bottom of the Settings screen.




When you return to the General tab of the Gmail settings, scroll down to the newly arrived Smart Compose section and confirm that “Writing suggestions on” is enabled.

If you do not care for Google’s assistance after sampling the feature, you can return to the settings and click “Writing suggestions off” to disable Smart Compose.

Once you enable it in the settings, Gmail’s new Smart Compose feature can finish your sentences for you as you type.

The new feature is available only for English composition at the moment, and a disclaimer from Google warns: “Smart Compose is not designed to provide answers and may not always predict factually correct information.”

Google also warns that experimental tools like Smart Compose are still under development and that the company may change or remove the features at any time.

Please like, share and tweet this article.

Pass it on: Popular Science

A Brain Implant Brings a Quadriplegic’s Arm Back to Life

Ian Burkhart lifts a peg using his paralyzed right arm, thanks to a machine interface that can read his thoughts and execute them on his behalf.

Ian Burkhart has been a cyborg for two years now. In 2014, scientists at Ohio State’s Neurological Institute implanted a pea-sized microchip into the 24-year-old quadriplegic’s motor cortex.

Its goal: to bypass his damaged spinal cord and, with the help of a signal decoder and electrode-packed sleeve, control his right arm with his thoughts. Cue the transhumanist cheers!

Neuroengineers have been developing these so-called brain-computer interfaces for more than a decade.

They’ve used readings from brain implants to help paralyzed patients play Pong on computer screens and control robotic arms. But Burkhart is the first patient who’s been able to use his implant to control his actual arm.

Over the past 15 months, researchers at the Ohio State University Wexner Medical Center and engineers from Battelle, the medical group that developed the decoder software and electrode sleeve, have helped Burkhart relearn fine motor skills with weekly training sessions.

In a paper in Nature, they describe hooking a cable from the port screwed into Burkhart’s skull (where the chip is) to a computer that translates the brain signals into instructions for the sleeve, which stimulates his muscles into moving his wrist and fingers.

When Burkhart thinks “clench fist,” for example, the implanted electrodes record the activity in his motor cortex.

Those signals are decoded in real-time, jolting his arm muscles in all the right places so that his fingers curl inwards. But he can do more than make a fist: Using the one-of-a-kind system, he’s learned to shred a video game guitar, pour objects from a bottle, and pick up a phone.
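
Conceptually, the decoding loop looks something like the sketch below: classify a window of motor-cortex features into an intended movement, then translate that intent into a stimulation pattern for the electrode sleeve. Everything here, from the simulated features to the electrode numbers, is a made-up stand-in for illustration, not Battelle's decoder.

```python
# Conceptual sketch only (not Battelle's software): decode an intended movement
# from simulated motor-cortex features, then map it to sleeve electrodes.
import numpy as np
from sklearn.svm import SVC

MOVEMENTS = ["rest", "clench_fist", "open_hand", "swipe_card"]

# Hypothetical mapping from decoded intent to sleeve electrodes to activate.
STIM_PATTERNS = {
    "rest": [],
    "clench_fist": [3, 4, 5, 11],
    "open_hand": [7, 8, 9],
    "swipe_card": [2, 3, 9, 12],
}

rng = np.random.default_rng(1)

def fake_features(label_idx, n=200, dims=96):
    """Simulated 96-channel firing-rate features; each intent shifts the mean."""
    return rng.normal(loc=label_idx, scale=1.0, size=(n, dims))

# Train a decoder on simulated recordings of each intent.
X = np.vstack([fake_features(i) for i in range(len(MOVEMENTS))])
y = np.repeat(np.arange(len(MOVEMENTS)), 200)
decoder = SVC().fit(X, y)

def decode_and_stimulate(window):
    """Decode one feature window and return the electrodes to drive."""
    intent = MOVEMENTS[int(decoder.predict(window.reshape(1, -1))[0])]
    return intent, STIM_PATTERNS[intent]

print(decode_and_stimulate(fake_features(1, n=1)[0]))  # likely: clench_fist
```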




“Card swiping is the most impressive movement right now,” says Herb Bresler, a senior researcher at Battelle. “It demonstrates fine grip as well as coarse hand movements.”

If Burkhart can swipe credit cards after a year, he might play the piano after five—that’s how long similar chips have lasted—because he and the computer have been learning from each other.

But the implant will stop collecting signals in July when it’s removed, even if the chip is still providing good data, because the clinical trial was structured for a two-year period.

In those two years, the computer trained itself on Burkhart’s thoughts, learning which signals translate to what movements, while he figured out how to make commands more clearly (often with the help of visual cues).

“That’s the real achievement here. We’ve shown we know how to process the data,” says Bresler. “The chip is a limiting factor. We need to work on new ways of collecting brain signals.”

Though similar neuroprosthetics have been helpful in reducing tremors in Parkinson’s patients, they still have a ways to go.

Besides the serious, invasive surgery, there’s always a chance the body will reject an array, blocking any attempts to record and transmit brain signals while ensuring you get patted down at every airport security scanner, forever. “Something will replace this array,” says Bresler.

“Future signal collection devices will cover a larger area of the brain and be less invasive.”

Drawbacks aside, the electrode sleeve and decoding software wouldn’t be where they are today without the array driving them.

With improved collection devices, these products could eventually help stroke victims recover by reteaching their brain to use their limbs, while quadriplegics could mount similar systems on their wheelchairs.

At the very least, the neuroprosthetic experiment suggests that in the future, paralysis might not mean dependence—and that deserves a fist bump.

Please like, share and tweet this article.

Pass it on: Popular Science

How The Media Gets AI Alarmingly Wrong

In June of last year, five researchers at Facebook’s Artificial Intelligence Research unit published an article showing how bots can simulate negotiation-like conversations.

While for the most part the bots were able to maintain coherent dialogue, the researchers found that the software agents would occasionally generate strange sentences like: “Balls have zero to me to me to me to me to me to me to me to.”

On seeing these results, the team realized that they had failed to include a constraint limiting the bots to sentences within the parameters of spoken English; as a result, the bots developed a type of machine-English patois to communicate between themselves.
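
The missing constraint is easy to picture in code. The hedged sketch below intersects an agent's next-word preferences with a reference English language model so it cannot wander off into a private dialect; both models here are random stand-ins, not Facebook's negotiation bots.

```python
# Illustration of constrained generation (not Facebook's code): mask out any
# next word that a reference English language model finds too unlikely.
import numpy as np

VOCAB = ["i", "want", "two", "balls", "have", "zero", "to", "me", "deal", "<eos>"]
rng = np.random.default_rng(0)

def agent_scores(history):
    """Stand-in for the negotiation agent's raw next-word scores."""
    return rng.normal(size=len(VOCAB))

def english_lm_probs(history):
    """Stand-in for a fixed English language model's next-word distribution."""
    return rng.dirichlet(np.ones(len(VOCAB)))

def next_word(history, constrained=True, min_lm_prob=0.05):
    scores = agent_scores(history)
    if constrained:
        # The constraint: forbid words the English LM deems too unlikely here.
        scores = np.where(english_lm_probs(history) >= min_lm_prob, scores, -np.inf)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return rng.choice(VOCAB, p=probs)

sentence, history = [], []
while len(sentence) < 12:
    word = next_word(history)
    if word == "<eos>":
        break
    sentence.append(word)
    history.append(word)
print(" ".join(sentence))
```

Without the constrained branch, nothing stops repetitive, non-English strings like the "to me to me" output from scoring well between the two agents.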

These findings were considered to be fairly interesting by other experts in the field, but not totally surprising or groundbreaking.

A month after this initial research was released, Fast Company published an article entitled “AI Is Inventing Language Humans Can’t Understand. Should We Stop It?”




The story focused almost entirely on how the bots occasionally diverged from standard English – which was not the main finding of the paper – and reported that after the researchers “realized their bots were chattering in a new language” they decided to pull the plug on the whole experiment, as if the bots were in some way out of control.

That episode fits a much older pattern of hype. The ice of AI’s first winter only fully retreated at the beginning of this decade, after a new generation of researchers started publishing papers about successful applications of a technique called “deep learning”.

While this was fundamentally a decades-old statistical technique similar to Rosenblatt’s perceptron, increases in computational power and availability of huge data sets meant that deep learning was becoming practical for tasks such as speech recognition, image recognition and language translation.

As reports of deep learning’s “unreasonable effectiveness” circulated among researchers, enrollments at universities in machine-learning classes surged, corporations started to invest billions of dollars to find talent familiar with the newest techniques, and countless startups attempting to apply AI to transport or medicine or finance were founded.

As this resurgence got under way, AI hype in the media resumed after a long hiatus.

In 2013, John Markoff wrote a feature in the New York Times about deep learning and neural networks with the headline “Brainlike Computers, Learning From Experience.”

Not only did the title recall the media hype of 60 years earlier, so did some of the article’s assertions about what was being made possible by the new technology.

Since then, far more melodramatic and overblown articles about “AI apocalypse”, “artificial brains”, “artificial superintelligence” and “creepy Facebook bot AIs” have filled the news feed daily.

Please like, share and tweet this article.

Pass it on: Popular Science

Facebook And NYU Want To Use AI To Make MRI Exams 10 Times Faster

MRI scans may some day be available for a lot more people in need.

Facebook on Monday said it’s teaming up with NYU School of Medicine’s Department of Radiology to launch “fastMRI,” a collaborative research project that aims to use artificial intelligence to make MRI (magnetic resonance imaging) scans 10 times faster.

Doctors and radiologists use MRI scanners to produce images that show in detail a patient’s organs, blood vessels, bones, soft tissues and more, which helps them diagnose problems.

However, completing an MRI scan can take anywhere from 15 minutes to over an hour, according to Facebook’s blog post.

That’s challenging for children and patients in a lot of pain, who can’t lie still for a long time. It also limits how many scans the hospital can do in a day.




If the project succeeds, MRI scans could be completed in about five minutes, thus making time for more people in need to receive scans.

The idea is to actually capture less data during MRI scans, making them faster, and then use AI to “fill in views omitted from the accelerated scan,” Facebook said in its blog post. The challenge is doing this without missing any important details.

Facebook Artificial Intelligence Research, or FAIR, will work with NYU medical researchers to train artificial neural networks to recognize the structures of the human body.

The project will use image data from 10,000 clinical cases with roughly 3 million MRIs of the knee, brain and liver. Patients’ names and medical information aren’t included.
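
The underlying idea can be sketched in a few lines: sample only a fraction of the k-space lines a scanner would normally acquire, reconstruct a rough image from what was kept, and leave the gap-filling to a trained network. The snippet below uses a random array as a stand-in for a real scan and stops before the learning step, so it illustrates the setup rather than the fastMRI method itself.

```python
# Sketch of the core idea (not the fastMRI code): keep only some k-space lines
# and reconstruct a rough image that a trained model would later sharpen.
import numpy as np

rng = np.random.default_rng(0)

def undersample(image, keep_fraction=0.25):
    """Keep a random subset of k-space rows (phase-encode lines)."""
    kspace = np.fft.fftshift(np.fft.fft2(image))
    mask = rng.random(image.shape[0]) < keep_fraction
    mask[image.shape[0] // 2 - 4 : image.shape[0] // 2 + 4] = True  # keep low frequencies
    return kspace * mask[:, None], mask

def zero_filled_recon(masked_kspace):
    """Naive reconstruction; a trained network would fill in what is missing."""
    return np.abs(np.fft.ifft2(np.fft.ifftshift(masked_kspace)))

phantom = rng.random((128, 128))          # stand-in for a knee or brain slice
masked_kspace, mask = undersample(phantom)
rough = zero_filled_recon(masked_kspace)
print("k-space lines kept:", int(mask.sum()), "of", phantom.shape[0])
# In the fastMRI setup, pairs of (rough, fully sampled) images like these would
# be used to train a neural network that predicts the full-quality scan.
```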

“We hope one day that, because of this project, MRI will be able to replace X-rays for many applications, also leading to decreased radiation exposure to patients,” said Michael Recht, MD, chair of the Department of Radiology at NYU School of Medicine, in an email statement.

“Our collaboration is one between academia and industry in which we can leverage our complementary strengths to achieve a real-world result.”

Please like, share and tweet this article.

Pass it on: Popular Science

Look Out, Wiki-Geeks. Now Google Trains AI To Write Wikipedia Articles

A team within Google Brain – the web giant’s crack machine-learning research lab – has taught software to generate Wikipedia-style articles by summarizing information on web pages… to varying degrees of success.

As we all know, the internet is a never-ending pile of articles, social media posts, memes, joy, hate, and blogs. It’s impossible to read and keep up with everything.

Using AI to tell pictures of dogs and cats apart is cute and all, but if such computers could condense information down into useful snippets, that would really be handy. It’s not easy, though.

A paper, out last month and just accepted for this year’s International Conference on Learning Representations (ICLR) in April, describes just how difficult text summarization really is.




A few companies have had a crack at it. Salesforce trained a recurrent neural network with reinforcement learning to take information and retell it in a nutshell, and the results weren’t bad.

However, the computer-generated sentences were simple and short; they lacked the creative flair and rhythm of text written by humans. Google Brain’s latest effort is slightly better: the sentences are longer and seem more natural.

Here’s an example for the topic: Wings over Kansas, an aviation website for pilots and hobbyists.

In the comparison the researchers published, one paragraph is a computer-generated summary of the organization, and the other is taken from the Wikipedia page on the subject.

The model works by taking the top ten web pages of a given subject – excluding the Wikipedia entry – or scraping information from the links in the references section of a Wikipedia article.

Most of the selected pages are used for training, and a few are kept back to develop and test the system.
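
For a sense of the task's shape, here is a very small extractive baseline: pool the sentences from a handful of scraped pages and keep the ones whose words appear most often. It bears no resemblance to the model in the Google Brain paper, and the "pages" are invented stand-ins, but it shows what condensing the top results for a topic involves.

```python
# Tiny extractive summarization baseline (nothing like Google Brain's model):
# rank pooled sentences by average word frequency and keep the top few.
import re
from collections import Counter

def summarize(pages, num_sentences=2):
    sentences = []
    for text in pages:
        sentences.extend(s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip())
    freq = Counter(re.findall(r"[a-z']+", " ".join(pages).lower()))

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    return " ".join(sorted(sentences, key=score, reverse=True)[:num_sentences])

# Invented stand-ins for the "top ten web pages" a crawler might return.
pages = [
    "Wings over Kansas is an aviation website for pilots and hobbyists. "
    "It publishes aviation news and interviews.",
    "The site offers educational resources and videos about aviation.",
    "Wings over Kansas features aviation stories submitted by pilots.",
]
print(summarize(pages))
```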

Also, since it relies on the top ten websites returned for any particular topic, if those sites aren’t particularly credible, the resulting handiwork probably won’t be very accurate either.

You can’t trust everything you read online, of course.

Please like, share and tweet this article.

Pass it on: Popular Science

Google’s AI Sounds Like A Human On The Phone

It came as a total surprise: the most impressive demonstration at Google’s I/O conference yesterday was a phone call to book a haircut. Of course, this was a phone call with a difference.

It wasn’t made by a human, but by the Google Assistant, which did an uncannily good job of asking the right questions, pausing in the right places, and even throwing in the odd “mmhmm” for realism.

The crowd was shocked, but the most impressive thing was that the person on the receiving end of the call didn’t seem to suspect they were talking to an AI.

It’s a huge technological achievement for Google, but it also opens up a Pandora’s box of ethical and social challenges.




For example, does Google have an obligation to tell people they’re talking to a machine? Does technology that mimics humans erode our trust in what we see and hear?

And is this another example of tech privilege, where those in the know can offload boring conversations they don’t want to have to a machine, while those receiving the calls have to deal with some idiot robot?

In other words, it was a typical Google demo: equal parts wonder and worry.

Many experts working in this area agree that people should be told, although how exactly you would tell someone they’re speaking to an AI is a tricky question.

If the Assistant starts its calls by saying “hello, I’m a robot” then the receiver is likely to hang up. More subtle indicators could mean limiting the realism of the AI’s voice or including a special tone during calls.

Google tells The Verge it hopes a set of social norms will organically evolve that make it clear when the caller is an AI.

Please like, share and tweet this article.

Pass it on: Popular Science

Facebook Researchers Use AI To Fix Blinking In Photos

Facebook seems to be going all in on building a strong AI footprint for its platform, and now the social media giant has published a new study on AI-based tools that fix selfies ruined by blinking.

There are times when we try to take the perfect selfie and the whole frame turns out exactly as we wanted, but a blink in between ruins the picture.

This is where Facebook’s new AI-based tool comes in: it can replace your closed eyes with an open pair by studying and analysing your previous pictures.

The idea of opening closed eyes in a portrait isn’t new; traditionally, the process involves pulling source material directly from another photo and transplanting it onto the blinking face.

Adobe’s Photoshop Elements, for example, has a simpler mode built for exactly this purpose.




When you use Photoshop Elements, the program prompts you to pick another photo from the same session, since it assumes that you took more than one, in which the same person’s eyes are open.

It then uses Adobe’s AI tech, Sensei, in order to try and blend the open eyes from the previous image directly into the shot with the blink.

The Facebook AI tool works along similar lines, but it also aims to handle small details that Adobe’s software can’t always get right, such as specific lighting conditions or the direction of shadows.

In the Facebook Research approach, the same process of replacing closed eyes with open ones relies on a deep neural network that supplies the missing data using the context of the area around the closed eyes.

This is done by a generative adversarial network (GAN), the same kind of technology used in deepfake videos, in which one person’s face is swapped onto another person’s body.

The GAN then uses data points from other images of the same person as a reference to fill in the needed data.

The Facebook AI tool also uses identifying marks to help generate the substitute data.

After that, a process called in-painting generates the data needed to replace the closed eyelids with realistic open eyes. This is where the hard work of the GAN comes into play: it needs more than one image of the person to use as a reference so that it doesn’t miss any detail.
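
To make the moving parts concrete, here is a heavily simplified PyTorch sketch of GAN-based in-painting: a generator fills the masked eye region, conditioned on a reference photo of the same person, while a discriminator judges whether the result looks real. The architecture, image sizes and single training step are all assumptions for illustration; Facebook's actual model is far larger and more sophisticated.

```python
# Heavily simplified GAN in-painting sketch (not Facebook's model).
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: masked photo (3 ch) + mask (1 ch) + reference photo (3 ch).
        self.net = nn.Sequential(
            nn.Conv2d(7, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, masked, mask, reference):
        filled = self.net(torch.cat([masked, mask, reference], dim=1))
        # Only replace pixels inside the masked (closed-eye) region.
        return masked * (1 - mask) + filled * mask

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),  # 32 -> 16
            nn.Flatten(), nn.Linear(64 * 16 * 16, 1),
        )

    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

# Dummy batch: real open-eye photos, the same photos with the eye region masked,
# the masks themselves, and reference photos of the same people.
real = torch.rand(4, 3, 64, 64)
mask = torch.zeros(4, 1, 64, 64)
mask[:, :, 20:32, 16:48] = 1.0            # assumed eye region
masked = real * (1 - mask)
reference = torch.rand(4, 3, 64, 64)

# One discriminator step: real photos vs. generated fills.
fake = G(masked, mask, reference).detach()
d_loss = bce(D(real), torch.ones(4, 1)) + bce(D(fake), torch.zeros(4, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# One generator step: fool the discriminator while staying close to the real pixels.
fake = G(masked, mask, reference)
g_loss = bce(D(fake), torch.ones(4, 1)) + nn.functional.l1_loss(fake, real)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(float(d_loss), float(g_loss))
```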

When will Facebook introduce this tool to the masses? Will it be launched as a new feature for its social media platforms?

Questions like these are many. However, the Facebook AI tool is still in its development stages, and only time will tell what kind of revolution the GAN system will bring to the world of selfies.

Please like, share and tweet this article.

Pass it on: Popular Science