Tag: AI

China Is Using Facial Recognition AI That Can Scan The Country’s Entire Population In A Second

Across China, facial-recognition technology that can scan the country’s entire population is being put to use. In some cases, the technology can perform the task in just one second.

Sixteen cities, municipalities, and provinces are using a frighteningly fast surveillance system that has an accuracy rate of 99.8%, Global Times reported over the weekend.

“The system is fast enough to scan China’s population in just one second, and it takes two seconds to scan the world’s population,” the Times reported, citing local Chinese newspaper Worker’s Daily.

The system is part of Skynet, a nationwide monitoring program launched in 2005 to increase the use and capabilities of surveillance cameras.




According to its developers, the system works regardless of camera angle or lighting conditions and, over the last two years, has led to the arrest of more than 2,000 people.

The use of facial-recognition technology is soaring in China, where it is being deployed to increase efficiency and improve policing.

Cameras are used to catch jaywalkers, find fugitives, track people’s regular hangouts, and even predict crime before it happens.

Currently, there are 170 million surveillance cameras in China and, by 2020, the country hopes to have 570 million — that’s nearly one camera for every two citizens.

Facial recognition technology is just a small part of the artificial intelligence industry that China wants to pioneer.

According to a report by CB Insights, five times as many AI patents were filed in China as in the US in 2017.

And, for the first time, China’s AI scene gained more investment than that of the US last year. Of every available dollar going to AI startups around the world, nearly half went to companies in China.

Please like, share and tweet this article.

Pass it on: New Scientist

This Robot Dog Can Recover From a Vicious Kick Using Artificial Intelligence

Researchers at ETH Zurich in Switzerland taught a four-legged robot dog a valuable life skill: how to get up again after it gets knocked down. And yes, it involved evil scientists kicking and shoving an innocent robot.

The researchers used an AI model to teach ANYmal, a doglike robot made by ANYbotics, how to right itself after being knocked onto its side or back in a variety of physical environments — as opposed to giving the robot a detailed set of instructions for only one specific environment.




But It Gets Up Again

The results were published today in a paper in Science Robotics. In simple terms, the robot tried again and again to right itself in simulation, and learned from the attempts in which a movement didn’t end up righting it.

It then took what it learned and applied it to the real world.
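That trial-and-error loop is, in essence, reinforcement learning in simulation. As a purely illustrative sketch in Python (the environment, reward shaping, and learning rule below are invented stand-ins, not the ETH Zurich or ANYbotics code), the pattern looks something like this:

```python
import random

class SimulatedAnymal:
    """Toy stand-in for a physics simulation of the fallen robot."""

    def reset(self, fallen=True):
        self.upright = not fallen
        return self.observe()

    def observe(self):
        # A real setup would return joint angles, IMU readings, contact forces, etc.
        return (self.upright,)

    def step(self, action):
        # Pretend some recovery actions occasionally succeed in righting the robot.
        if not self.upright and random.random() < 0.1 + 0.05 * action:
            self.upright = True
        reward = 1.0 if self.upright else -0.01  # reward ending up upright
        return self.observe(), reward, self.upright


def train(episodes=1000, actions=range(5)):
    """Crude action-value learning: remember which actions tend to right the robot."""
    value = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}
    env = SimulatedAnymal()
    for _ in range(episodes):
        env.reset(fallen=True)
        action = random.choice(list(actions))       # explore
        _, reward, _ = env.step(action)
        counts[action] += 1
        value[action] += (reward - value[action]) / counts[action]  # running average
    return max(value, key=value.get)                # best recovery action found in simulation


if __name__ == "__main__":
    print("Best simulated recovery action:", train())
```

The policy learned in simulation is then transferred to the physical robot, which is the step the researchers describe next.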

It even learned how to run faster than it could before. Thanks to the neural network, ANYmal was also able to reach 1.5 metres per second (just over three mph) within mere hours of training, according to a report.

Never Gonna Keep It Down

Are we inching closer to a future where robot guard dogs chase us down to exact revenge on us, as seen on Netflix’s Black Mirror? Sure looks like it.

So perhaps it’s time to stop kicking robot dogs — before we know it, they’ll start learning how to protect themselves.

Please like, share and tweet this article.

Pass it on: New Scientist

New AI Can Spot Tell-Tale Signs Of A Genetic Disorder By Scanning People’s Faces

An artificially intelligent computer program has been used to identify rare genetic diseases by studying photos of faces.

In the experiment, the AI system out-performed human experts attempting the same task.

The face analysis program, known as DeepGestalt, could in future assist the diagnosis of rare genetic syndromes, say researchers.

At the same time they warn that safeguards are needed to prevent abuse of the technology.

Easily accessible portrait photos could, for instance, enable potential employers to discriminate against individuals with ‘at risk’ facial features.

Study co-author Dr Karen Gripp, from the US company FDNA which developed the program, said: “This is a long-awaited breakthrough in medical genetics that has finally come to fruition.

“With this study, we’ve shown that adding an automated facial analysis framework, such as DeepGestalt, to the clinical workflow can help achieve earlier diagnosis and treatment, and promise an improved quality of life.”




The team trained the “deep learning” software using more than 17,000 facial images of patients with more than 200 different genetic disorders.

In subsequent tests DeepGestalt successfully included the correct syndrome in its top 10 list of suggestions 91 percent of the time.
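For context, the “top 10” figure is a top-k accuracy: a case counts as correct if the true syndrome appears anywhere among the model’s ten highest-ranked suggestions. A minimal sketch of that metric in Python, using made-up predictions rather than anything from FDNA, looks like this:

```python
def top_k_accuracy(ranked_predictions, true_labels, k=10):
    """Fraction of cases where the true label appears among the top-k suggestions.

    ranked_predictions: list of lists, each sorted from most to least likely syndrome.
    true_labels: the correct syndrome for each case.
    """
    hits = sum(1 for preds, truth in zip(ranked_predictions, true_labels)
               if truth in preds[:k])
    return hits / len(true_labels)


# Toy example with invented output, not real DeepGestalt predictions:
predictions = [
    ["Noonan", "Williams", "Angelman", "Cornelia de Lange"],
    ["Down", "Noonan", "Williams"],
]
truth = ["Williams", "Kabuki"]
print(top_k_accuracy(predictions, truth))  # 0.5: the first case hits, the second misses
```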

The system also out-performed clinical experts in three separate trials.

Many genetic disorders are associated with distinct facial features.

Some are easily recognizable while others are harder to spot.

People with Williams syndrome, for instance, have a short, upturned nose, a wide mouth, a small jaw and a large forehead.

Well-known features associated with Down’s syndrome include almond-shaped eyes, a round, flat face, and a small nose and mouth.

Yaron Gurovich, chief technology officer at FDNA and first author of the research published in the journal Nature Medicine, said: “The increased ability to describe phenotype in a standardized way opens the door to future research and applications, and the identification of new genetic syndromes.”

Writing in the journal, the researchers drew attention to the potential risk of abuse of the technology.

They warned: “Unlike genomic data, facial images are easily accessible.

“Payers or employers could potentially analyse facial images and discriminate based on the probability of individuals having pre-existing conditions or developing medical complications.”

Please like, share and tweet this article.

Pass it on: New Scientist

A Deodorant Maker is Using Machine Learning to Detect Your B.O.

Unilever — that’s the owner of prominent deodorant makers Axe and Dove — has teamed up with an all-star squad of academics and electronics manufacturers to create a machine learning-powered gadget that’ll tell you if you have body odor.

That’s according to a detailed story in the magazine IEEE Spectrum about Unilever’s work with chipmaker Arm, electronics firm PragmatIC, and researchers at the University of Manchester.

They aim to use some of the most advanced artificial intelligence (AI) and sensor technology in the world — to tell you whether you smell bad.




Smell-O-Vision

The gadget will take the form of a thin plastic strip, according to IEEE, with a tiny processor and an array of organic semiconductors that detect “gaseous analytes” — chemical signs, apparently, that you’re giving off a nasty pong.

And because those gaseous analytes are complex, the system will employ machine learning to analyze the data and decide whether it’s time for a fresh misting of the “hot chocolate” and “red peppercorn notes” from Axe’s Dark Temptation XL Body Spray.
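The article doesn’t say which tools the team will use for that analysis, but conceptually the step amounts to fitting a classifier to the sensor array’s readings. Here is a hedged Python sketch with synthetic data standing in for the strip’s analyte channels; the channel count, labels, and model choice are all assumptions on my part:

```python
# Hypothetical sketch: classify "odor present" vs. "fresh" from gas-sensor readings.
# The data here is synthetic; the real sensor layout and model are not public.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each sample is 8 analyte channel readings from the plastic strip.
n_samples, n_channels = 500, 8
X = rng.normal(size=(n_samples, n_channels))
# Fake ground truth: "odor present" when a couple of channels read high.
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))
```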

Food Waste

The technology wouldn’t just provide relief for your family and coworkers. It could also potentially evaluate food freshness, according to IEEE — possibly cutting into the 1.3 billion tons of food that went to waste in 2016.

And, to be fair, it also represents a step forward for AI and sensor technology, which have become adept at recognizing sights and sounds but struggled to categorize smells.

Please like, share and tweet this article.

Pass it on: New Scientist

Robots Will Know They’ve Been Blasted With a Shotgun

Light fibers in the silicone foam allow an AI system to detect how it’s being manipulated.

Soft robots could soon be everywhere: the squishy, malleable buggers might lead search and rescue missions, administer medication to specific organs, and maybe even crawl up your butt.

And now, soft robots will know how and when they’ve been bent out of shape — or shot full of holes by Arnold Schwarzenegger.

The trick is to simulate an animal’s peripheral nervous system with a network of fiber optic cables, according to research published Wednesday in the journal Science Robotics.

The Cornell University scientists behind the project hope that the tech could be used to build robots with a sense of whether they’ve been damaged.




Light Show

As the fiber optic cables, encased in a block of smart foam, bend and twist, the pattern and density of the light traveling through them changes in specific ways.

But the differences in light among various movements and manipulations are too minute for a human to spot, so the researchers trained a machine learning algorithm to analyze the shifts.

The AI system was trained to track how the light traveling through the fiber optic cables changed based on how researchers bent the foam.

Once it picked up on the patterns, according to the research, the machine learning algorithm could predict the type of bend with 100 percent accuracy — it always knew whether the foam was bent up, down, left, right, or the direction in which it had been twisted.

The whole experimental set-up.

From there, the system could estimate how far the foam had been bent or twisted, to within a margin of 0.06 degrees.
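Put differently, the researchers pair a classifier (which way was the foam bent?) with a regressor (by how much?), both driven by the light readings. The Python sketch below is a rough, hypothetical illustration of that pairing, with synthetic numbers in place of the real fiber-optic measurements:

```python
# Hypothetical sketch: pair a direction classifier with a magnitude regressor,
# using synthetic stand-ins for the fiber-optic intensity readings.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_samples, n_fibers = 400, 12

light = rng.normal(size=(n_samples, n_fibers))       # light pattern per fiber
direction = rng.integers(0, 5, size=n_samples)       # up / down / left / right / twist
angle = np.abs(light[:, 0] * 10) + rng.normal(scale=0.05, size=n_samples)  # degrees

direction_model = KNeighborsClassifier().fit(light, direction)
angle_model = Ridge().fit(light, angle)

new_reading = rng.normal(size=(1, n_fibers))
print("Predicted bend direction:", direction_model.predict(new_reading)[0])
print("Predicted bend magnitude (deg):", round(float(angle_model.predict(new_reading)[0]), 2))
```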

Baby Steps

Someday, technology like this fiber optic network might give rise to robots that could teach themselves to walk, the researchers said.

With this new form of high-tech proprioception, the sense that lets us determine where our limbs are in space without looking, futuristic robots may be able to keep track of their own shape, detect when they’ve been damaged, and better understand their surroundings.

Please like, share and tweet this article.

Pass it on: Popular Science

Walmart Agrees To Work With Ford On Self-Driving Grocery Deliveries

Ford is working with Postmates and Walmart on a pilot program for self-driving grocery deliveries, the companies announced on Wednesday.

“We are exploring how self-driving vehicles can deliver many everyday goods such as groceries, diapers, pet food and personal care items,” Ford said in a press release.

The grocery delivery pilot experiment will be based in Miami, where Ford’s self-driving car company, Argo, is already testing self-driving vehicles. Ford had been testing self-driving deliveries with Postmates prior to this announcement.

Like most car companies, Ford is racing to develop fully autonomous vehicle technology. But Ford has been more proactive than most of its competitors in exploring the non-technical aspects of a self-driving car service.

Last year, I got to sit in the seat suit of a fake self-driving car Ford was using to test pedestrian reactions to self-driving car technology.

Ford also experimented with delivering pizzas with mock-driverless vehicles in a partnership with Domino’s.




Ford’s collaboration with Postmates over the last few months has been focused on figuring out the best way for customers to interact with a delivery vehicle.

Driverless cars won’t have a driver to carry deliveries to the customer’s door, so self-driving vehicles will need some kind of locker that customers can open to remove their merchandise.

Ford has been experimenting with multi-locker delivery vans, allowing its cars to serve multiple customers on a single trip—without worrying about one customer swiping another’s deliveries.

Ford also announced last month that Washington, DC, will be the second city, after Miami, where Argo is preparing to launch a commercial service in 2021.

Ford has worked hard to cultivate relationships with local government officials, with Mayor Muriel Bowser attending last month’s announcement on DC’s waterfront.

Ford is betting that all of these preparations will help the company scale up quickly once its self-driving technology is ready.

That’s important because Ford appears to be significantly behind the market leaders: Waymo (which is aiming to launch a commercial service this year) and GM’s Cruise (aiming to launch in 2019).

But it’s also a risky strategy because if Argo’s technology isn’t ready on time, then all of Ford’s careful planning could turn out to be wasted effort.

Please like, share and tweet this article.

Pass it on: Popular Science

TSA Outlines Its Plans For Facial Recognition On Domestic Flights

The Transportation Security Administration (TSA) is determined to make facial recognition and other biometrics a regular part of the airport experience, and it now has a roadmap for that expansion.

The effort will start by teaming with Customs and Border Protection on biometric security for international travel, followed by putting the technology into use for TSA Precheck travelers to speed up their boarding process.

After that, it would both devise an “opt-in” biometric system for ordinary domestic passengers and flesh out a deeper infrastructure.




While this will include technology like fingerprint readers (primarily for trusted passengers), face identification will remain the “primary means” of verifying identities, the TSA said. As such, you can expect facial recognition to play a major role.

To some extent, the roadmap is already in progress. You can find the TSA testing fingerprint technology for Precheck users in Atlanta’s Hartsfield-Jackson airport, while Delta is poised to deploy a facial recognition terminal at the same location later in October.

The Administration’s roadmap sets far loftier goals, though. It sees facial recognition and other biometrics reducing the need for “high friction” documents like passports in addition to bolstering security.

There’s no firm timeline, however, and the roadmap only hints at addressing ethical issues like privacy in later studies.

That may prove to be one of the central obstacles to wider implementation. How, for instance, will the TSA ensure that face data isn’t misused and doesn’t fall into the wrong hands?

And will it do enough to prevent false positives that would ensnare innocent people? Until the TSA addresses issues like those, its dreams of widespread biometrics might not become real.

Please like, share and tweet this article.

Pass it on: Popular Science

Let Gmail Finish Your Sentences

Google’s new machine-learning tools for its mail service can save you time and typos — as long as you are comfortable sharing your thoughts with the software.

In theory, the Smart Compose tool can speed up your message composition and cut down on typographical errors.

While “machine learning” means the software (and not a human) is scanning your work-in-progress to get information for the predictive text function, you are sharing information with Google when you use its products.

If you have already updated to the new version of Gmail, you can try out Smart Compose by going to the General tab in Settings and turning on the check box next to “Enable experimental access”.

Next, click Save Changes at the bottom of the Settings screen.




When you return to the General tab of the Gmail settings, scroll down to the newly arrived Smart Compose section and confirm that “Writing suggestions on” is enabled.

If you do not care for Google’s assistance after sampling the feature, you can return to the settings and click “Writing suggestions off” to disable Smart Compose.

Once you enable it in the settings, Gmail’s new Smart Compose feature can finish your sentences for you as you type.
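Under the hood, a suggestion feature like this boils down to predicting a likely continuation of what you have typed so far. Gmail’s real model is far more sophisticated, but a toy bigram predictor in Python (with an invented scrap of corpus) conveys the basic idea:

```python
# Toy next-word predictor (a bigram model), purely to illustrate suggesting a
# continuation from typed text. Gmail's actual Smart Compose model is far more advanced.
from collections import Counter, defaultdict

corpus = (
    "thanks for the update let me know if you have any questions "
    "let me know if you need anything thanks for the help"
).split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def suggest(text):
    """Suggest the most common continuation of the last typed word, if any."""
    last = text.split()[-1].lower()
    candidates = following.get(last)
    return candidates.most_common(1)[0][0] if candidates else None

print(suggest("Let me know if you"))   # prints "have"
```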

The new feature is available only for English composition at the moment, and a disclaimer from Google warns: “Smart Compose is not designed to provide answers and may not always predict factually correct information.”

Google also warns that experimental tools like Smart Compose are still under development and that the company may change or remove the features at any time.

Please like, share and tweet this article.

Pass it on: Popular Science

A Brain Implant Brings a Quadriplegic’s Arm Back to Life

Ian Burkhart lifts a peg using his paralyzed right arm, thanks to a machine interface that can read his thoughts and execute them on his behalf.

Ian Burkhart has been a cyborg for two years now. In 2014, scientists at Ohio State’s Neurological Institute implanted a pea-sized microchip into the 24-year-old quadriplegic’s motor cortex.

Its goal: to bypass his damaged spinal cord and, with the help of a signal decoder and electrode-packed sleeve, control his right arm with his thoughts. Cue the transhumanist cheers!

Neuroengineers have been developing these so-called brain-computer interfaces for more than a decade.

They’ve used readings from brain implants to help paralyzed patients play Pong on computer screens and control robotic arms. But Burkhart is the first patient who’s been able to use his implant to control his actual arm.

Over the past 15 months, researchers at the Ohio State University Wexner Medical Center and engineers from Battelle, the medical group that developed the decoder software and electrode sleeve, have helped Burkhart relearn fine motor skills with weekly training sessions.

In a paper in Nature, they describe hooking a cable from the port screwed into Burkhart’s skull (where the chip is) to a computer that translates the brain signals into instructions for the sleeve, which stimulates his muscles into moving his wrist and fingers.

When Burkhart thinks “clench fist,” for example, the implanted electrodes record the activity in his motor cortex.

Those signals are decoded in real-time, jolting his arm muscles in all the right places so that his fingers curl inwards. But he can do more than make a fist: Using the one-of-a-kind system, he’s learned to shred a video game guitar, pour objects from a bottle, and pick up a phone.
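Conceptually, the loop is: record cortical activity, decode it into an intended movement, then translate that movement into stimulation patterns for the electrode sleeve. The hypothetical Python sketch below illustrates that flow; the channel count, movement labels, decoder, and muscle names are invented for illustration, not taken from the Battelle system:

```python
# Hypothetical sketch of the record -> decode -> stimulate loop.
# Class names, channel counts, and the decoder are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

MOVEMENTS = ["rest", "clench_fist", "open_hand", "swipe_card"]

# Pretend training data: 96-channel firing-rate vectors labelled with intended movement.
rng = np.random.default_rng(42)
X_train = rng.normal(size=(800, 96))
y_train = rng.integers(0, len(MOVEMENTS), size=800)

decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def stimulation_pattern(movement):
    """Map a decoded movement onto (made-up) electrode groups in the sleeve."""
    patterns = {
        "rest": [],
        "clench_fist": ["flexor_digitorum"],
        "open_hand": ["extensor_digitorum"],
        "swipe_card": ["flexor_digitorum", "pronator_teres"],
    }
    return patterns[movement]

# One iteration of the real-time loop: decode the latest neural sample, drive the sleeve.
latest_sample = rng.normal(size=(1, 96))
movement = MOVEMENTS[int(decoder.predict(latest_sample)[0])]
print("Decoded intent:", movement, "-> stimulate:", stimulation_pattern(movement))
```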




“Card swiping is the most impressive movement right now,” says Herb Bresler, a senior researcher at Battelle. “It demonstrates fine grip as well as coarse hand movements.”

If Burkhart can swipe credit cards after a year, he might play the piano after five—that’s how long similar chips have lasted—because he and the computer have been learning from each other.

But the implant will stop collecting signals in July when it’s removed, even if the chip is still providing good data, because the clinical trial was structured for a two-year period.

In those two years, the computer trained itself on Burkhart’s thoughts, learning which signals translate to what movements, while he figured out how to make commands more clearly (often with the help of visual cues).

“That’s the real achievement here. We’ve shown we know how to process the data,” says Bresler. “The chip is a limiting factor. We need to work on new ways of collecting brain signals.”

Though similar neuroprosthetics have been helpful in reducing tremors in Parkinson’s patients, they still have a ways to go.

Besides the serious, invasive surgery, there’s always a chance the body will reject an array, blocking any attempts to record and transmit brain signals while ensuring you get patted down at every airport security scanner, forever. “Something will replace this array,” says Bresler.

“Future signal collection devices will cover a larger area of the brain and be less invasive.”

Drawbacks aside, the electrode sleeve and decoding software wouldn’t be where they are today without the array driving them.

With improved collection devices, these products could eventually help stroke victims recover by reteaching their brain to use their limbs, while quadriplegics could mount similar systems on their wheelchairs.

At the very least, the neuroprosthetic experiment suggests that in the future, paralysis might not mean dependence—and that deserves a fist bump.

Please like, share and tweet this article.

Pass it on: Popular Science

How The Media Gets AI Alarmingly Wrong

In June of last year, five researchers at Facebook’s Artificial Intelligence Research unit published an article showing how bots can simulate negotiation-like conversations.

While for the most part the bots were able to maintain coherent dialogue, the researchers found that the software agents would occasionally generate strange sentences like: “Balls have zero to me to me to me to me to me to me to me to.”

On seeing these results, the team realized that they had failed to include a constraint limiting the bots to generating sentences within the parameters of spoken English, meaning that the bots had developed a type of machine-English patois to communicate between themselves.
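The missing constraint amounts to scoring candidate replies not just by negotiation reward but also by how plausible they are as English. The Python sketch below is purely illustrative; both scoring functions are invented stand-ins, not the FAIR implementation:

```python
# Illustrative sketch of the missing constraint: score candidate replies by task
# reward *and* an English-likelihood term, so the agent doesn't drift into
# degenerate "machine-English". Both functions are hypothetical stand-ins.

def task_reward(utterance):
    """Hypothetical: how well this reply advances the negotiation."""
    return 1.0 if "deal" in utterance else 0.5

def english_log_likelihood(utterance):
    """Hypothetical: crude proxy that penalizes repeated tokens, standing in for a real LM."""
    tokens = utterance.split()
    return -len(tokens) + len(set(tokens))   # more repetition -> lower score

def choose_reply(candidates, lm_weight=1.0):
    return max(candidates,
               key=lambda u: task_reward(u) + lm_weight * english_log_likelihood(u))

replies = [
    "balls have zero to me to me to me to me",     # degenerate but possibly task-rewarding
    "i can give you one ball if we have a deal",
]
print(choose_reply(replies))   # picks the coherent, English-like reply
```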

These findings were considered to be fairly interesting by other experts in the field, but not totally surprising or groundbreaking.

A month after this initial research was released, Fast Company published an article entitled “AI Is Inventing Language Humans Can’t Understand. Should We Stop It?”




The story focused almost entirely on how the bots occasionally diverged from standard English – which was not the main finding of the paper – and reported that after the researchers “realized their bots were chattering in a new language” they decided to pull the plug on the whole experiment, as if the bots were in some way out of control.

The ice of AI’s first winter only fully retreated at the beginning of this decade after a new generation of researchers started publishing papers about successful applications of a technique called “deep learning”.

While this was fundamentally a decades-old statistical technique similar to Frank Rosenblatt’s 1950s-era perceptron, increases in computational power and the availability of huge data sets meant that deep learning was becoming practical for tasks such as speech recognition, image recognition and language translation.

As reports of deep learning’s “unreasonable effectiveness” circulated among researchers, enrollments at universities in machine-learning classes surged, corporations started to invest billions of dollars to find talent familiar with the newest techniques, and countless startups attempting to apply AI to transport or medicine or finance were founded.

As this resurgence got under way, AI hype in the media resumed after a long hiatus.

In 2013, John Markoff wrote a feature in the New York Times about deep learning and neural networks with the headline “Brainlike Computers, Learning From Experience”.

Not only did the title recall the media hype of 60 years earlier, so did some of the article’s assertions about what was being made possible by the new technology.

Since then, far more melodramatic and overblown articles about “AI apocalypse”, “artificial brains”, “artificial superintelligence” and “creepy Facebook bot AIs” have filled the news feed daily.

Please like, share and tweet this article.

Pass it on: Popular Science