Tag: artificial intelligence

A Self-driving Uber In Arizona Kills A Woman In First Fatal Crash Involving Pedestrian

An autonomous Uber car killed a woman in the street in Arizona, police said, in what appears to be the first reported fatal crash involving a self-driving vehicle and a pedestrian in the US.

Tempe police said the self-driving car was in autonomous mode at the time of the crash and that the vehicle hit a woman, who was walking outside of the crosswalk and later died at a hospital.

There was a vehicle operator inside the car at the time of the crash.

Uber said in a statement on Twitter: “Our hearts go out to the victim’s family. We are fully cooperating with local authorities in their investigation of this incident.” A spokesman declined to comment further on the crash.

The company said it was pausing its self-driving car operations in Phoenix, Pittsburgh, San Francisco and Toronto.

Dara Khosrowshahi, Uber’s CEO, tweeted: “Some incredibly sad news out of Arizona. We’re thinking of the victim’s family as we work with local law enforcement to understand what happened.”

Uber has been testing its self-driving cars in numerous states and temporarily suspended its vehicles in Arizona last year after a crash involving one of its vehicles, a Volvo SUV.

When the company first began testing its self-driving cars in California in 2016, the vehicles were caught running red lights, leading to a high-profile dispute between state regulators and the San Francisco-based corporation.

Police identified the victim as 49-year-old Elaine Herzberg and said she was walking outside of the crosswalk with a bicycle when she was hit at around 10pm on Sunday. Images from the scene showed a damaged bike.

The 2017 Volvo SUV was traveling at roughly 40 miles an hour, and it did not appear that the car slowed down as it approached the woman, said Tempe sergeant Ronald Elcock.

Elcock said he had watched footage of the collision, which has not been released to the public. He also identified the operator of the car as Rafael Vasquez, 44, and said he was cooperative and there were no signs of impairment.

The self-driving technology is supposed to detect pedestrians, cyclists and others and prevent crashes.

John M Simpson, privacy and technology project director with Consumer Watchdog, said the collision highlighted the need for tighter regulations of the nascent technology.

“The robot cars cannot accurately predict human behavior, and the real problem comes in the interaction between humans and the robot vehicles,” said Simpson, whose advocacy group called for a national moratorium on autonomous car testing in the wake of the deadly collision.

Simpson said he was unaware of any previous fatal crashes involving an autonomous vehicle and a pedestrian.

Pass it on: Popular Science

You’ll Never Dance Alone With This Artificial Intelligence Project

Your next dance partner might not be a person.

A new project from the Georgia Institute of Technology allows people to get jiggy with a computer-controlled dancer, which “watches” the person and improvises its own moves based on prior experiences.

When the human responds, the computerized figure or “virtual character” reacts again, creating an impromptu dance couple based on artificial intelligence (AI).

The LuminAI project is housed inside a 15-foot-tall geodesic dome, designed and constructed by Georgia Tech digital media master’s student Jessica Anderson, and lined with custom-made projection panels for dome projection mapping.

The surfaces allow people to watch their own shadowy avatar as it struts with a virtual character named VAI, which learns how to dance by paying attention to which moves the current user is doing and when.

The more moves it sees, the richer the computer’s dance vocabulary becomes. It then uses this vocabulary as a basis for future improvisation.

The system uses Kinect devices to capture the person’s movement, which is then projected as a digitally enhanced silhouette on the dome’s screens.

The computer analyzes the dance moves being performed and leans on its memory to choose its next move.
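
The project’s code isn’t public, so here is a minimal Python sketch of the loop described above, under stated assumptions: moves are summarized as small feature vectors, every observed pair of a move and its follow-up is remembered, and the system improvises by answering the closest remembered move with its stored follow-up, chosen with a little randomness. All class and variable names are hypothetical.

```python
import random
import numpy as np

class DanceMemory:
    """Toy vocabulary of observed dance moves (hypothetical API)."""

    def __init__(self):
        self.moves = []      # feature vectors of partner moves seen so far
        self.followups = []  # the move that was danced in response

    def observe(self, move, followup):
        # Grow the vocabulary: remember the move and what followed it.
        self.moves.append(np.asarray(move, dtype=float))
        self.followups.append(np.asarray(followup, dtype=float))

    def improvise(self, current_move, k=3):
        # Answer with the follow-up of one of the k most similar
        # remembered moves, chosen at random so the response is not
        # identical every time a familiar move appears.
        if not self.moves:
            return None
        current = np.asarray(current_move, dtype=float)
        dists = [np.linalg.norm(current - m) for m in self.moves]
        nearest = np.argsort(dists)[:k]
        return self.followups[random.choice(list(nearest))]

# Toy usage: 3-number "poses" stand in for real Kinect skeleton features.
memory = DanceMemory()
memory.observe([0.1, 0.9, 0.0], [0.8, 0.2, 0.5])
memory.observe([0.2, 0.8, 0.1], [0.0, 0.1, 0.9])
print(memory.improvise([0.15, 0.85, 0.05]))
```

Picking at random among the nearest matches is one simple way to keep the avatar from reacting identically every time, which the team highlights below.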

The team says this improvisation is one of the most important parts of the project. The avatar recognizes patterns, but doesn’t react the same way every time.

That means that the person must improvise too, which leads to greater creativity all around. All the while, the computer is capturing these new experiences and storing the information to use as a basis for future dance sessions.

LuminAI was unveiled for the first time this past weekend in Atlanta at the Hambidge Art Auction in partnership with the Goat Farm Arts Center.

It was featured within a dance and technology performance, in a work called Post, as a finalist for the Field Experiment ATL grant. T. Lang Dance performed set choreography with avatars and virtual characters within the dome.

Post is the fourth and final installment of Lang’s Post Up series, which focuses on the stark realities and situational complexities after an emotional reunion between long lost souls.

Pass it on: New Scientist

Google Clips: A Smart Camera That Doesn’t Make The Grade

Picture this: you’re hanging out with your kids or pets and they spontaneously do something interesting or cute that you want to capture and preserve.

But by the time you’ve gotten your phone out and its camera opened, the moment has passed and you’ve missed your opportunity to capture it.

That’s the main problem that Google is trying to solve with its new Clips camera, a $249 device available starting today that uses artificial intelligence to automatically capture important moments in your life.

Google says it’s for all of the in-between moments you might miss when your phone or camera isn’t in your hand.

It is meant to capture your toddler’s silly dance or your cat getting lost in an Amazon box without requiring you to take the picture.

The other issue Google is trying to solve with Clips is letting you spend more time interacting with your kids directly, without having a phone or camera separating you, while still getting some photos.

That’s an appealing pitch to both parents and pet owners alike, and if the Clips camera system is able to accomplish its goal, it could be a must-have gadget for them.

But if it fails, then it’s just another gadget that promises to make life easier, but requires more work and maintenance than it’s worth.

The problem for Google Clips is it just doesn’t work that well.

Before we get into how well Clips actually works, I need to discuss what it is and what exactly it’s doing because it really is unlike any camera you’ve used before.

At its core, the Clips camera is a hands-free automatic point-and-shoot camera that’s sort of like a GoPro, but considerably smaller and flatter.

It has a cute, unassuming appearance that is instantly recognizable as a camera, or at least an icon of a camera app on your phone.

Google, aware of how a “camera that automatically takes pictures when it sees you” is likely to be perceived, is clearly trying to make the Clips appear friendly, with its white-and-teal color scheme and obvious camera-like styling.

But among the people I showed the camera to while explaining what it’s supposed to do, “it’s creepy” was a common reaction.

One thing that I’ve discovered is that people know right away it’s a camera and react to it just like any other camera.

That might mean avoiding its view when they see it, or, like in the case of my three-year-old, walking up to it and smiling or picking it up.

That has made it tough to capture candids, since, for the Clips to really work, it needs to be close to its subject.

Maybe over time, your family would learn to ignore it and those candid shots could happen, but in my couple weeks of testing, my family hasn’t acclimated to its presence.

The Clips’ camera sensor can capture 12-megapixel images at 15 frames per second, which it then saves to its 16GB of internal storage that’s good for about 1,400 seven-second clips.
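
A quick back-of-envelope check of those quoted specs (my arithmetic, not an official spec sheet):

```python
# Back-of-envelope check of the quoted Clips specs.
storage_gb = 16
clip_count = 1400
clip_seconds = 7
fps = 15

mb_per_clip = storage_gb * 1024 / clip_count  # ~11.7 MB per clip
frames_per_clip = clip_seconds * fps          # 105 frames per clip

print(f"~{mb_per_clip:.1f} MB per clip, {frames_per_clip} frames each")
```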

The battery lasts roughly three hours between charges.

Included with the camera is a silicone case that makes it easy to prop up almost anywhere or, yes, clip it to things. It’s not designed to be a body camera or to be worn.

Instead, it’s meant to be placed in positions where it can capture you in the frame as well.

There are other accessories you can buy, like a case that lets you mount the Clips camera to a tripod for more positioning options, but otherwise, using the Clips camera is as simple as turning it on and putting it where you want it.

Once the camera has captured a bunch of clips, you use the app to browse through them on your phone, edit them down to shorter versions, grab still images, or just save the whole thing to your phone’s storage for sharing and editing later.

The Clips app is supposed to learn based on which clips you save and deem “important” and then prioritize capturing similar clips in the future.

You can also hit a toggle to view “suggested” clips for saving, which is basically what the app thinks you’ll like out of the clips it has captured.
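
Google hasn’t published how this suggestion model works, so the following Python sketch only illustrates the behavior described: build a preference profile from the clips you save, then rank new clips by similarity to it. The feature vectors and function names are hypothetical stand-ins.

```python
import numpy as np

def suggest(clips, saved, top_n=5):
    """Rank candidate clips by similarity to the clips a user saved.

    Both arguments map clip ids to feature vectors; in a real system
    these would come from a vision model, not hand-typed numbers.
    """
    if not saved:
        return list(clips)[:top_n]
    profile = np.mean(list(saved.values()), axis=0)  # preference profile
    def score(vec):  # cosine similarity to the profile
        return float(np.dot(vec, profile) /
                     (np.linalg.norm(vec) * np.linalg.norm(profile) + 1e-9))
    ranked = sorted(clips.items(), key=lambda kv: score(kv[1]), reverse=True)
    return [clip_id for clip_id, _ in ranked[:top_n]]

saved = {"clip_a": np.array([0.9, 0.1, 0.2])}   # e.g. "kid dancing"
unseen = {"clip_b": np.array([0.8, 0.2, 0.1]),  # similar to what was saved
          "clip_c": np.array([0.0, 0.1, 0.9])}  # dissimilar
print(suggest(unseen, saved, top_n=1))  # -> ['clip_b']
```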

Google’s definitely onto something here. The idea is an admirable first step toward a new kind of camera that doesn’t get between me and my kids. But first steps are tricky — ask any toddler!

Usually, after you take your first step, you fall down. To stand back up, Google Clips needs to justify its price, the hassle of setting it up, and the fiddling between it and my phone.

It needs to reassure me that by trusting it and putting my phone away, I won’t miss anything important, and I won’t be burdened by having to deal with a lot of banal captures.

Otherwise, it’s just another redundant gadget that I have to invest too much time and effort into managing to get too little in return.

That’s a lot to ask of a tiny little camera, and this first version doesn’t quite get there. To live up to it all, Clips needs to be both a better camera and a smarter one.

Pass it on: Popular Science

The Myths Of Robots: They Are Strong, Smart And Evil

Some of today’s top techies and scientists are very publicly expressing their concerns over apocalyptic scenarios that could arise as a result of machines with motives.

Among the fearful are intellectual heavyweights like Stephen Hawking, Elon Musk, and Bill Gates, who all believe that advances in the field of machine learning will soon yield self-aware A.I.s that seek to destroy us.

Or perhaps just apathetically dispose of us, much like scum getting obliterated by a windshield wiper.

In fact, Dr. Hawking told the BBC, “The development of full artificial intelligence could spell the end of the human race.”

Indeed, there is little doubt that future A.I. will be capable of doing significant damage.

For example, it is conceivable that robots could be programmed to function as tremendously dangerous autonomous weapons unlike any seen before.

Additionally, it is easy to imagine an unconstrained software application that spreads throughout the Internet, severely mucking up our most efficient and relied upon medium for global exchange.

But these scenarios are categorically different from ones in which machines decide to turn on us, defeat us, make us their slaves, or exterminate us.

In this regard, we are unquestionably safe. On a sadder note, we are just as unlikely to someday have robots that decide to befriend us or show us love without being specifically prompted by instructions to do so.

This is because such intentional behavior from an A.I. would undoubtedly require a mind, as intentionality can only arise when something possesses its own beliefs, desires, and motivations.

The type of A.I. that includes these features is known amongst the scientific community as “Strong Artificial Intelligence”. Strong A.I., by definition, should possess the full range of human cognitive abilities.

This includes self-awareness, sentience, and consciousness, as these are all features of human cognition.

On the other hand, “Weak Artificial Intelligence” refers to non-sentient A.I. The Weak A.I. Hypothesis states that our robots—which run on digital computer programs—can have no conscious states, no mind, no subjective awareness, and no agency.

Such A.I. cannot experience the world qualitatively, and although it may exhibit seemingly intelligent behavior, it is forever limited by its lack of a mind.

A failure to recognize the importance of this strong/weak distinction could be contributing to the existential worries of Hawking and Musk, both of whom believe that we are already well on a path toward developing Strong A.I.

To them it is not a matter of “if”, but “when”.

But the fact of the matter is that all current A.I. is fundamentally Weak A.I., and this is reflected by today’s computers’ total absence of any intentional behavior whatsoever.

Although there are some very complex and relatively convincing robots out there that appear to be alive, upon closer examination they all reveal themselves to be as motiveless as the common pocket calculator.

This is because brains and computers work very differently. Both compute, but only one understands—and there are some very compelling reasons to believe that this is not going to change.

Beyond that conceptual point, it appears that a more technical obstacle stands in the way of Strong A.I. ever becoming a reality.

Pass it on: Popular Science

You Might Apply For Your Next Job By Playing A Mobile Game

The next job you apply for could involve a challenge even before you submit your resume. Two companies are gamifying the recruiting process to change the way they search for talented candidates.

It’s not surprising, given the major shift in the way people look for jobs over the last decade.

Research from the Boston Consulting Group and Recruit Works Institute reveals that 55% of searches globally happened through Internet job sites and 35% via a smartphone.

Applying through social media and submitting a video interview are rapidly becoming more accepted.

But on the recruiters’ side, things aren’t changing as quickly, even though 95% of companies admit to making bad hires each year.

Communication channels are broken or unused as employers invest resources in less efficient ways to attract talent.

According to Talent Board, as many as 88% of employers allow candidates to complete their applications even after they fail screening questions.

And those who rely on software to automate the recruitment process could unknowingly be discriminating against qualified, diverse candidates.

Changing recruiting wasn’t the original intent for CodeFights. The platform was designed to offer users a way to learn and improve their coding skills by proposing, solving, and discussing challenges with other programmers.

Pass it on: New Scientist

When A Machine Learning Algorithm Studied Fine Art Paintings, It Saw Things Art Historians Had Never Noticed

Georges Braque’s Man with a Violin (left) and Pablo Picasso’s Spanish Still Life: Sun and Shadow, both painted in 1912

The task of classifying pieces of fine art is hugely complex. When examining a painting, an art expert can usually determine its style, its genre, the artist and the period to which it belongs.

Art historians often go further by looking for the influences and connections between artists, a task that is even trickier.

So the possibility that a computer might be able to classify paintings and find connections between them at first glance seems laughable.

And yet, that is exactly what Babak Saleh and pals have done at Rutgers University in New Jersey.

These guys have used some of the latest image processing and classifying techniques to automate the process of discovering how great artists have influenced each other.

They have even been able to uncover influences between artists that art historians have never recognized until now.

The way art experts approach this problem is by comparing artworks according to a number of high-level concepts such as the artist’s use of space, texture, form, shape, color and so on.

Experts may also consider the way the artist uses movement in the picture, harmony, variety, balance, contrast, proportion and pattern.

Other important elements can include the subject matter, brushstrokes, meaning, historical context and so on. Clearly, this is a complex business.

So it is easy to imagine that the limited ability computers have for analyzing two-dimensional images would make this process more or less impossible to automate. But Saleh and co show how it can be done.

At the heart of their method is a new technique, developed at Dartmouth College in New Hampshire and Microsoft Research in Cambridge, UK, for classifying pictures according to the visual concepts that they contain.

These concepts are called classemes and include everything from simple object descriptions such as duck, frisbee, man, and wheelbarrow, to shades of color, to higher-level descriptions such as dead body, body of water, walking and so on.

Comparing images is then a process of comparing the words that describe them, for which there are a number of well-established techniques.

Left: Portrait of Pope Innocent X (1650) by Diego Velázquez. Right: Study After Velázquez’s Portrait of Pope Innocent X (1953) by Francis Bacon

For each painting, they limit the number of concepts and points of interest generated by their method to 3000 in the interests of efficient computation.

This process generates a list of describing words that can be thought of as a kind of vector. The task is then to look for similar vectors using natural language techniques and a machine learning algorithm.
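
As a rough illustration of that comparison step, here is a Python sketch: each painting becomes a fixed-length vector of concept confidences, and painting similarity reduces to cosine similarity between those vectors. The six-word vocabulary and the confidence values are invented for the example; the real classeme vocabulary runs to thousands of concepts.

```python
import numpy as np

# A tiny, made-up classeme vocabulary; the real feature space uses
# thousands of learned visual concepts (capped at 3,000 per painting).
VOCAB = ["man", "violin", "duck", "body of water", "dead body", "walking"]

def classeme_vector(detections):
    """Turn {concept: confidence} detections into a fixed-length vector."""
    return np.array([detections.get(c, 0.0) for c in VOCAB])

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

braque = classeme_vector({"man": 0.9, "violin": 0.8})
picasso = classeme_vector({"man": 0.7, "violin": 0.6, "walking": 0.1})
print(f"similarity = {cosine(braque, picasso):.2f}")
```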

Determining influence is harder, though, since influence is itself a difficult concept to define. Should one artist be deemed to influence another if one painting has a strong similarity to another?

Or should there be a number of similar paintings and if so how many?

So Saleh and co experiment with a number of different metrics. They end up creating two-dimensional graphs with metrics of different kinds on each axis and then plotting the position of all of the artists in this space to see how they are clustered.
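
The article doesn’t spell out which metrics Saleh and co tried, so the sketch below shows just one plausible candidate, an assumption for illustration rather than the paper’s definition: score a potential influence by the smallest distance between any painting by the earlier artist and any painting by the later one.

```python
import numpy as np

def influence_score(earlier_paintings, later_paintings):
    """Smallest distance between any pair of paintings by the two
    artists; an assumed metric for illustration, not the paper's."""
    return min(np.linalg.norm(a - b)
               for a in earlier_paintings
               for b in later_paintings)

# Paintings as classeme-style vectors (toy data).
delacroix = [np.array([0.9, 0.1]), np.array([0.7, 0.3])]
bazille = [np.array([0.8, 0.2]), np.array([0.1, 0.9])]
print(f"{influence_score(delacroix, bazille):.2f}")  # small = closer link
```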

The results are interesting. In many cases, their algorithm clearly identifies influences that art experts have already found.

For example, the graphs show that the Austrian painter Klimt is close to Picasso and Braque, and indeed experts are well acquainted with the idea that Klimt was influenced by both these artists.

The algorithm also identifies the influence of the French romantic Delacroix on the French impressionist Bazille, the Norwegian painter Munch’s influence on the German painter Beckmann and Degas’ influence on Caillebotte.

The algorithm is also able to identify individual paintings that have influenced others.

It picked out Georges Braque’s Man with a Violin and Pablo Picasso’s Spanish Still Life: Sun and Shadow, both painted in 1912, as a pair with a well-known connection: pictures that helped found the Cubist movement.

A visual inspection shows the link clearly. The yellow circles in the images show similar objects, the red lines show composition and the blue square shows a similar structural element, say Saleh and co.

That is interesting stuff. Of course, Saleh and co do not claim that this kind of algorithm can take the place of an art historian.

After all, the discovery of a link between paintings in this way is just the starting point for further research about an artist’s life and work.

But it is a fascinating insight into the way that machine learning techniques can throw new light on a subject as grand and well studied as the history of art.

Pass it on: New Scientist

What Is Alphabet’s Chronicle?

Google’s parent company Alphabet has launched Chronicle, a business that will specialize in leveraging machine learning in cyber security.

Chief executive Stephen Gillett says that the company will be split in two. On the one hand it will provide a cyber security and analytics platform that will target enterprise customers and help them “better manage and understand their own security-related data”.

The other side of the business will be VirusTotal, which is a malware intelligence service Google picked up in 2012. This will continue to operate as it has been doing.

For some years now a slew of security vendors have touted machine learning as their key differentiator against rivals on the market. There is an aspect of snake oil to some of it.

But there are also companies like the darlings of the infosec market at the moment, Darktrace, that are using genuine machine learning for threat detection.

It’s no secret that Alphabet and Google are at the frontier of machine learning and artificial intelligence.

Writing in Medium, Chronicle CEO Stephen Gillett says that while the company will be an independent business under the Google umbrella, it will “have the benefit of being able to consult the world-class experts in machine learning and cloud computing” at Alphabet.

Where did Chronicle come from?

Chronicle emerged in 2016 from the labs of Alphabet’s mysteriously named X – Google’s incubation hub for ‘moonshot’ projects – and incorporates VirusTotal, which Google bought in 2012.

CEO Stephen Gillett began working at Google in 2015 and has a history of work at cyber security companies.

Other people in leadership roles at Chronicle include Mike Wiacek and Shapor Naghibzadeh, who together have more than 20 years of security experience at Google.

Bernardo Quintero of VirusTotal will continue to work with Chronicle.

Pass it on: Popular Science

This Artificial Intelligence Eavesdrops On Emergency Calls To Warn Of Possible Cardiac Arrests

When you phone 911, you’re patched through to a trained human who is able to properly triage your phone call.

Soon, however, you could also find yourself being listened to by a robot, one that tunes in to very different verbal information than the human emergency dispatcher does.

Developed by Danish startup Corti, this emergency call-listening artificial intelligence is designed to listen to the caller for signs that they may be about to go into cardiac arrest.

When it makes such a diagnosis, it then alerts the human dispatcher so that they can take the proper steps.

“Corti is meant to be a digital co-pilot for medical personnel,” Andreas Cleve, CEO of Corti, said.

“Like a human doctor, Corti analyzes everything a patient says and shares in real time — from journal data, symptom descriptions, voice data, acoustic data, language data, their dialect, questions, and even their breathing patterns.

“Corti then outputs diagnostic advice to the medical personnel, to help them diagnose patients faster. This can be especially powerful in an emergency use case where mistakes can be fatal.”

As the company’s Chief Technology Officer Lars Maaloe told us, the technology framework uses deep learning neural networks trained on years of historical emergency calls.
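
Corti’s model is proprietary, so the following Python sketch only captures the general shape of the problem: learn a classifier over per-call features and alert the dispatcher when the predicted risk crosses a threshold. A toy logistic regression on synthetic data stands in for the deep networks the article describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for per-call features (the article mentions voice,
# acoustic and language data, even breathing patterns, as signals).
# Label 1 = historical call where the patient went into cardiac arrest.
X = rng.normal(size=(200, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(float)

# Minimal logistic-regression "detector" trained by gradient descent.
w, b = np.zeros(8), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

def alert(call_features, threshold=0.8):
    """Flag the human dispatcher when the predicted risk is high."""
    risk = 1 / (1 + np.exp(-(call_features @ w + b)))
    return risk, bool(risk > threshold)

print(alert(X[0]))
```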

While it hasn’t yet been peer-reviewed, the team is currently working on this. A paper describing the work is likely to be published later in 2018.

“Today the technology is being used in Copenhagen EMS, who have spearheaded the application of machine learning in the prehospital space worldwide,” Cleve said.

“At Copenhagen EMS, our technology is able to give emergency call takers diagnostic advice in natural language, and it’s integrated directly into the software they are already using.

“Our goal is to make it easier for medical personnel to do their jobs, not complicate it further with fancier technology.

“We are extremely skeptical of the idea of rushing to replace trained medical personnel with A.I., since from both ethical and professional perspective we prefer human contact when it comes to our health.

“Personally, I simply can’t see myself preferring a bot over a medically trained human agent. But the setup where humans are amplified by A.I.? That to us is a far more powerful scenario in healthcare.”

Pass it on: New Scientist

Google’s DeepMind AI Fakes Some Of The Most Realistic Human Voices Yet

Google’s DeepMind artificial intelligence has produced what could be some of the most realistic-sounding machine speech yet.

WaveNet, as the system is called, generates voices by sampling real human speech and directly modeling audio waveforms based on it, as well as its previously generated audio.

In Google’s tests, both English and Mandarin Chinese listeners found WaveNet more realistic than other types of text-to-speech programs, although it was less convincing than actual human speech.

If that weren’t enough, it can also play the piano rather well.

Text-to-speech programs are increasingly important for computing, as people begin to rely on bots and AI personal assistants like Apple’s Siri, Microsoft’s Cortana, Amazon’s Alexa, and the Google Assistant.

If you ask Siri or Cortana a question, though, they’ll reply with actual recordings of a human voice, rearranged and combined in small pieces.

This is called concatenative text to speech, and as one expert puts it, it’s a little like a ransom note.

The results are often fairly realistic, but as Google writes, producing a new audio persona or tone of voice requires having an actor record every possible sound in a database.

The alternative is parametric text to speech — building a completely computer-generated voice, using coded rules based on grammar or mouth sounds.

Parametric voices don’t need source material to produce words. But the results, at least in English, are often stilted and robotic.

Google’s system is still based on real voice input. But instead of chopping up recordings, it learns from them, then independently creates its own sounds in a variety of voices.
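
The defining idea is autoregression: every new audio sample is generated conditioned on the samples that came before it. The Python sketch below shows that sampling loop in miniature; the toy predictor is a stand-in for WaveNet’s actual deep stack of dilated convolutions, which outputs a distribution over quantized amplitude levels thousands of times per second.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_waveform(predict_next, n_samples, context_len=16):
    """Generic autoregressive sampling loop: each new sample is drawn
    conditioned on the audio generated so far."""
    audio = [0.0] * context_len  # silent seed context
    for _ in range(n_samples):
        context = np.array(audio[-context_len:])
        audio.append(predict_next(context))
    return np.array(audio[context_len:])

def toy_predictor(context):
    # Stand-in for the trained network: a damped echo of the recent
    # past plus noise. WaveNet instead outputs a distribution over
    # quantized amplitude levels and samples from it.
    return 0.9 * context[-1] - 0.3 * context[-2] + rng.normal(scale=0.05)

wave = sample_waveform(toy_predictor, n_samples=100)
print(wave[:5])
```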

Granted, there’s already plenty of generative music, but it’s not nearly as complicated as making speech that humans will recognize as their own.

On a scale from 1 (not realistic) to 5 (very realistic), listeners in around 500 blind tests rated WaveNet at 4.21 in English and 4.08 in Mandarin.

While even human speech didn’t get a perfect 5, it was still higher, at 4.55 in English and 4.21 in Mandarin. On the other hand, WaveNet outperformed other methods by a wide margin.

Pass it on: New Scientist

Google’s AI Found An Overlooked Exoplanet

NASA has discovered an eighth planet around a distant star, which means ours is no longer the only solar system we know of with eight planets.

The discovery was made thanks to some artificial intelligence help from Google, which found the planet by scouring previously overlooked “weak” signals in data captured by the Kepler Space Telescope.

The newly found planet is located in the solar system around Kepler-90, a star about 2,500 light-years away from Earth whose planetary system was first discovered in 2014.

The Kepler Space Telescope has been searching the galactic sky for exoplanets, or planets outside our own Solar System, since it launched in 2009.

In order to sift through all the data that it’s captured since that launch, scientists usually look at the strongest signals first.

And that process has worked well enough so far. NASA has confirmed 2,525 exoplanets in that time, a number that has changed our understanding of how common it is to find planets around the stars that make up our galaxy.

Recently, though, artificial intelligence has become a more prominent tool in astronomy.

Scientists, including ones who work on the Kepler data, have increasingly turned to machine learning to help sort through typically lower-priority data to see what they might have missed.

In the process, they found an overlooked planet that’s now named Kepler-90i.

But while we now know that Kepler-90 has the same number of orbiting planets as our Sun, the solar system is a poor candidate in the search for extraterrestrial life, or at least life as we know it.

Kepler-90 is about 20 percent bigger and 5 percent warmer than our Sun. And its eight planets dance around the star in much closer orbits than the ones in our own Solar System.

In fact, their orbits are so comparatively small that seven of Kepler-90’s eight planets would fit in between the Earth and the Sun.

The discovery of Kepler-90i came after NASA let Google train its machine learning algorithms on 15,000 signals from potential planets in the Kepler database.

The scientists then took the trained system and set it to work on data from 670 stars that were already known to have multiple planets, as they considered those to be the most likely hiding places.
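
The article gives only those broad strokes, so here is a toy Python stand-in for the underlying task: a light curve is a star’s brightness over time, a transit shows up as small periodic dips, and a detector scores how strongly the dips stand out from the noise. A hand-written score replaces the trained neural network purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_light_curve(has_planet, n=200):
    """Toy light curve: flat stellar brightness plus noise, with small
    periodic dips if a planet transits in front of the star."""
    flux = 1.0 + rng.normal(scale=0.001, size=n)
    if has_planet:
        flux[::50] -= 0.01  # weak transit dips, easy to overlook
    return flux

def transit_score(flux):
    # Depth of the strongest dips relative to the noise floor: a crude
    # hand-written stand-in for the trained detector's confidence.
    dips = np.median(flux) - np.sort(flux)[:5]
    return float(np.mean(dips) / np.std(flux))

print(f"candidate: {transit_score(make_light_curve(True)):.1f}, "
      f"control: {transit_score(make_light_curve(False)):.1f}")
```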

The newly discovered planet in Kepler-90 and one other found in the Kepler-80 solar system, both announced today, are the first NASA was able to confirm from these new results from Google’s AI.

The inclusion of machine learning in this process shouldn’t scare humans whose livelihood revolves around discovering and studying exoplanets, according to Chris Shallue, a senior Google AI software engineer who worked on the project.

“What we’ve developed here is a tool to help astronomers have more impact,” Shallue said on a conference call about the news.

“It’s a way to increase the productivity of astronomers. It certainly won’t replace them at all.”

Pass it on: Popular Science