The future of driving doesn’t involve driving — at all.
That’s the big takeaway from a first peek inside General Motors’ new autonomous car, which lacks the steering wheel, pedals, manual controls and human drivers that have come to define the experience of riding inside an automobile for more than a century.
That means the Cruise AV — a fourth-generation autonomous vehicle based on the Chevy Bolt EV — is in total control.
GM submitted a petition Thursday to the Department of Transportation, asking for the government to let it roll out the new vehicle, which it says is safe.
GM plans to mass produce the vehicle as early as next year, the automotive giant announced Friday.
The manufacturer is touting the vehicle as the world’s “first production-ready vehicle” built with the sole purpose of operating “safely on its own with no driver,” a degree of independence known as “level 4 autonomy.”
GM is one of several companies testing level 4 vehicles. A California-based autonomous vehicle startup called Zoox and Alphabet’s Waymo have also tested level 4 cars.
GM is already testing second and third generation self-driving Cruise AVs on busy streets in San Francisco and Phoenix with a human engineer in the vehicle.
It relies on cameras, radar and high-precision laser sensors known as lidar for navigation.
Beginning in 2019, the fourth-generation of that vehicle will be used in a ride-sharing program in multiple American cities, where “the vehicles will travel on a fixed route controlled by their mapping system,” Bloomberg reported.
To improve safety, the vehicles will share information with one another and rely on two computer systems, which operate simultaneously so that if one computer encounters a problem, the second computer can serve as a backup, according to GM’s self-driving safety report.
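GM’s safety report doesn’t spell out the failover mechanism, but the general pattern it describes (two systems computing in parallel, with the backup taking over on a fault) can be sketched roughly as follows; the planner functions here are invented stand-ins, not GM’s software:

```python
def run_redundant(primary, backup, sensor_frame):
    """Run two independent planners on the same sensor data.

    If the primary fails or produces nothing, the backup's
    result is used instead, mirroring a hot-standby setup.
    """
    try:
        plan = primary(sensor_frame)
        if plan is not None:
            return plan
    except Exception:
        pass  # primary faulted; fall through to the backup
    return backup(sensor_frame)

# Invented stand-ins: the primary simulates a fault, and the
# backup brings the vehicle to a safe stop.
def faulty_primary(frame):
    raise RuntimeError("simulated hardware fault")

def backup_planner(frame):
    return {"action": "safe_stop"}

print(run_redundant(faulty_primary, backup_planner, {}))
```

In a real vehicle both computers would run continuously and cross-check each other; the sketch only shows the promotion of the backup’s result when the primary fails.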
The report says the Cruise AV was designed to operate in chaotic, fluid conditions, such as aggressive drivers, jaywalkers, bicyclists, delivery trucks and construction.
The company has access to vast dealership networks, nationwide influence and manufacturing prowess, potentially offering a GM-driven ride-hailing service the opportunity to supplant the Silicon Valley start-ups that have been seeking for years to disrupt the auto industry.
Google has previously discovered lost tribes, missing ships and even a forgotten forest. But now it has also found two entire planets.
The technology giant used one of its algorithms to sift through thousands of signals sent back to Earth by Nasa’s Kepler space telescope.
One of the new planets was found hiding in the Kepler-90 star system, which is around 2,200 light years away from Earth.
The discovery is important because it takes the number of planets in the star system up to eight, the same as our own Solar System. It is the first time that any system has been found to have as many planets as ours.
Andrew Vanderburg, astronomer and Nasa Sagan Postdoctoral Fellow at The University of Texas, Austin, said: “The Kepler-90 star system is like a mini version of our solar system.
You have small planets inside and big planets outside, but everything is scrunched in much closer.
“There is a lot of unexplored real estate in the Kepler-90 system and it would almost be surprising if there were not more planets in the system.”
Kepler-90i is a small, rocky planet that orbits so close to its star that its surface temperature is a ‘scorchingly hot’ 800F (426C). It circles its sun once every 14 days.
The Google team applied a neural network to scan weak signals discovered by the Kepler exoplanet-hunting telescope which had been missed by humans.
Kepler has already discovered more than 2,500 exoplanets, with 1,000 more suspected.
The telescope spent four years scanning 150,000 stars looking for dips in their brightness which might suggest an orbiting planet was passing in front.
Although the observation mission ended in 2013, the spacecraft recorded so much data during its four-year mission that scientists expect to be crunching it for many years to come.
Christopher Shallue, senior software engineer at Google AI in Mountain View, California, who made the discovery, said the algorithm was so simple that it only took two hours to train to spot exoplanets.
Tests of the neural network correctly identified true planets and false positives 96 percent of the time. The team has promised to release all of the code so that amateurs can train computers to hunt for exoplanets of their own.
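The task the network solves is, at heart, binary classification of light-curve windows: transit or no transit. Shallue’s real model is a convolutional neural network trained on Kepler’s vetted signals; the toy sketch below uses synthetic light curves and plain logistic regression just to make the framing concrete (every number in it is invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def light_curve(has_transit, n=64):
    """Synthetic brightness series; a transit is a small periodic dip."""
    flux = 1.0 + 0.001 * rng.standard_normal(n)
    if has_transit:
        flux[::16] -= 0.01  # periodic dimming as the planet crosses
    return flux

# A labeled training set, as Kepler's previously vetted signals provide.
X = np.array([light_curve(i % 2 == 1) for i in range(200)])
y = np.array([i % 2 for i in range(200)], dtype=float)
X = X - X.mean(axis=0)  # center each time step across the set

# Tiny logistic-regression classifier trained by gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(1000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 1.0 * X.T @ grad / len(y)
    b -= 1.0 * grad.mean()

pred = (X @ w + b > 0).astype(float)
print("training accuracy:", (pred == y).mean())
```

The real problem is far noisier, which is why the Google team needed a deep network rather than a linear model, but the input (brightness over time) and output (planet or not) are the same.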
“Machine learning will become increasingly important for keeping pace with all this data and will help us make more discoveries than ever before,” said Mr Shallue.
“This is a really exciting discovery and a successful proof of concept in using neural networks to find planets, even in challenging situations where signals are very weak.
“We plan to search all 150,000 stars. We hope that, using our technique, we will be able to find lots of planets, including planets like Earth.”
A new project from the Georgia Institute of Technology allows people to get jiggy with a computer-controlled dancer, which “watches” the person and improvises its own moves based on prior experiences.
When the human responds, the computerized figure or “virtual character” reacts again, creating an impromptu dance couple based on artificial intelligence (AI).
The LuminAI project is housed inside a 15-foot-tall geodesic dome, designed and constructed by Georgia Tech digital media master’s student Jessica Anderson, and lined with custom-made projection panels for dome projection mapping.
The surfaces allow people to watch their own shadowy avatar as it struts with a virtual character named VAI, which learns how to dance by paying attention to which moves the current user is doing and when.
The more moves it sees, the better and deeper the computer’s dance vocabulary. It then uses this vocabulary as a basis for future improvisation.
The system uses Kinect devices to capture the person’s movement, which is then projected as a digitally enhanced silhouette on the dome’s screens.
The computer analyzes the dance moves being performed and leans on its memory to choose its next move.
The team says this improvisation is one of the most important parts of the project. The avatar recognizes patterns, but doesn’t always react the same way every time.
That means that the person must improvise too, which leads to greater creativity all around. All the while, the computer is capturing these new experiences and storing the information to use as a basis for future dance sessions.
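The loop the team describes (observe a move, add it to a growing vocabulary, then respond with a remembered move rather than a scripted one) can be caricatured in a few lines; the data structures here are invented, not LuminAI’s:

```python
import random

class Improviser:
    """Toy dance partner: remembers moves it has seen and responds
    with a related move rather than a fixed, scripted one."""

    def __init__(self, seed=None):
        self.vocabulary = []  # moves observed so far
        self.rng = random.Random(seed)

    def observe(self, move):
        if move not in self.vocabulary:
            self.vocabulary.append(move)

    def respond(self, current_move):
        self.observe(current_move)
        # Improvise: prefer any remembered move other than the one
        # just seen, so responses vary from session to session.
        options = [m for m in self.vocabulary if m != current_move]
        return self.rng.choice(options) if options else current_move

partner = Improviser(seed=42)
for move in ["wave", "spin", "dip", "spin"]:
    print(move, "->", partner.respond(move))
```

The actual system reasons over full-body Kinect skeletons rather than move names, but the principle is the same: the richer the observed vocabulary, the less predictable the response.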
LuminAI was unveiled for the first time this past weekend in Atlanta at the Hambidge Art Auction in partnership with the Goat Farm Arts Center.
It was featured within a dance and technology performance, in a work called Post, as a finalist for the Field Experiment ATL grant. T. Lang Dance performed set choreography with avatars and virtual characters within the dome.
Post is the fourth and final installment of Lang’s Post Up series, which focuses on the stark realities and situational complexities after an emotional reunion between long lost souls.
Some of today’s top techies and scientists are very publicly expressing their concerns over apocalyptic scenarios that are likely to arise as a result of machines with motives.
Among the fearful are intellectual heavyweights like Stephen Hawking, Elon Musk, and Bill Gates, who all believe that advances in the field of machine learning will soon yield self-aware A.I.s that seek to destroy us.
Or perhaps just apathetically dispose of us, much like scum getting obliterated by a windshield wiper.
In fact, Dr. Hawking told the BBC, “The development of full artificial intelligence could spell the end of the human race.”
Indeed, there is little doubt that future A.I. will be capable of doing significant damage.
For example, it is conceivable that robots could be programmed to function as tremendously dangerous autonomous weapons unlike any seen before.
Additionally, it is easy to imagine an unconstrained software application that spreads throughout the Internet, severely mucking up our most efficient and relied upon medium for global exchange.
But these scenarios are categorically different from ones in which machines decide to turn on us, defeat us, make us their slaves, or exterminate us.
In this regard, we are unquestionably safe. On a sadder note, we are just as unlikely to someday have robots that decide to befriend us or show us love without being specifically prompted by instructions to do so.
This is because such intentional behavior from an A.I. would undoubtedly require a mind, as intentionality can only arise when something possesses its own beliefs, desires, and motivations.
The type of A.I. that includes these features is known amongst the scientific community as “Strong Artificial Intelligence”. Strong A.I., by definition, should possess the full range of human cognitive abilities.
This includes self-awareness, sentience, and consciousness, as these are all features of human cognition.
On the other hand, “Weak Artificial Intelligence” refers to non-sentient A.I. The Weak A.I. Hypothesis states that our robots—which run on digital computer programs—can have no conscious states, no mind, no subjective awareness, and no agency.
Such A.I. cannot experience the world qualitatively, and although they may exhibit seemingly intelligent behavior, it is forever limited by the lack of a mind.
A failure to recognize the importance of this strong/weak distinction could be contributing to Hawking and Musk’s existential worries, both of whom believe that we are already well on a path toward developing Strong A.I.
To them it is not a matter of “if”, but “when”.
But the fact of the matter is that all current A.I. is fundamentally Weak A.I., and this is reflected by today’s computers’ total absence of any intentional behavior whatsoever.
Although there are some very complex and relatively convincing robots out there that appear to be alive, upon closer examination they all reveal themselves to be as motiveless as the common pocket calculator.
This is because brains and computers work very differently. Both compute, but only one understands—and there are some very compelling reasons to believe that this is not going to change.
Beyond these philosophical considerations, there appears to be a more technical obstacle standing in the way of Strong A.I. ever becoming a reality.
The next job you apply for could involve a challenge even before you submit your resume. Two companies are gamifying the recruiting process to change the way they search for talented candidates.
It’s not surprising, given the major shift in the way people look for jobs over the last decade.
Research from the Boston Consulting Group and Recruit Works Institute reveals that 55% of searches globally happened through Internet job sites and 35% via a smartphone.
Applying through social media and submitting a video interview are rapidly becoming more accepted.
But on the recruiters’ side, things aren’t changing as quickly, even though 95% of companies admit to making bad hires each year.
Communication channels are broken or unused as employers invest resources in less efficient ways to attract talent.
According to Talent Board, as many as 88% of employers are allowing more candidates to complete their applications even after they fail screening questions.
And those who rely on software to automate the recruitment process could unknowingly be discriminating against qualified, diverse candidates.
Changing recruiting wasn’t the original intent for CodeFights. The platform was designed to offer users a way to learn and improve their coding skills by proposing, solving, and discussing challenges with other programmers.
The task of classifying pieces of fine art is hugely complex. When examining a painting, an art expert can usually determine its style, its genre, the artist and the period to which it belongs.
Art historians often go further by looking for the influences and connections between artists, a task that is even trickier.
So the possibility that a computer might be able to classify paintings and find connections between them at first glance seems laughable.
And yet, that is exactly what Babak Saleh and pals have done at Rutgers University in New Jersey.
These guys have used some of the latest image processing and classifying techniques to automate the process of discovering how great artists have influenced each other.
They have even been able to uncover influences between artists that art historians have never recognized until now.
The way art experts approach this problem is by comparing artworks according to a number of high-level concepts such as the artist’s use of space, texture, form, shape, color and so on.
Experts may also consider the way the artist uses movement in the picture, harmony, variety, balance, contrast, proportion and pattern.
Other important elements can include the subject matter, brushstrokes, meaning, historical context and so on. Clearly, this is a complex business.
So it is easy to imagine that the limited ability computers have for analyzing two-dimensional images would make this process more or less impossible to automate. But Saleh and co show how it can be done.
At the heart of their method is a new technique, developed at Dartmouth College in New Hampshire and Microsoft Research in Cambridge, UK, for classifying pictures according to the visual concepts that they contain.
These concepts are called classemes and include everything from simple object descriptions such as duck, frisbee, man and wheelbarrow, through shades of color, to higher-level descriptions such as dead body, body of water, walking and so on.
Comparing images is then a process of comparing the words that describe them, for which there are a number of well-established techniques.
For each painting, they limit the number of concepts and points of interest generated by their method to 3000 in the interests of efficient computation.
This process generates a list of describing words that can be thought of as a kind of vector. The task is then to look for similar vectors using natural language techniques and a machine learning algorithm.
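Once each painting is reduced to such a vector, comparison becomes ordinary vector similarity. A minimal sketch with made-up classeme activations (the real vectors run to thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Standard cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Invented classeme activations: [man, violin, body_of_water, walking]
braque_man_with_violin = [0.9, 0.8, 0.0, 0.1]
picasso_still_life     = [0.7, 0.6, 0.1, 0.0]
monet_water_lilies     = [0.0, 0.0, 0.9, 0.1]

print(cosine_similarity(braque_man_with_violin, picasso_still_life))
print(cosine_similarity(braque_man_with_violin, monet_water_lilies))
```

Paintings that share classemes score near 1, unrelated ones near 0, which is what lets the algorithm rank candidate influences.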
Determining influence is harder though since influence is itself a difficult concept to define. Should one artist be deemed to influence another if one painting has a strong similarity to another?
Or should there be a number of similar paintings and if so how many?
So Saleh and co experiment with a number of different metrics. They end up creating two-dimensional graphs with metrics of different kinds on each axis and then plotting the position of all of the artists in this space to see how they are clustered.
The results are interesting. In many cases, their algorithm clearly identifies influences that art experts have already found.
For example, the graphs show that the Austrian painter Klimt is close to Picasso and Braque and indeed experts are well acquainted with the idea that Klimt was influenced by both these artists.
The algorithm also identifies the influence of the French romantic Delacroix on the French impressionist Bazille, the Norwegian painter Munch’s influence on the German painter Beckmann and Degas’ influence on Caillebotte.
The algorithm is also able to identify individual paintings that have influenced others.
It picked out Georges Braque’s Man with a Violin and Pablo Picasso’s Spanish Still Life: Sun and Shadow, both painted in 1912 and well known as pictures that helped found the Cubist movement.
And yet a visual inspection shows a clear link. The yellow circles in the images below show similar objects, the red lines show composition and the blue square shows a similar structural element, say Saleh and co.
That is interesting stuff. Of course, Saleh and co do not claim that this kind of algorithm can take the place of an art historian.
After all, the discovery of a link between paintings in this way is just the starting point for further research about an artist’s life and work.
But it is a fascinating insight into the way that machine learning techniques can throw new light on a subject as grand and well studied as the history of art.
Google’s parent company Alphabet has launched a business that will specialize in leveraging machine learning in cyber security, called Chronicle.
Chief executive Stephen Gillett says that the company will be split in two. On the one hand it will provide a cyber security and analytics platform that will target enterprise customers and help them “better manage and understand their own security-related data”.
The other side of the business will be VirusTotal, which is a malware intelligence service Google picked up in 2012. This will continue to operate as it has been doing.
For some years now a slew of security vendors have touted machine learning as their key differentiator against rivals on the market. There is an aspect of snake oil to some of it – see our market analysis here.
But there are also companies like the darlings of the infosec market at the moment, Darktrace, that are using genuine machine learning for threat detection.
It’s no secret that Alphabet and Google are at the frontier of machine learning and artificial intelligence.
Writing in Medium, Chronicle CEO Stephen Gillett says that while the company will be an independent business under the Google umbrella, it will “have the benefit of being able to consult the world-class experts in machine learning and cloud computing” at Alphabet.
Where did Chronicle come from?
Chronicle emerged in 2016 from the labs of Alphabet’s mysteriously named X – Google’s incubation hub for ‘moonshot’ projects – and incorporates VirusTotal, which Google bought in 2012.
CEO Stephen Gillett began working at Google in 2015 and has a history of work at cyber security companies.
Other people in leadership roles at Chronicle include Mike Wiacek and Shapor Naghibzadeh, who together have more than 20 years of security experience at Google.
Bernardo Quintero of VirusTotal will continue to work with Chronicle.
When you phone 911, you’re patched through to a trained human who is able to properly triage your phone call.
Soon, however, you could also find yourself being listened to by a machine that tunes in to very different verbal information than the human emergency dispatcher does.
Developed by Danish startup Corti, this emergency call-listening artificial intelligence is designed to listen to the caller for signs that they may be about to go into cardiac arrest.
When it makes such a diagnosis, it then alerts the human dispatcher so that they can take the proper steps.
“Corti is meant to be a digital co-pilot for medical personnel,” Andreas Cleve, CEO of Corti, said.
“Like a human doctor, Corti analyzes everything a patient says and shares in real time — from journal data, symptom descriptions, voice data, acoustic data, language data, their dialect, questions, and even their breathing patterns.
“Corti then outputs diagnostic advice to the medical personnel, to help them diagnose patients faster. This can be especially powerful in an emergency use case where mistakes can be fatal.”
As the company’s Chief Technology Officer Lars Maaloe told us, the technology framework uses deep learning neural networks trained on years of historical emergency calls.
While the work hasn’t yet been peer-reviewed, the team is working on that; a paper describing the system is likely to be published later in 2018.
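Corti has not published implementation details, so the following is only a schematic of the pattern described here: stream call features through a trained model and alert the dispatcher when the estimated cardiac-arrest probability crosses a threshold. The model, features and threshold are all invented:

```python
def monitor_call(feature_stream, model, threshold=0.8):
    """Yield an alert as soon as the model's estimated probability
    of cardiac arrest crosses the threshold."""
    for features in feature_stream:
        p = model(features)
        if p >= threshold:
            yield {"alert": "possible cardiac arrest", "confidence": p}

# Invented stand-in model: flags agonal (gasping) breathing,
# a known sign of cardiac arrest on emergency calls.
def toy_model(features):
    return 0.95 if features.get("breathing") == "agonal" else 0.1

stream = [{"breathing": "normal"}, {"breathing": "agonal"}]
for alert in monitor_call(stream, toy_model):
    print(alert)
```

The real system replaces the stand-in with a deep network over raw audio, and the alert surfaces inside the dispatcher’s existing software rather than on a console.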
“Today the technology is being used in Copenhagen EMS, who have spearheaded the application of machine learning in the prehospital space worldwide,” Cleve said.
“At Copenhagen EMS, our technology is able to give emergency call takers diagnostic advice in natural language, and it’s integrated directly into the software they are already using.
“Our goal is to make it easier for medical personnel to do their jobs, not complicate it further with fancier technology.
“We are extremely skeptical of the idea of rushing to replace trained medical personnel with A.I., since from both an ethical and a professional perspective we prefer human contact when it comes to our health.
“Personally, I simply can’t see myself preferring a bot over a medically trained human agent. But the setup where humans are amplified by A.I.? That to us is a far more powerful scenario in healthcare.”
Google’s DeepMind artificial intelligence has produced what could be some of the most realistic-sounding machine speech yet.
WaveNet, as the system is called, generates voices by sampling real human speech and directly modeling audio waveforms based on it, as well as its previously generated audio.
In Google’s tests, both English and Mandarin Chinese listeners found WaveNet more realistic than other types of text-to-speech programs, although it was less convincing than actual human speech.
If that weren’t enough, it can also play the piano rather well.
Text-to-speech programs are increasingly important for computing, as people begin to rely on bots and AI personal assistants like Apple’s Siri, Microsoft’s Cortana, Amazon’s Alexa, and the Google Assistant.
If you ask Siri or Cortana a question, though, they’ll reply with actual recordings of a human voice, rearranged and combined in small pieces.
This is called concatenative text to speech, and as one expert puts it, it’s a little like a ransom note.
The results are often fairly realistic, but as Google writes, producing a new audio persona or tone of voice requires having an actor record every possible sound in a database. Here’s one phrase, created by Google.
The alternative is parametric text to speech — building a completely computer-generated voice, using coded rules based on grammar or mouth sounds.
Parametric voices don’t need source material to produce words. But the results, at least in English, are often stilted and robotic. You can hear that here.
Google’s system is still based on real voice input. But instead of chopping up recordings, it learns from them, then independently creates its own sounds in a variety of voices. The results are something like this.
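What distinguishes WaveNet is that each audio sample is generated conditioned on all the samples it has already produced. Stripped of the dilated convolutional network, the generation loop looks schematically like this, with a trivial stand-in for the model:

```python
import math

def predict_next_sample(history):
    """Placeholder for WaveNet's network: a decaying sine wave
    nudged by the previous sample, just to make the loop concrete."""
    t = len(history)
    prev = history[-1] if history else 0.0
    return 0.5 * math.sin(t / 8.0) + 0.3 * prev

def generate(n_samples):
    """Autoregressive generation: every new sample is appended to
    the history and fed back in for the next prediction."""
    audio = []
    for _ in range(n_samples):
        audio.append(predict_next_sample(audio))
    return audio

waveform = generate(16000)  # one second of audio at 16 kHz
print(len(waveform), waveform[:3])
```

Generating one sample at a time in this feedback loop is also why WaveNet was initially far too slow for production use: at 16,000 samples per second, the network runs 16,000 times for each second of speech.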
Granted, there’s already plenty of generative music, and it’s not nearly as complicated as making speech that humans will recognize as their own.
On a scale from 1 (not realistic) to 5 (very realistic), listeners in around 500 blind tests rated WaveNet at 4.21 in English and 4.08 in Mandarin.
While even human speech didn’t get a perfect 5, it was still higher, at 4.55 in English and 4.21 in Mandarin. On the other hand, WaveNet outperformed other methods by a wide margin.
Searching the stars for unique phenomena is not an easy process.
The problem is that space is simply too big, too diverse, and too wonderful.
Locating a specific kind of anomaly among the many wondrous sights scattered throughout the cosmos is near impossible for humans, with their easily distracted brains.
With so many stars to check, the process of scanning the galaxy to find planets like our own can take a lot of time and effort.
Thankfully, artificial intelligence can help us in the process of spotting distant stars and their neighboring planets.
NASA has announced that, thanks to an AI program that was given the task of spotting cool stuff in space, the agency has been able to find a solar system that looks uncannily like our own; albeit in miniature form.
The Kepler-90 system sits a distant 2,545 light years from Earth, but has drawn attention from the astronomical community after an AI noted that its series of eight planets matches up well with our own.
The primary difference is that its planets orbit much closer to their star than those in our solar system, with the newly discovered Kepler-90i completing a full orbit in just fourteen Earth days.
In order to locate Kepler-90’s planets, NASA’s AI had to scan through a daunting thirty-five thousand potential signals from distant stars, gathered over a period of four years.
This is where machine learning was able to come into play to help make the process easier—the AI was fed data from around fifteen thousand signals that NASA had previously investigated.
So the AI had a pretty good idea of what it was looking for based on the kinds of readings that NASA had flagged as noteworthy among the program’s database of reference materials.
From there, it was a simple matter of letting the AI run checks for all potential star systems against its database until the program found something that matched what it was looking for, which happened to be a bunch of newly discovered planets orbiting Kepler-90.
Kepler-90 isn’t actually the most exciting solar system in the galaxy—it’s unlikely that its super hot worlds will bear life, or even any noteworthy new discoveries.
What is special is the fact that an AI managed to identify Kepler-90 as fitting the right parameters for investigation.
This shows that there really are benefits to employing machine learning as a technique for searching the cosmos for interesting research subjects without the need for a human to slog through thousands of signals in order to find a few interesting stars that warrant a closer look.
Essentially, NASA is building a self-teaching search engine that can trawl through all of our records of the stars to find things that look interesting, based only on a vague description of what scientists are looking for.
The future of space exploration is going to be a whole lot easier if we can trust an artificial intelligence to do all the boring stuff for us.