
The World’s Fastest Supercomputer Is Back In America

Last week, the US Department of Energy and IBM unveiled Summit, a new supercomputer expected to bring the title of the world’s most powerful computer back to America from China, which currently holds it with the Sunway TaihuLight.

With a peak performance of 200 petaflops, or 200,000 trillion calculations per second, Summit more than doubles the top speeds of TaihuLight, which can reach 93 petaflops.

Summit is also capable of more than 3 billion billion mixed-precision calculations per second, or 3.3 exaops, and has more than 10 petabytes of memory, which has already allowed researchers to run the world’s first exascale scientific calculation.

The $200 million supercomputer is an IBM AC922 system comprising 4,608 compute servers, each containing two 22-core IBM Power9 processors and six Nvidia Tesla V100 graphics processing unit accelerators.
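Those counts are easy to sanity-check. The sketch below is a rough back-of-the-envelope calculation, not anything official: it multiplies out the node counts and uses an approximate per-GPU figure to show how the V100s account for most of that 200-petaflop peak.

```python
# Back-of-the-envelope check of Summit's published specs (a rough sketch;
# the per-GPU figure below is an approximation, not an official number).

nodes = 4608                      # IBM AC922 compute servers
cpus_per_node = 2                 # 22-core IBM Power9 processors each
gpus_per_node = 6                 # Nvidia Tesla V100 accelerators each

total_cpus = nodes * cpus_per_node          # 9,216 Power9 CPUs
total_gpus = nodes * gpus_per_node          # 27,648 V100 GPUs

# A V100 delivers roughly 7 teraflops of double-precision compute,
# so the GPUs alone account for most of the ~200-petaflop peak.
approx_peak_pflops = total_gpus * 7 / 1000  # roughly 194 petaflops

print(total_cpus, total_gpus, round(approx_peak_pflops))
```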




Summit is also (relatively) energy-efficient, drawing just 13 megawatts of power, compared to the 15 megawatts TaihuLight pulls in.

Top500, the organization that ranks supercomputers around the world, is expected to place Summit atop its list when it releases its new rankings later this month.

Once it does — with these specs — Summit should remain the king of supercomputers for the immediate future.

Oak Ridge National Laboratory, which traces its origins to the Manhattan Project, is also home to Titan, another supercomputer that was once the fastest in the world and now ranks fifth.

Taking up 5,600 square feet of floor space and weighing in at over 340 tons, more than a commercial aircraft, Summit is a truly massive system that would easily fill two tennis courts.

Summit will allow researchers to apply machine learning to areas like high-energy physics and human health, according to ORNL.

“Summit’s AI-optimized hardware also gives researchers an incredible platform for analyzing massive datasets and creating intelligent software to accelerate the pace of discovery,” said Jeff Nichols, ORNL associate laboratory director for computing and computational sciences.

The system is connected by 185 miles of fiber-optic cables and can store 250 petabytes of data, which is equal to 74 years of HD video.

To keep Summit from overheating, more than 4,000 gallons of water are pumped through the system every minute, carrying away nearly 13 megawatts of heat from the system.

While Summit may be the fastest supercomputer in the world for now, it is expected to be surpassed by Frontier, a new supercomputer slated for delivery to ORNL in 2021 with an expected peak performance of 1 exaflop, or 1,000 petaflops.

Facebook Researchers Used AI To Create A Universal Music Translator

Is Facebook pumping up the volume on what AI can mean to the future of music? You can decide after having a look at what Facebook AI Research scientists have been up to.

A number of sites, including The Next Web, have reported that the team unveiled a neural network capable of translating music from one style, genre, and set of instruments to another.

You can check out their paper, “A Universal Music Translation Network,” by Noam Mor, Lior Wolf, Adam Polyak, and Yaniv Taigman of Facebook AI Research.

A video of the authors’ supplementary audio samples lets you hear what they did with material ranging from a symphony and a string quartet to sounds of Africa, Elvis and Rihanna samples, and even human whistling.

In one example, they converted the audio of a Mozart symphony performed by an orchestra into audio in the style of a pianist playing Beethoven.




Basically, a neural network has been put to work to change the style of music. Listening to the samples, one wonders what the AI process looks like as it carries the music from one work to another.

Does it involve matched pitch? Memorizing musical notes? Greene said no; their approach is an “unsupervised learning method” using “high-level semantics interpretation.”

Greene added that you could say “it plays by ear.” The method is unsupervised, in that it does not rely on supervision in the form of matched samples between domains or musical transcriptions, the team said.

Greene also offered a translation, explaining that this is “a complex method of auto-encoding that allows the network to process audio from inputs it’s never been trained on.”
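To make that idea concrete, here is a toy sketch of the general shared-encoder, per-domain-decoder arrangement the paper describes. It is emphatically not Facebook’s model, which works on raw audio with WaveNet-style autoencoders; the layer sizes, domain labels and stand-in feature vectors below are invented for illustration.

```python
# A toy sketch of the "shared encoder, per-domain decoder" idea behind
# unsupervised music translation. NOT Facebook's actual model; it only
# shows how one encoder and several domain-specific decoders fit together.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, in_dim=128, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, latent_dim))

    def forward(self, x):
        return self.net(x)  # domain-agnostic latent code

class DomainDecoder(nn.Module):
    def __init__(self, latent_dim=64, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, out_dim))

    def forward(self, z):
        return self.net(z)  # reconstruct audio features in one domain

domains = ["piano", "orchestra", "whistling"]   # illustrative labels
encoder = SharedEncoder()
decoders = {d: DomainDecoder() for d in domains}

# Translation: encode audio (here, a stand-in feature vector) from any
# source domain, then decode with the *target* domain's decoder.
source_audio = torch.randn(1, 128)
latent = encoder(source_audio)
piano_version = decoders["piano"](latent)
print(piano_version.shape)  # torch.Size([1, 128])
```

The real system trains these pieces on raw audio with additional losses that keep the latent code domain-agnostic; the sketch leaves all of that out.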

In the bigger picture, the attempt to translate styles and instruments can be read as another sign that AI and music are converging in ways that could change the dismissive view of “machine” music as inferior and canned.


At Google I/O 2018, Expect All AI All The Time

 

For Google, its annual I/O developer conference isn’t just a place to show off the next major version of Android and get coders excited about building apps.

Though that stuff is a big part of the show, I/O is also a chance for Google to flex its AI muscle and emphasize its massive reach at a time when every major tech company is racing to best each other in artificial intelligence.

And with its emphasis on cloud-based software and apps, I/O is the most important event of the year for Google, at least as long as its hardware efforts remain such a small fraction of its overall business.




Android P Is For … Probably?

Just like every year, Android will be front and center at the 2018 edition of I/O. It’s almost a guarantee that we’ll see a new version of Android P, which was first released as a developer preview in March.

So far, we know that a lot of the changes from Android O to P have been visual in nature; notifications have been redesigned, and the quick settings menu has gotten a refresh.

There’s also been a lot of chatter around “Material Design 2,” the next iteration of Google’s unifying design language.

Material Design was first unveiled at I/O four years ago, so it’s quite possible we’ll see the next version’s formal debut.

Newly redesigned Chrome tabs have already been spotted as part of a possible Material Design refresh, along with references to a “touch optimized” Chrome.

Talkin’ About AI

But artificial intelligence, more than Android and Chrome OS, is likely to be the thread that weaves every platform announcement at I/O together.

Whether that’s in consumer-facing apps like Google Assistant and Google Photos, cloud-based machine learning engines like TensorFlow, or even keynote mentions of AI’s impact on jobs.

Speaking of Google Assistant, late last week Google shared some notable updates around the voice-powered digital helper, which now runs on more than 5,000 devices and even allows you to purchase Fandango tickets with your voice.

That’s all well and good, but one of the most critical aspects of any virtual assistant (in addition to compatibility) is how easy it is to use.

It wouldn’t be entirely surprising to see Google taking steps to make Assistant that much more accessible, whether that’s through software changes, like “slices” of Assistant content that show up outside of the app, or hardware changes that involve working with OEM partners to offer more quick-launch solutions.

Google’s day-one keynote kicks off today, Tuesday May 8, at 10 am Pacific time.


How To Watch Mark Zuckerberg’s Keynote At Facebook’s F8 Developer Conference

Facebook’s annual F8 developer conference kicks off this morning, roughly a month and a half after the Cambridge Analytica scandal completely redefined the conversation around data privacy and social networking platforms.

That means F8’s keynote address, which in years past has focused on the frontiers of new technology like virtual and augmented reality and artificial intelligence, will also have to reckon with the hard conversations on responsibility and accountability that have made up Facebook’s biggest existential crisis to date.

The whole controversy may have even postponed the company’s plans to reveal its rumored smart speaker, known internally as Portal, at F8 amid fears of Facebook’s overreach and concerns over having the company listening inside consumers’ homes.




Of course, there will be news completely unrelated to Cambridge Analytica. Facebook is expected to talk more about its plans for VR hardware over at Oculus.

We’ll hear more about the company’s push into AR to take on Google and Snapchat since first debuting its intelligent camera platform at last year’s F8.

We’ll also hear more about the company’s secretive Building 8 division, which this time a year ago announced it was working on brain-computer interfaces.

Former DARPA director Regina Dugan has since left her post as head of Building 8, so we’re eager to hear how those more outlandish projects are coming along in her absence.

There’s a keynote on day two that takes place at 1PM ET / 10 AM PT on Wednesday, May 2nd, and that will likely be when we’ll hear more about Building 8 developments.

But the Cambridge Analytica situation has forced Facebook to make radical changes to its developer platform, which makes a developer conference like F8 an especially interesting time to hear how the company plans to move forward with its platform and entice app makers to build products on top of its core service.

Facebook has restricted or shut down numerous high-profile APIs and curtailed developers’ access to user data in a variety of ways, in hopes of preventing future data abuse situations.

So what has typically been a rather quiet, developer-focused affair has been transformed into more of a litmus test for Facebook’s handling of the data privacy scandal.

Naturally, everyone’s eyes will be on CEO Mark Zuckerberg and how he plans to address the elephant in the room when he takes the stage for today’s opening keynote.

If you’re interested in tuning in live and following along with The Verge’s coverage, see below for the best ways to do so.

How to follow along?

Starting time: San Francisco: 10AM / New York: 1PM / London: 6PM / Berlin: 7PM / Moscow: 8PM / New Delhi: 10:30PM / Beijing: 12:30AM (May 2nd) / Tokyo: 2AM (May 2nd) / Sydney: 3AM (May 2nd)

Live stream: Facebook will be live streaming the keynote over on its dedicated F8 website.


MIT’s New Wearable Device Can ‘Hear’ The Words You Say In Your Head

If you’ve read any sort of science fiction, it’s likely you’ve heard about sub-vocalization, the practice of silently saying words in your head.

It’s common when we read (though it does slow you down), but it’s only recently begun to be used as a way to interact with our computers and mobile devices.

To that end, MIT researchers have created a device you wear on your face that can measure neuromuscular signals that get triggered when you subvocalize.




While the white gadget currently looks like some weird medical device strapped to your face, it’s easy to see future versions getting smaller and less obvious, as well as more useful in our mobile lives (including Hey Siri and OK Google situations).

The MIT system has electrodes that pick up the signals when you verbalize internally, as well as bone-conduction headphones, which deliver sound via vibrations to the bones of your inner ear without obstructing your ear canal.

The signals are sent to a computer that uses neural networks to distinguish words. So far, the system has been used to do fun things like navigating a Roku, asking for the time and reporting your opponent’s moves in chess to get optimal counter moves via the computer, in utter silence.
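That last step, turning electrode readings into words, is at heart a classification problem. Here is a minimal sketch of the idea using synthetic data and a hypothetical 32-dimensional feature vector per word; MIT’s actual signal processing, features and network architecture are their own and not reproduced here.

```python
# A minimal sketch of the classification step: mapping feature vectors
# derived from facial electrodes to a small vocabulary of silent words.
# The data here is synthetic; MIT's real system uses its own features
# and network design.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
vocab = ["up", "down", "left", "right", "select"]

# Pretend each subvocalized word yields a 32-dimensional feature vector
# (e.g. per-electrode signal statistics) with a word-specific pattern.
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(200, 32))
               for i in range(len(vocab))])
y = np.repeat(vocab, 200)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X, y)

# A new reading from the electrodes would be classified like this:
new_reading = rng.normal(loc=2, scale=1.0, size=(1, 32))
print(clf.predict(new_reading))  # most likely "left"
```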

“The motivation for this was to build an IA device — an intelligence-augmentation device,” said MIT grad student and lead author Arnav Kapur in a statement.

“Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”


Huawei Says Three Cameras Are Better Than One With P20 Pro Smartphone

Huawei’s latest flagship smartphone is the P20 Pro, which has not one, not two, but three cameras on the back.

The new P20, and the larger, more feature-packed P20 Pro, launched at an event in Paris that indicated the Chinese company is looking to match rivals Apple and Samsung and elevate the third-largest smartphone manufacturer’s premium efforts.

The P20 has a 5.8in FHD+ LCD while the larger P20 Pro has a 6.1in FHD+ OLED screen, both with a notch at the top, similar to Apple’s iPhone X, containing a 24-megapixel selfie camera.

They both have a fingerprint scanner on the front but no headphone socket on the bottom.

The P20 and P20 Pro are also available in pink gold or a blue twilight gradient colour finish that resembles the pearlescent paint found on some cars – a first, Huawei says, for a glass-backed smartphone.




The P20 has an improved version of Huawei’s Leica dual camera system, which pairs a traditional 12-megapixel colour camera with a 20-megapixel monochrome one, as used on the recent Mate 10 Pro.

But the P20 Pro also has a third, 8-megapixel telephoto camera below the first two, producing up to a 5x hybrid zoom – which, Huawei says, enables the phone to “see brighter, further, faster and with richer colour”.

“When I first heard that Huawei’s new flagship device was going to have three rear-facing cameras I was sceptical,” said Ben Wood, chief of research at CCS Insight.

“But it feels like the company has added meaningful features rather than gimmicks, including the five-times telephoto zoom, excellent low light, long exposure performance and crisp black and white pictures the dedicated monochrome lens offers.”

Huawei has also improved its built-in AI system for the camera, which recognises objects and scenes, pre-selecting the best of 19 modes for the subject.

Huawei’s AI will also help people straighten photos and zoom in or out to assist with composing group shots.

The company is also pushing its new AI-powered stabilisation for both photos and videos, which Huawei says solves the problem of wobbly hands in long-exposure night shots.
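The scene-recognition feature described above boils down to a familiar pattern: a classifier labels the frame, and the label picks a shooting mode. The sketch below is purely illustrative; the scene labels, mode names and confidence scores are invented, and Huawei’s actual pipeline is not public in this form.

```python
# Illustrative only: map a scene classifier's most confident label to a
# shooting mode. The labels and modes below are made-up stand-ins, not
# Huawei's real 19 modes.

SCENE_TO_MODE = {
    "food": "Food",
    "night_sky": "Night",
    "person": "Portrait",
    "grass": "Greenery",
    "document": "Text",
}

def pick_mode(scene_scores):
    """Return the mode for the highest-scoring recognised scene."""
    best_scene = max(scene_scores, key=scene_scores.get)
    return SCENE_TO_MODE.get(best_scene, "Auto")

# Example: the classifier is most confident the frame shows food.
print(pick_mode({"food": 0.82, "person": 0.10, "grass": 0.08}))  # Food
```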


Uber Video Shows The Kind Of Crash Self-Driving Cars Are Made To Avoid

The police have released video showing the final moments before an Uber self-driving car struck and killed 49-year-old Elaine Herzberg, who was crossing the street, on Sunday night in Tempe, Arizona.

The video includes views of the human safety driver and her view of the road, and it shows Herzberg emerging from the dark just seconds before the car hits her.

And based on this evidence, it’s difficult to understand why Uber’s self-driving system—with its lidar laser sensor that sees in the dark—failed to avoid hitting Herzberg, who was slowly, steadily crossing the street, pushing a bicycle.

And if Herzberg had approached the car at a different angle, she might have confused the system’s algorithms that classify obstacles and instruct the vehicle to behave accordingly.

“The situations that are more difficult is moving at odd angles to the vehicle or moving back and forth, and the vehicle has to decide, ‘Are they going into my path or are they not moving into my path?’” says Shladover.
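At its simplest, that decision is a geometry problem: project the tracked object’s motion forward and ask whether it crosses the lane the car is about to occupy. The toy sketch below does exactly that with a straight-line extrapolation; real self-driving stacks fuse lidar, radar and camera tracks and use far richer motion models, so the numbers and the function here are illustrative assumptions only.

```python
# A simplified sketch of the "is this obstacle moving into my path?"
# decision Shladover describes. It extrapolates a pedestrian's position
# in a straight line and checks whether it enters the vehicle's lane
# corridor within a time horizon.

def enters_path(ped_pos, ped_vel, lane_min_y, lane_max_y,
                horizon_s=3.0, step_s=0.1):
    """Return True if the pedestrian's straight-line track crosses the
    lane corridor (lane_min_y..lane_max_y, in metres) within horizon_s."""
    x, y = ped_pos
    vx, vy = ped_vel
    t = 0.0
    while t <= horizon_s:
        if lane_min_y <= y <= lane_max_y:
            return True
        x, y = x + vx * step_s, y + vy * step_s
        t += step_s
    return False

# A pedestrian 4 m to the left of the lane, walking toward it at 1.4 m/s,
# is flagged; one walking parallel to the road is not.
print(enters_path((30.0, -4.0), (0.0, 1.4), lane_min_y=-1.5, lane_max_y=1.5))  # True
print(enters_path((30.0, -4.0), (1.0, 0.0), lane_min_y=-1.5, lane_max_y=1.5))  # False
```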




But Herzberg and her bicycle were at a 90-degree angle to the vehicle, fully visible—and clearly heading into the car’s way.

Shladover says an obstruction, like a parked car or a tree, might also have complicated matters for the car’s sensors, and the software charged with interpreting the sensors’ data. But maps of the area show Herzberg had already crossed a shoulder and lane of road before the car struck her in the right lane.

“This is one that should have been straightforward,” he says.

That means the problems could have stemmed from the sensors, the way the sensors were positioned, how the sensors’ data was created or stored, or how Uber’s software responded to that data—or a combination of all of the above.

The video also shows the safety driver, 44-year-old Rafaela Vasquez, looking down and away from the road in the moments leading up to the impact.

Uber’s drivers are charged with monitoring the technology and keeping alert, ready to take control of the vehicle at any moment. It is true that Herzberg and her bike appear suddenly from the shadows, and Vasquez may not have been able to stop the car in time to avoid hitting her.

But it’s worth asking whether Uber’s safety driver would have been ready to respond even if she had not been looking down at that moment.

This raises questions about Uber’s safety driver training. Today, potential Uber safety drivers take manual driving tests and written assessments. They then undergo three weeks of training, first on a closed course and then on public roads.

“The dynamics of the ‘operator’ are very different from that of a normal manually-driven vehicle,” says Raj Rajkumar, who researches autonomous driving at Carnegie Mellon University. “Besides identifying and fixing the technical issues, Uber must train the operators very differently.”

And it raises questions about the efficacy of safety drivers in general. Can any human—even a highly trained one—be expected to pay perfect attention for hours on end, or snap out of a reverie to take control of a vehicle in an emergency?

The Tempe Police Department’s Vehicular Crimes Unit is still investigating Sunday’s incident. After its conclusion, the department will submit the case to the Maricopa County Attorney’s Office, for possible criminal charges.

An Uber spokesperson says the company is assisting authorities, and its self-driving fleets all over the country remain grounded.


A Self-driving Uber In Arizona Kills A Woman In First Fatal Crash Involving Pedestrian

An autonomous Uber car killed a woman in the street in Arizona, police said, in what appears to be the first reported fatal crash involving a self-driving vehicle and a pedestrian in the US.

Tempe police said the self-driving car was in autonomous mode at the time of the crash and that the vehicle hit a woman, who was walking outside of the crosswalk and later died at a hospital.

There was a vehicle operator inside the car at the time of the crash.

Uber said in a statement on Twitter: “Our hearts go out to the victim’s family. We are fully cooperating with local authorities in their investigation of this incident.” A spokesman declined to comment further on the crash.

The company said it was pausing its self-driving car operations in Phoenix, Pittsburgh, San Francisco and Toronto.




Dara Khosrowshahi, Uber’s CEO, tweeted: “Some incredibly sad news out of Arizona. We’re thinking of the victim’s family as we work with local law enforcement to understand what happened.”

Uber has been testing its self-driving cars in numerous states and temporarily suspended its vehicles in Arizona last year after a crash involving one of its vehicles, a Volvo SUV.

When the company first began testing its self-driving cars in California in 2016, the vehicles were caught running red lights, leading to a high-profile dispute between state regulators and the San Francisco-based corporation.

Police identified the victim as 49-year-old Elaine Herzberg and said she was walking outside of the crosswalk with a bicycle when she was hit at around 10pm on Sunday. Images from the scene showed a damaged bike.

The 2017 Volvo SUV was traveling at roughly 40 miles an hour, and it did not appear that the car slowed down as it approached the woman, said Tempe sergeant Ronald Elcock.

Elcock said he had watched footage of the collision, which has not been released to the public. He also identified the operator of the car as 44-year-old Rafaela Vasquez and said the operator was cooperative and showed no signs of impairment.

The self-driving technology is supposed to detect pedestrians, cyclists and others and prevent crashes.

John M Simpson, privacy and technology project director with Consumer Watchdog, said the collision highlighted the need for tighter regulations of the nascent technology.

“The robot cars cannot accurately predict human behavior, and the real problem comes in the interaction between humans and the robot vehicles,” said Simpson, whose advocacy group called for a national moratorium on autonomous car testing in the wake of the deadly collision.

Simpson said he was unaware of any previous fatal crashes involving an autonomous vehicle and a pedestrian.


GM Will Launch Robocars Without Steering Wheels Next Year

The future of driving doesn’t involve driving — at all.

That’s the big takeaway from a first peek inside General Motors’ new autonomous car, which lacks the steering wheel, pedals, manual controls and human drivers that have come to define the experience of riding inside an automobile for more than a century.

That means the Cruise AV — a fourth-generation autonomous vehicle based on the Chevy Bolt EV — is in total control.

GM submitted a petition Thursday to the Department of Transportation, asking the government to let it roll out the new vehicle, which it says is safe.




GM plans to mass produce the vehicle as early as next year, the automotive giant announced Friday.

The manufacturer is touting the vehicle as the world’s “first production-ready vehicle” built with the sole purpose of operating “safely on its own with no driver,” a degree of independence known as “level 4 autonomy.”

GM is one of several companies testing level 4 vehicles. A California-based autonomous vehicle startup called Zoox and Alphabet’s Waymo have also tested level 4 cars.

GM is already testing second and third generation self-driving Cruise AVs on busy streets in San Francisco and Phoenix with a human engineer in the vehicle.

The Cruise AV relies on cameras, radar and high-precision laser sensors known as lidar for navigation.

Beginning in 2019, the fourth-generation of that vehicle will be used in a ride-sharing program in multiple American cities, where “the vehicles will travel on a fixed route controlled by their mapping system,” Bloomberg reported.

To improve safety, the vehicles will share information with one another and rely on two computer systems, which operate simultaneously so that if one computer encounters a problem, the second computer can serve as a backup, according to GM’s self-driving safety report.
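Stripped to its essence, that redundancy scheme is a primary/backup arrangement: both computers compute a result, and the backup’s output is used only if the primary reports a fault. The sketch below is a generic illustration of that pattern, not GM’s actual fault-handling logic, and the field names are invented.

```python
# A generic sketch of a dual-computer failover pattern: two controllers
# run in parallel, and the backup's output is used only if the primary
# reports a fault. Field names and values are illustrative.
from dataclasses import dataclass

@dataclass
class ControlOutput:
    healthy: bool
    steering_deg: float
    braking: float

def select_output(primary, backup):
    """Use the primary controller's command unless it reports a fault."""
    if primary.healthy:
        return primary
    return backup  # fall back to the redundant computer

# Normal operation: primary is used. After a primary fault: backup takes over.
print(select_output(ControlOutput(True, 2.0, 0.0), ControlOutput(True, 2.1, 0.0)))
print(select_output(ControlOutput(False, 0.0, 0.0), ControlOutput(True, 2.1, 0.3)))
```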

The report says the Cruise AV was designed to handle chaotic, fluid conditions involving aggressive drivers, jaywalkers, bicyclists, delivery trucks and construction.

The company has access to vast dealership networks, nationwide influence and manufacturing prowess, potentially offering a GM-driven ride-hailing service the opportunity to supplant the Silicon Valley start-ups that have been seeking for years to disrupt the auto industry.


Google Discovers New Planet Which Proves Solar System Is Not Unique

The Kepler-90 star system has eight planets, like our own

Google has previously discovered lost tribes, missing ships and even a forgotten forest. But now it has also found two entire planets.

The technology giant used one of its algorithms to sift through thousands of signals sent back to Earth by Nasa’s Kepler space telescope.

One of the new planets was found hiding in the Kepler-90 star system, which is around 2,200 light years away from Earth.

The discovery is important because it takes the number of known planets in the star system up to eight, the same as our own Solar System. It is the first time that any other system has been found to have as many planets as ours.

Andrew Vanderburg, astronomer and Nasa Sagan Postdoctoral Fellow at The University of Texas, Austin, said: “The Kepler-90 star system is like a mini version of our solar system.

“You have small planets inside and big planets outside, but everything is scrunched in much closer.

“There is a lot of unexplored real estate in Kepler-90 system and it would almost be surprising if there were not more planets in the system.”




The planet Kepler-90i is a small, rocky world that orbits so close to its star that its surface temperature is a ‘scorchingly hot’ 800F (426C). It orbits its star once every 14 days.

The Google team applied a neural network to weak signals, recorded by the Kepler exoplanet-hunting telescope, that had been missed by humans.

Kepler has already discovered more than 2,500 exoplanets, with 1,000 more suspected.

The telescope spent four years scanning 150,000 stars looking for dips in their brightness which might suggest an orbiting planet was passing in front.
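The signal being hunted is a small, periodic dip in brightness. The snippet below fakes a light curve and flags such a dip with a crude threshold, just to illustrate the shape of the problem; the Google and Nasa work trains a neural network precisely because the real signals are far weaker and noisier than this toy example.

```python
# A toy illustration of the transit signal Kepler looks for: a periodic
# dip in a star's brightness when a planet crosses its face. This crude
# detector simply flags points far below the median; it is not the
# Google/NASA neural-network approach.
import numpy as np

rng = np.random.default_rng(1)
flux = 1.0 + rng.normal(0, 0.0005, size=2000)   # simulated light curve
flux[500:520] -= 0.01                            # a transit-like dip
flux[1500:1520] -= 0.01                          # repeating one period later

median = np.median(flux)
scatter = np.std(flux)
in_transit = flux < median - 5 * scatter         # points far below normal

print(int(in_transit.sum()), "points flagged as possible transit")
```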

Although the observation mission ended in 2013, the spacecraft recorded so much data during its four-year mission that scientists expect to be crunching it for many years to come.

The new planet Kepler-90i is about 30 per cent larger than Earth and very hot.

Christopher Shallue, senior software engineer at Google AI in Mountain View, California, who made the discovery, said the algorithm was so simple that it only took two hours to train to spot exoplanets.

Tests of the neural network correctly distinguished true planets from false positives 96 percent of the time. The team has promised to release all of the code so that amateurs can train computers to hunt for exoplanets of their own.

“Machine learning will become increasingly important for keeping pace with all this data and will help us make more discoveries than ever before,” said Mr Shallue.

“This is a really exciting discovery and a successful proof of concept in using neural networks to find planets even in challenging situations where signals are very weak.

“We plan to search all 150,000 stars. We hope that using our technique we will be able to find lots of planets, including planets like Earth.”
