Tag: AI

How The Media Gets AI Alarmingly Wrong

In June of last year, five researchers at Facebook’s Artificial Intelligence Research unit published an article showing how bots can simulate negotiation-like conversations.

While for the most part the bots were able to maintain coherent dialogue, the researchers found that the software agents would occasionally generate strange sentences like: “Balls have zero to me to me to me to me to me to me to me to.”

On seeing these results, the team realized that they had failed to include a constraint limiting the bots to generating sentences within the parameters of spoken English, meaning that the bots had developed a type of machine-English patois to communicate between themselves.

These findings were considered to be fairly interesting by other experts in the field, but not totally surprising or groundbreaking.

A month after this initial research was released, Fast Company published an article entitled AI Is Inventing Language Humans Can’t Understand. Should We Stop It?




The story focused almost entirely on how the bots occasionally diverged from standard English – which was not the main finding of the paper – and reported that after the researchers “realized their bots were chattering in a new language” they decided to pull the plug on the whole experiment, as if the bots were in some way out of control.

The ice of AI’s first winter only fully retreated at the beginning of this decade after a new generation of researchers started publishing papers about successful applications of a technique called “deep learning”.

While this was fundamentally a decades-old statistical technique similar to Rosenblatt’s perceptron, increases in computational power and availability of huge data sets meant that deep learning was becoming practical for tasks such as speech recognition, image recognition and language translation.

As reports of deep learning’s “unreasonable effectiveness” circulated among researchers, enrollments at universities in machine-learning classes surged, corporations started to invest billions of dollars to find talent familiar with the newest techniques, and countless startups attempting to apply AI to transport or medicine or finance were founded.

As this resurgence got under way, AI hype in the media resumed after a long hiatus.

In 2013, John Markoff wrote a feature in the New York Times about deep learning and neural networks with the headline Brainlike Computers, Learning From Experience.

Not only did the title recall the media hype of 60 years earlier, so did some of the article’s assertions about what was being made possible by the new technology.

Since then, far more melodramatic and overblown articles about “AI apocalypse”, “artificial brains”, “artificial superintelligence” and “creepy Facebook bot AIs” have filled the news feed daily.

Please like, share and tweet this article.

Pass it on: Popular Science

Facebook And NYU Want To Use AI To Make MRI Exams 10 Times Faster

MRI scans may some day be available for a lot more people in need.

Facebook on Monday said it’s teaming up with NYU School of Medicine’s Department of Radiology to launch “fastMRI,” a collaborative research project that aims to use artificial intelligence to make MRI (magnetic resonance imaging) scans 10 times faster.

Doctors and radiologists use MRI scanners to produce images that show in detail a patient’s organs, blood vessels, bones, soft tissues and more, which helps them diagnose problems.

However, completing an MRI scan can take from 15 minutes to over an hour, according to Facebook’s blog post.

That’s challenging for children and patients in a lot of pain, who can’t lie still for a long time. It also limits how many scans the hospital can do in a day.




If the project succeeds, MRI scans could be completed in about five minutes, thus making time for more people in need to receive scans.

The idea is to actually capture less data during MRI scans, making them faster, and then use AI to “fill in views omitted from the accelerated scan,” Facebook said in its blog post. The challenge is doing this without missing any important details.

Facebook Artificial Intelligence Research, or FAIR, will work with NYU medical researchers to train artificial neural networks to recognize the structures of the human body.

The project will use image data from 10,000 clinical cases with roughly 3 million MRIs of the knee, brain and liver. Patients’ names and medical information aren’t included.
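
The project’s own models aren’t published in this post, but the core idea, scan less and let a network fill in the rest, can be sketched in a few lines. The following is a minimal illustration only, assuming a PyTorch-style setup with made-up shapes and a toy network; it is not the fastMRI code:

```python
# Minimal sketch of the "scan less, let a network fill in the rest" idea.
# NOT the fastMRI code: the mask, model and training loop are illustrative.
import torch
import torch.nn as nn

def undersample(kspace, keep_fraction=0.25):
    """Zero out a random subset of k-space columns to simulate a faster scan."""
    mask = (torch.rand(kspace.shape[-1]) < keep_fraction).to(kspace.dtype)
    return kspace * mask, mask

class SimpleReconNet(nn.Module):
    """Tiny CNN that maps an aliased (undersampled) image to a full image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = SimpleReconNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# Random tensors stand in for fully sampled MRI slices.
full_image = torch.rand(8, 1, 64, 64)
kspace = torch.fft.fft2(full_image)              # simulate the scanner's raw data
undersampled_k, _ = undersample(kspace)          # "accelerated" scan
aliased_image = torch.fft.ifft2(undersampled_k).abs()

prediction = model(aliased_image)                # network fills in the missing views
loss = loss_fn(prediction, full_image)
loss.backward()
optimizer.step()
print(f"reconstruction L1 loss: {loss.item():.4f}")
```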

“We hope one day that, because of this project, MRI will be able to replace X-rays for many applications, also leading to decreased radiation exposure for patients,” said Michael Recht, MD, chair of the department of radiology at NYU School of Medicine, in an email statement.

“Our collaboration is one between academia and industry in which we can leverage our complementary strengths to achieve a real-world result.”

Please like, share and tweet this article.

Pass it on: Popular Science

Look Out, Wiki-Geeks. Now Google Trains AI To Write Wikipedia Articles

A team within Google Brain – the web giant’s crack machine-learning research lab – has taught software to generate Wikipedia-style articles by summarizing information on web pages… to varying degrees of success.

As we all know, the internet is a never ending pile of articles, social media posts, memes, joy, hate, and blogs. It’s impossible to read and keep up with everything.

Using AI to tell pictures of dogs and cats apart is cute and all, but if such computers could condense information down into useful snippets, that would be really handy. It’s not easy, though.

A paper, out last month and just accepted for this year’s International Conference on Learning Representations (ICLR) in April, describes just how difficult text summarization really is.




A few companies have had a crack at it. Salesforce trained a recurrent neural network with reinforcement learning to take information and retell it in a nutshell, and the results weren’t bad.

However, the computer-generated sentences were simple and short; they lacked the creative flair and rhythm of text written by humans. Google Brain’s latest effort is slightly better: the sentences are longer and seem more natural.

Here’s an example for the topic: Wings over Kansas, an aviation website for pilots and hobbyists.

The paragraph on the left is a computer-generated summary of the organization, and the one on the right is taken from the Wikipedia page on the subject.

The model works by taking the top ten web pages of a given subject – excluding the Wikipedia entry – or scraping information from the links in the references section of a Wikipedia article.

Most of the selected pages are used for training, and a few are kept back to develop and test the system.
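
Under the hood, the approach is two-stage: first coarsely extract the passages most relevant to the topic, then let a neural network write the summary from them. As a rough sketch of the extractive half only, here is a TF-IDF ranking over made-up paragraphs; the topic string and snippets are illustrative stand-ins, and this is not Google Brain’s code:

```python
# Rough sketch of the extractive step only: rank paragraphs from scraped
# source pages by TF-IDF similarity to the topic, keeping the top few for
# the summarization model. Illustrative only; not Google Brain's pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

topic = "Wings over Kansas"            # hypothetical topic string
paragraphs = [                          # stand-ins for text scraped from the top web pages
    "Wings over Kansas is an aviation website founded in 1998.",
    "The site features interviews, news and resources for pilots and hobbyists.",
    "Kansas is a state in the Midwestern United States.",
    "Aviation education is a core part of the site's mission.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([topic] + paragraphs)

# Similarity of each paragraph to the topic query (row 0).
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
top_k = scores.argsort()[::-1][:2]

for idx in top_k:
    print(f"{scores[idx]:.2f}  {paragraphs[idx]}")
```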

Also, since the system relies on the top ten websites returned for any particular topic, if those sites aren’t particularly credible, the resulting handiwork probably won’t be very accurate either.

You can’t trust everything you read online, of course.

Please like, share and tweet this article.

Pass it on: Popular Science

Google’s AI Sounds Like A Human On The Phone

It came as a total surprise: the most impressive demonstration at Google’s I/O conference yesterday was a phone call to book a haircut. Of course, this was a phone call with a difference.

It wasn’t made by a human, but by the Google Assistant, which did an uncannily good job of asking the right questions, pausing in the right places, and even throwing in the odd “mmhmm” for realism.

The crowd was shocked, but the most impressive thing was that the person on the receiving end of the call didn’t seem to suspect they were talking to an AI.

It’s a huge technological achievement for Google, but it also opens up a Pandora’s box of ethical and social challenges.




For example, does Google have an obligation to tell people they’re talking to a machine? Does technology that mimics humans erode our trust in what we see and hear?

And is this another example of tech privilege, where those in the know can offload boring conversations they don’t want to have to a machine, while those receiving the calls have to deal with some idiot robot?

In other words, it was a typical Google demo: equal parts wonder and worry.

Many experts working in this area agree that some form of disclosure is needed, although how exactly you would tell someone they’re speaking to an AI is a tricky question.

If the Assistant starts its calls by saying “hello, I’m a robot” then the receiver is likely to hang up. More subtle indicators could mean limiting the realism of the AI’s voice or including a special tone during calls.

Google tells The Verge it hopes a set of social norms will organically evolve that make it clear when the caller is an AI.

Please like, share and tweet this article.

Pass it on: Popular Science

Facebook Researchers Use AI To Fix Blinking In Photos

Facebook seems to be going all in on building a strong AI footprint for its platform, and now the social media giant has published a new study focusing on AI-based tools that fix selfies ruined by blinking.

There are times when we try to take the perfect selfie and the whole frame turns out exactly as we wanted, but a blink ruins the picture.

This is where Facebook’s new AI-based tool comes in: it can replace your closed eyes with an open pair by studying and analysing your previous pictures.

The idea of opening closed eyes in a portrait isn’t a new one; however, the existing process involves pulling source material directly from another photo and transplanting it onto the blinking face.

Adobe has a similar but much simpler tool in Photoshop Elements, which has a mode built for this purpose.




When you use Photoshop Elements, the program prompts you to pick another photo from the same session, since it assumes that you took more than one, in which the same person’s eyes are open.

It then uses Adobe’s AI tech, Sensei, to try to blend the open eyes from the previous image directly into the shot with the blink.

The Facebook AI tool is somewhat similar to this, but there are small details that Adobe’s software can’t always get right, such as specific lighting conditions or the direction of shadows.

In the Facebook Research approach, by contrast, the same process of replacing closed eyes with open ones relies on a deep neural network that supplies the missing data using the context of the area around the closed eyes.

This is done by a generative adversarial network (GAN), a technology similar to the one used in deepfake videos, in which a person’s face is swapped onto another person’s body.

The GAN then uses data points from other images of the same person as a reference to fill in the missing data.

The Facebook AI tool also uses identifying marks to help generate the substitute data.

After that, a process called in-painting generates the data needed to replace the closed eyelids with actual eyes. This is exactly where the hard work of the GAN system comes into play, as it needs more than one image of the person to use as a reference and tries not to miss any detail.
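
To make the mechanics a bit more concrete, here is a heavily simplified sketch of that GAN in-painting setup: a generator fills the masked eye region conditioned on a reference photo of the same person, and a discriminator scores realism. The architecture, shapes and losses below are assumptions for illustration, not Facebook’s actual model:

```python
# Minimal GAN in-painting sketch. Illustrative only; not Facebook's model.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: masked photo (3ch) + mask (1ch) + reference photo (3ch) = 7 channels
        self.net = nn.Sequential(
            nn.Conv2d(7, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, masked, mask, reference):
        filled = self.net(torch.cat([masked, mask, reference], dim=1))
        # Only replace the masked (closed-eye) region; keep the rest untouched.
        return masked * (1 - mask) + filled * mask

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, image):
        return self.net(image).mean(dim=(1, 2, 3))  # one realism score per image

G, D = Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()

# Toy batch: random tensors standing in for real photos and eye masks.
real = torch.rand(4, 3, 64, 64)
mask = torch.zeros(4, 1, 64, 64)
mask[:, :, 24:40, 16:48] = 1.0            # pretend "eye" region
reference = torch.rand(4, 3, 64, 64)      # open-eye photo of the same person
masked = real * (1 - mask)

fake = G(masked, mask, reference)
d_loss = bce(D(real), torch.ones(4)) + bce(D(fake.detach()), torch.zeros(4))
g_loss = bce(D(fake), torch.ones(4)) + (fake - real).abs().mean()  # adversarial + reconstruction
print(f"D loss {d_loss.item():.3f}, G loss {g_loss.item():.3f}")
```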

When will Facebook introduce this tool to the masses? Will it be launched as a new feature for its social media platforms?

Questions like these are many; however, the Facebook AI tool is still in development, and only time will tell what kind of revolution the GAN system will bring to the world of selfies.

Please like, share and tweet this article.

Pass it on: Popular Science

The World’s Fastest Supercomputer Is Back In America

Last week, the US Department of Energy and IBM unveiled Summit, America’s latest supercomputer, which is expected to bring the title of the world’s most powerful computer back to America from China, which currently holds the mantle with its Sunway TaihuLight supercomputer.

With a peak performance of 200 petaflops, or 200,000 trillion calculations per second, Summit more than doubles the top speeds of TaihuLight, which can reach 93 petaflops.

Summit is also capable of over 3 billion billion mixed-precision calculations per second, or 3.3 exaops, and has more than 10 petabytes of memory, which has allowed researchers to run the world’s first exascale scientific calculation.

The $200 million supercomputer is an IBM AC922 system utilizing 4,608 compute servers containing two 22-core IBM Power9 processors and six Nvidia Tesla V100 graphics processing unit accelerators each.
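
Those headline figures are easy to sanity-check from the node configuration. The quick arithmetic below assumes roughly 7.8 teraflops of double-precision peak per Tesla V100, a commonly quoted spec that isn’t stated in the article:

```python
# Back-of-the-envelope check of Summit's peak figure from its node counts.
# The per-GPU number (~7.8 double-precision teraflops for a Tesla V100) is an
# assumption commonly quoted for the SXM2 part, not a figure from the article.
nodes = 4608
gpus_per_node = 6
v100_fp64_tflops = 7.8

print(f"Total GPUs: {nodes * gpus_per_node}")  # 27,648 V100s
gpu_peak_pflops = nodes * gpus_per_node * v100_fp64_tflops / 1000
print(f"GPUs alone: ~{gpu_peak_pflops:.0f} petaflops peak")
# That lands in the same ballpark as the 200-petaflop figure quoted above,
# with the Power9 CPUs contributing a few more petaflops on top.
```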




Summit is also (relatively) energy-efficient, drawing just 13 megawatts of power, compared to the 15 megawatts TaihuLight pulls in.

Top500, the organization that ranks supercomputers around the world, is expected to place Summit atop its list when it releases its new rankings later this month.

Once it does — with these specs — Summit should remain the king of supercomputers for the immediate future.

Oak Ridge National Laboratory — the birthplace of the Manhattan Project — is also home to Titan, another supercomputer that was once the fastest in the world and now holds the title for fifth fastest supercomputer in the world.

Taking up 5,600 square feet of floor space and weighing in at over 340 tons — which is more than a commercial aircraft — Summit is a truly massive system that would easily fill two tennis courts.

Summit will allow researchers to apply machine learning to areas like high-energy physics and human health, according to ORNL.

“Summit’s AI-optimized hardware also gives researchers an incredible platform for analyzing massive datasets and creating intelligent software to accelerate the pace of discovery,” Jeff Nichols, ORNL associate laboratory director for computing and computational sciences, said.

The system is connected by 185 miles of fiber-optic cables and can store 250 petabytes of data, which is equal to 74 years of HD video.

To keep Summit from overheating, more than 4,000 gallons of water are pumped through the system every minute, carrying away nearly 13 megawatts of heat from the system.

While Summit may be the fastest supercomputer in the world, for now, it is expected to be passed by Frontier, a new supercomputer slated to be delivered to ORNL in 2021 with an expected peak performance of 1 exaflop, or 1,000 petaflops.

Please like, share and tweet this article.

Pass it on: Popular Science

Facebook Researchers Used AI To Create A Universal Music Translator

Is Facebook pumping up the volume on what AI can mean to the future of music? You can decide after having a look at what Facebook AI Research scientists have been up to.

A number of sites, including The Next Web, have reported that the team unveiled a neural network capable of translating music from one style, genre, and set of instruments to another.

You can check out their paper, “A Universal Music Translation Network,” by Noam Mor, Lior Wolf, Adam Polyak and Yaniv Taigman of Facebook AI Research.

A video showing the authors’ supplementary audio samples lets you hear what they did with material ranging from symphonies and string quartets to sounds of Africa, Elvis and Rihanna samples, and even human whistling.

In one example, they said they converted the audio of a Mozart symphony performed by an orchestra into audio in the style of a pianist playing Beethoven.




Basically, a neural network has been put to work to change the style of music. Listening to the samples, one wonders what the AI process is like: how does it figure out how to carry the music from one work to another?

Does it involve matched pitch? Memorizing musical notes? Greene said no; their approach is an “unsupervised learning method” using “high-level semantics interpretation.”

Greene added that you could say “it plays by ear.” The method is unsupervised, in that it does not rely on supervision in the form of matched samples between domains or musical transcriptions, said the team.

Greene also translated the jargon, explaining that this was “a complex method of auto-encoding that allows the network to process audio from inputs it’s never been trained on.”
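
That description maps onto a familiar structure: a single shared encoder, one decoder per musical domain, and a domain-confusion term that keeps the encoder from leaking which domain the audio came from. The sketch below is a toy version of that structure, with tiny 1-D convolutions standing in for the WaveNet-style components the actual paper uses:

```python
# Highly simplified sketch of a shared-encoder / per-domain-decoder setup.
# The real system uses WaveNet-style components; these tiny nets are stand-ins.
import torch
import torch.nn as nn

class Encoder(nn.Module):          # shared across all musical domains
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv1d(1, 16, 9, padding=4), nn.ReLU(),
                                 nn.Conv1d(16, 16, 9, padding=4))
    def forward(self, audio):
        return self.net(audio)

class Decoder(nn.Module):          # one per domain (piano, orchestra, ...)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv1d(16, 16, 9, padding=4), nn.ReLU(),
                                 nn.Conv1d(16, 1, 9, padding=4))
    def forward(self, latent):
        return self.net(latent)

class DomainClassifier(nn.Module):  # adversary: tries to guess the source domain
    def __init__(self, n_domains):
        super().__init__()
        self.net = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                                 nn.Linear(16, n_domains))
    def forward(self, latent):
        return self.net(latent)

domains = ["piano", "orchestra"]
encoder = Encoder()
decoders = {name: Decoder() for name in domains}
classifier = DomainClassifier(len(domains))

audio = torch.rand(2, 1, 1024)                 # toy batch of "piano" audio
latent = encoder(audio)

# Reconstruction: the piano decoder should be able to rebuild piano audio.
recon_loss = (decoders["piano"](latent) - audio).abs().mean()

# Domain confusion: train the encoder so the classifier cannot tell which
# domain the latent came from (here, by pushing it toward a uniform guess).
logits = classifier(latent)
confusion_loss = -torch.log_softmax(logits, dim=1).mean()

# At translation time, simply decode the same latent with another decoder:
orchestra_version = decoders["orchestra"](latent)
print(recon_loss.item(), confusion_loss.item(), orchestra_version.shape)
```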

In the bigger picture, one can see this attempt to translate styles and instruments as another sure sign of the growing intersection between AI and music, one that could change our dismissive view of “machine” music as inferior and canned.

Please like, share and tweet this article.

Pass it on: Popular Science

At Google I/O 2018, Expect All AI All The Time

 

For Google, its annual I/O developer conference isn’t just a place to show off the next major version of Android and get coders excited about building apps.

Though that stuff is a big part of the show, I/O is also a chance for Google to flex its AI muscle and emphasize its massive reach at a time when every major tech company is racing to best each other in artificial intelligence.

And with its emphasis on cloud-based software and apps, I/O is the most important event of the year for Google—at least as long as its hardware efforts are still such a small fraction of its overall business.




Android P Is For … Probably?

Just like every year, Android will be front and center at the 2018 edition of I/O. It’s almost a guarantee that we’ll see a new version of Android P, which was first released as a developer preview in March.

So far, we know that a lot of the changes from Android O to P have been visual in nature; notifications have been redesigned, and the quick settings menu has gotten a refresh.

There’s also been a lot of chatter around “Material Design 2,” the next iteration of Google’s unifying design language.

Material Design was first unveiled at I/O four years ago, so it’s quite possible we’ll see the next version’s formal debut.

Newly redesigned Chrome tabs have already been spotted as part of a possible Material Design refresh, along with references to a “touch optimized” Chrome.

Talkin’ About AI

But artificial intelligence, more than Android and Chrome OS, is likely to be the thread that weaves every platform announcement at I/O together.

Whether that’s in consumer-facing apps like Google Assistant and Google Photos, cloud-based machine learning engines like TensorFlow, or even keynote mentions of AI’s impact on jobs.

Speaking of Google Assistant, late last week Google shared some notable updates around the voice-powered digital helper, which now runs on more than 5,000 devices and even allows you to purchase Fandango tickets with your voice.

That’s all well and fun, but one of the most critical aspects of any virtual assistant (in addition to compatibility) is how easy it is to use.

It wouldn’t be entirely surprising to see Google taking steps to make Assistant that much more accessible, whether that’s through software changes, like “slices” of Assistant content that shows up outside of the app, or hardware changes that involve working with OEM partners to offer more quick-launch solutions.

Google’s day-one keynote kicks off today, Tuesday May 8, at 10 am Pacific time.

Please like, share and tweet this article.

Pass it on: Popular Science

How To Watch Mark Zuckerberg’s Keynote At Facebook’s F8 Developer Conference

Facebook’s annual F8 developer conference kicks off this morning, roughly a month and a half after the Cambridge Analytica scandal completely redefined the conversation around data privacy and social networking platforms.

That means F8’s keynote address, which in years past has focused on the frontiers of new technology like virtual and augmented reality and artificial intelligence, will also have to reckon with the hard conversations on responsibility and accountability that have made up Facebook’s biggest existential crisis to date.

The whole controversy may have even postponed the company’s plans to reveal its rumored smart speaker, known internally as Portal, at F8 amid fears of Facebook’s overreach and concerns over having the company listening inside consumers’ homes.




Of course, there will be news completely unrelated to Cambridge Analytica. Facebook is expected to talk more about its plans for VR hardware over at Oculus.

We’ll hear more about the company’s push into AR to take on Google and Snapchat since first debuting its intelligent camera platform at last year’s F8.

We’ll also hear more about the company’s secretive Building 8 division, which this time a year ago announced it was working on brain-computer interfaces.

Former DARPA director Regina Dugan has since left her post as head of Building 8, so we’re eager to hear how those more outlandish projects are coming along in her absence.

There’s a keynote on day two that takes place at 1PM ET / 10 AM PT on Wednesday, May 2nd, and that will likely be when we’ll hear more about Building 8 developments.

But the Cambridge Analytica situation has forced Facebook to make radical changes to its developer platform, which makes a developer conference like F8 an especially interesting time to hear how the company plans to move forward with its platform and entice app makers to build products on top of its core service.

Facebook has restricted or shut down numerous high-profile APIs and curtailed developers’ access to user data in a variety of ways, in hopes of preventing future data abuse situations.

So what has typically been a rather quiet, developer-focused affair has been transformed into more of a litmus test for Facebook’s handling of the data privacy scandal.

Naturally, everyone’s eyes will be on CEO Mark Zuckerberg and how he plans to address the elephant in the room when he takes the stage for today’s opening keynote.

If you’re interested in tuning in live and following along with The Verge’s coverage, see below for the best ways to do so.

How to follow along?

Starting time: San Francisco: 10AM / New York: 1PM / London: 6PM / Berlin: 7PM / Moscow: 8PM / New Delhi: 10:30PM / Beijing: 12:30AM (May 2nd) / Tokyo: 2AM (May 2nd) / Sydney: 3AM (May 2nd)

Live stream: Facebook will be live streaming the keynote over on its dedicated F8 website.

Please like, share and tweet this article.

Pass it on: Popular Science

MIT’s New Wearable Device Can ‘Hear’ The Words You Say In Your Head

If you’ve read any sort of science fiction, it’s likely you’ve heard about sub-vocalization, the practice of silently saying words in your head.

It’s common when we read (though it does slow you down), but it’s only recently begun to be used as a way to interact with our computers and mobile devices.

To that end, MIT researchers have created a device you wear on your face that can measure neuromuscular signals that get triggered when you subvocalize.




While the white gadget now looks like some weird medical device strapped to your face, it’s easy to see future applications getting smaller and less obvious, as well as more useful in our mobile lives (including Hey Siri and OK Google situations).

The MIT system has electrodes that pick up the signals when you verbalize internally as well as bone-conduction headphones, which use vibrations delivered to the bones of your inner ear without obstructing your ear canal.

The signals are sent to a computer that uses neural networks to distinguish words. So far, the system has been used to do fun things like navigating a Roku, asking for the time and reporting your opponent’s moves in chess to get optimal counter moves via the computer, in utter silence.
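
At its core this is a signal-classification pipeline: windows of multi-channel electrode readings go in, a predicted word comes out. The sketch below is a toy stand-in for that idea; the channel count, window length, vocabulary and network are all assumptions, not MIT’s model:

```python
# Toy sketch of classifying subvocalized words from electrode signals:
# a small 1-D convolutional network maps a window of multi-channel readings
# to one of a few command words. Shapes and vocabulary are made up.
import torch
import torch.nn as nn

VOCAB = ["up", "down", "left", "right", "select"]   # hypothetical command set

class SubvocalClassifier(nn.Module):
    def __init__(self, channels=7, n_words=len(VOCAB)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, n_words)

    def forward(self, signals):              # signals: (batch, channels, time)
        return self.head(self.features(signals).squeeze(-1))

model = SubvocalClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One fake training step on random data standing in for real recordings.
signals = torch.randn(16, 7, 250)             # 16 windows, 7 electrodes, 250 samples
labels = torch.randint(0, len(VOCAB), (16,))
loss = loss_fn(model(signals), labels)
loss.backward()
optimizer.step()

predicted = model(signals).argmax(dim=1)
print([VOCAB[i] for i in predicted[:5]])
```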

“The motivation for this was to build an IA device — an intelligence-augmentation device,” said MIT grad student and lead author Arnav Kapur in a statement.

“Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”

Please like, share and tweet this article.

Pass it on: Popular Science