Tag: artificial intelligence

How The Media Gets AI Alarmingly Wrong

In June of last year, five researchers at Facebook’s Artificial Intelligence Research unit published an article showing how bots can simulate negotiation-like conversations.

While for the most part the bots were able to maintain coherent dialogue, the researchers found that the software agents would occasionally generate strange sentences like: “Balls have zero to me to me to me to me to me to me to me to.”

On seeing these results, the team realized that they had failed to include a constraint limiting the bots to sentences within the parameters of spoken English, meaning that the bots had developed a kind of machine-English patois to communicate between themselves.

These findings were considered to be fairly interesting by other experts in the field, but not totally surprising or groundbreaking.

A month after this initial research was released, Fast Company published an article entitled “AI Is Inventing Language Humans Can’t Understand. Should We Stop It?”




The story focused almost entirely on how the bots occasionally diverged from standard English – which was not the main finding of the paper – and reported that after the researchers “realized their bots were chattering in a new language” they decided to pull the plug on the whole experiment, as if the bots were in some way out of control.

The ice of AI’s first winter only fully retreated at the beginning of this decade after a new generation of researchers started publishing papers about successful applications of a technique called “deep learning”.

While this was fundamentally a decades-old statistical technique similar to Rosenblatt’s perceptron, increases in computational power and availability of huge data sets meant that deep learning was becoming practical for tasks such as speech recognition, image recognition and language translation.
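
For readers who have never seen one, Rosenblatt’s perceptron really does boil down to a few lines of arithmetic: nudge the weights whenever the prediction is wrong. Here is a minimal, self-contained sketch in Python; the toy AND task, learning rate and epoch count are all invented for illustration and are not tied to any particular paper:

import numpy as np

# A minimal perceptron: learn a linear decision rule from labeled examples.
def train_perceptron(X, y, epochs=20, lr=0.1):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            error = target - pred
            w += lr * error * xi   # nudge the weights toward the correct answer
            b += lr * error
    return w, b

# Toy example: learn the logical AND of two inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([1 if x @ w + b > 0 else 0 for x in X])   # -> [0, 0, 0, 1]

Deep learning stacks many such units into layers and trains them with gradient descent, but the basic recipe of adjusting weights against data is the same.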

As reports of deep learning’s “unreasonable effectiveness” circulated among researchers, enrollments at universities in machine-learning classes surged, corporations started to invest billions of dollars to find talent familiar with the newest techniques, and countless startups attempting to apply AI to transport or medicine or finance were founded.

As this resurgence got under way, AI hype in the media resumed after a long hiatus.

In 2013, John Markoff wrote a feature in the New York Times about deep learning and neural networks with the headline “Brainlike Computers, Learning From Experience”.

Not only did the title recall the media hype of 60 years earlier, so did some of the article’s assertions about what was being made possible by the new technology.

Since then, far more melodramatic and overblown articles about “AI apocalypse”, “artificial brains”, “artificial superintelligence” and “creepy Facebook bot AIs” have filled the news feed daily.

Please like, share and tweet this article.

Pass it on: Popular Science

Your Child’s New Robot Buddy Makes Reading A Lot More Fun

Children are more enthusiastic about reading aloud when they do so to a specially designed robot, psychologists have found.

A team at the University of Wisconsin-Madison built the robot, named Minnie, as a “reading buddy” to children aged 10 to 12.

Over the two weeks, the children became more excited about books and more attached to the robot. “After one interaction the kids were generally telling us that it was nice to have someone to read with,” Joseph Michaelis, who led the study, said.




“By the end of two weeks they’re talking about how the robot was funny and silly and how they’d come home looking forward to seeing it.”

The research, published in the journal Science Robotics, is the latest in the effort to design machines that may augment learning or provide companionship.

In America, a robot with artificial intelligence acts as a “social mediator” for autistic children, allowing them to communicate with the wider world.

Minnie stayed with study subjects for two weeks.

Minnie, which is 33cm (13in) high, tracked the children’s progress in reading and every few pages reacted with a programmed comment. During a frightening chapter, for instance, it could say: “Oh, wow, I’m really scared.”
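
The study doesn’t publish Minnie’s code, but the behaviour described here, tracking progress and dropping in a canned remark every few pages, can be pictured with a small rule-based sketch like the one below. The comments, chapter “moods” and trigger interval are all invented for illustration; the real robot’s responses were authored by the Wisconsin researchers:

# Hypothetical sketch of a "reading buddy" that reacts every few pages.
COMMENTS = {
    "scary": "Oh, wow, I'm really scared.",
    "funny": "Ha! That part made me laugh.",
    "default": "I'm really enjoying this story.",
}

def react(page_number, chapter_mood, every_n_pages=5):
    """Return a canned remark every `every_n_pages` pages, otherwise nothing."""
    if page_number % every_n_pages != 0:
        return None
    return COMMENTS.get(chapter_mood, COMMENTS["default"])

for page in range(1, 16):
    remark = react(page, chapter_mood="scary")
    if remark:
        print(f"page {page}: {remark}")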

It also recommended books, taking into account each child’s ability and interests, and most of the children said that it made good choices.

Please like, share and tweet this article.

Pass it on: Popular Science

Facebook And NYU Want To Use AI To Make MRI Exams 10 Times Faster

MRI scans may someday be available to a lot more people in need.

Facebook on Monday said it’s teaming up with NYU School of Medicine’s Department of Radiology to launch “fastMRI,” a collaborative research project that aims to use artificial intelligence to make MRI (magnetic resonance imaging) scans 10 times faster.

Doctors and radiologists use MRI scanners to produce images that show in detail a patient’s organs, blood vessels, bones, soft tissues and more, which helps them diagnose problems.

However, completing an MRI scan can take from 15 minutes to over an hour, according to Facebook’s blog post.

That’s challenging for children and patients in a lot of pain, who can’t lie still for a long time. It also limits how many scans the hospital can do in a day.




If the project succeeds, MRI scans could be completed in about five minutes, thus making time for more people in need to receive scans.

The idea is to actually capture less data during MRI scans, making them faster, and then use AI to “fill in views omitted from the accelerated scan,” Facebook said in its blog post. The challenge is doing this without missing any important details.
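
In rough terms, “capturing less data” means sampling only a fraction of the scanner’s raw frequency-space (k-space) measurements and asking a model to fill in the rest. The sketch below only simulates that idea on a random array with NumPy; the real fastMRI project works with clinical k-space data and trained neural networks rather than the naive zero-filled reconstruction shown here:

import numpy as np

# Simulate an "accelerated" scan: keep ~25% of k-space lines, zero the rest,
# then reconstruct. A trained network would replace the naive inverse transform.
rng = np.random.default_rng(0)
image = rng.random((128, 128))            # stand-in for a fully sampled MR slice

kspace = np.fft.fft2(image)               # full set of k-space measurements
mask = rng.random(128) < 0.25             # sample roughly 1 in 4 phase-encode lines
undersampled = kspace * mask[:, None]     # the data an accelerated scan would collect

zero_filled = np.abs(np.fft.ifft2(undersampled))   # naive, aliased reconstruction
error = np.mean((zero_filled - image) ** 2)
print(f"zero-filled reconstruction error: {error:.4f}")
# fastMRI's bet is that a network conditioned on the undersampled data can fill in
# the omitted views far better than this zero-filled baseline.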

Facebook Artificial Intelligence Research, or FAIR, will work with NYU medical researchers to train artificial neural networks to recognize the structures of the human body.

The project will use image data from 10,000 clinical cases, comprising roughly 3 million MR images of the knee, brain and liver. Patients’ names and medical information aren’t included.

“We hope one day that because of this project, MRI will be able to replace X-rays for many applications, also leading to decreased radiation exposure to patients,” said Michael Recht, MD, chair of the Department of Radiology at NYU School of Medicine, in an email statement.

“Our collaboration is one between academia and industry in which we can leverage our complementary strengths to achieve a real-world result.”

Please like, share and tweet this article.

Pass it on: Popular Science

Aliens May Actually Be Billion-Year-Old Robots

This could ruin a lot of good science fiction movies … and create interesting plots for the next generation of them, not to mention influencing how humans deal with space aliens when they first encounter each other.

A timely article by The Daily Galaxy reviews the study “Alien Minds” by Susan Schneider, in which the professor and author discusses her theory that our first meeting with an extraterrestrial will be with a billion-year-old robot. Wait, what?

“I do not believe that most advanced alien civilizations will be biological. The most sophisticated civilizations will be postbiological, forms of artificial intelligence or alien superintelligence.”

Susan Schneider is an associate professor in the Department of Philosophy and the Cognitive Science Program at the University of Connecticut.

“Alien Minds” has been presented at NASA and the 2016 IdeaFestival in Kentucky and was published in The Impact of Discovering Life Beyond Earth.




It is her response to the question: “How would intelligent aliens think? Would they have conscious experiences? Would it feel a certain way to be an alien?”

“I actually think the first discovery of life on other planets will probably involve microbial life; I am concentrating on intelligent life in my work on this topic though. I only claim that the most advanced civilizations will likely be post biological.”

Schneider’s theory is based on three components or “observations.”

In her “short window observation,” she presents the idea that a civilization or species that can conquer long-distance space travel is already very close to moving from biological to artificially intelligent beings.

An example of this “short window” is the relatively brief 120 years it took humans to go from the first radio signals to cell phones.

Some of those species will be much older than us, which is Schneider’s “the greater age of alien civilizations” observation – one accepted by many.

And not just a few generations older but billions of years beyond us, making them far more advanced and intelligent. How much more?

Schneider’s last observation is that any species that can travel to Earth will be intelligent enough to develop robots that they can upload their brains to. The robots would probably be silicon-based for speed of “thinking” and durability, making them nearly immortal.

Please like, share and tweet this article.

Pass it on: Popular Science

Look Out, Wiki-Geeks. Now Google Trains AI To Write Wikipedia Articles

A team within Google Brain – the web giant’s crack machine-learning research lab – has taught software to generate Wikipedia-style articles by summarizing information on web pages… with varying degrees of success.

As we all know, the internet is a never-ending pile of articles, social media posts, memes, joy, hate, and blogs. It’s impossible to read and keep up with everything.

Using AI to tell pictures of dogs and cats apart is cute and all, but if such computers could condense information down into useful snippets, that would be really handy. It’s not easy, though.

A paper, out last month and just accepted for this year’s International Conference on Learning Representations (ICLR) in April, describes just how difficult text summarization really is.




A few companies have had a crack at it. Salesforce trained a recurrent neural network with reinforcement learning to take information and retell it in a nutshell, and the results weren’t bad.

However, the computer-generated sentences were simple and short; they lacked the creative flair and rhythm of text written by humans. Google Brain’s latest effort is slightly better: the sentences are longer and seem more natural.

Here’s an example for the topic “Wings over Kansas”, an aviation website for pilots and hobbyists.

In the paper’s side-by-side example, one paragraph is a computer-generated summary of the organization, and the other is taken from the Wikipedia page on the subject.

The model works by taking the top ten web pages of a given subject – excluding the Wikipedia entry – or scraping information from the links in the references section of a Wikipedia article.

Most of the selected pages are used for training, and a few are kept back to develop and test the system.
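
The pipeline in the paper is extractive-then-abstractive: first pull the passages most relevant to the topic out of those source pages, then have a neural model write the article from them. The extractive half can be approximated with ordinary tf-idf ranking, as in the simplified sketch below; the toy paragraphs are made up, and the abstractive Transformer-style model the paper uses is omitted entirely:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-ins for paragraphs scraped from the top web pages about a topic.
topic = "Wings over Kansas aviation website for pilots and hobbyists"
paragraphs = [
    "Wings over Kansas is an aviation website with news and features for pilots.",
    "The site offers interviews, photos and learning resources for hobbyists.",
    "Kansas is a state in the Midwestern United States.",
    "Sign up for our newsletter to receive weekly updates.",
]

# Rank paragraphs by similarity to the topic and keep the best ones as model input.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([topic] + paragraphs)
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
best = sorted(zip(scores, paragraphs), reverse=True)[:2]
for score, para in best:
    print(f"{score:.2f}  {para}")
# The abstractive stage (not shown) would condition a sequence model on the
# selected text to generate the Wikipedia-style article.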

Also, since the model relies on the content of the top ten websites returned for any particular topic, if those sites aren’t particularly credible, the resulting handiwork probably won’t be very accurate either.

You can’t trust everything you read online, of course.

Please like, share and tweet this article.

Pass it on: Popular Science

Google’s AI Sounds Like A Human On The Phone

It came as a total surprise: the most impressive demonstration at Google’s I/O conference yesterday was a phone call to book a haircut. Of course, this was a phone call with a difference.

It wasn’t made by a human, but by the Google Assistant, which did an uncannily good job of asking the right questions, pausing in the right places, and even throwing in the odd “mmhmm” for realism.

The crowd was shocked, but the most impressive thing was that the person on the receiving end of the call didn’t seem to suspect they were talking to an AI.

It’s a huge technological achievement for Google, but it also opens up a Pandora’s box of ethical and social challenges.




For example, does Google have an obligation to tell people they’re talking to a machine? Does technology that mimics humans erode our trust in what we see and hear?

And is this another example of tech privilege, where those in the know can offload boring conversations they don’t want to have to a machine, while those receiving the calls have to deal with some idiot robot?

In other words, it was a typical Google demo: equal parts wonder and worry.

Many experts working in this area agree that some form of disclosure is needed, although how exactly you would tell someone they’re speaking to an AI is a tricky question.

If the Assistant starts its calls by saying “hello, I’m a robot” then the receiver is likely to hang up. More subtle indicators could mean limiting the realism of the AI’s voice or including a special tone during calls.

Google tells The Verge it hopes a set of social norms will organically evolve that make it clear when the caller is an AI.

Please like, share and tweet this article.

Pass it on: Popular Science

Facebook Researchers Use AI To Fix Blinking In Photos

Facebook seems to be going all in on building a strong AI footprint for its platform, and now the social media giant has published a new study focusing on AI-based tools that fix selfies ruined by blinking.

There are times when we try to take the perfect selfie and, while the whole frame turns out exactly as we wanted, our eyes blink at just the wrong moment and ruin the picture.

This is where the new Facebook AI tool comes in, as it can replace your closed eyes with an open pair just by studying and analysing your previous pictures.

The idea of opening closed eyes in a portrait isn’t a new one; traditionally, however, the process involves pulling source material directly from another photo and transplanting it onto the blinking face.

Adobe has a simpler take on this in its Photoshop Elements software, which has a mode built for exactly this purpose.




When you use Photoshop Elements, the program prompts you to pick another photo from the same session, since it assumes that you took more than one, in which the same person’s eyes are open.

It then uses Adobe’s AI tech, Sensei, in order to try and blend the open eyes from the previous image directly into the shot with the blink.

The Facebook AI tool is somewhat similar, though there are small details that Adobe’s software can’t always get right, such as specific lighting conditions or the direction of shadows.

The Facebook Research methodology, on the other hand, relies on a deep neural network that supplies the missing data using the context of the area around the closed eyes.

This is done by a generative adversarial network (GAN), the same kind of technology used in deepfake videos, in which one person’s face is swapped onto another person’s body.

The GAN then uses data points from other images of the same person as a reference in order to fill in the data needed.

The Facebook AI tool also uses identifying marks to help generate the substitute data.

After that, a process called in-painting generates the data needed to replace the closed eyelids with actual eyes. This is exactly where the hard work of the GAN system comes into play, as it needs more than one image of the person to use as a reference and must try not to miss out on any detail.
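
A very loose sketch of that idea in PyTorch is shown below: a generator fills in a masked eye region, conditioned on a reference photo of the same person, while a discriminator judges whether the result looks real. Every layer size, the random stand-in images and the single training step are placeholders; Facebook’s actual exemplar GAN is far larger and is trained on real face photos:

import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # input: masked photo (3ch) + mask (1ch) + reference photo (3ch) = 7 channels
        self.net = nn.Sequential(
            nn.Conv2d(7, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, masked, mask, reference):
        filled = self.net(torch.cat([masked, mask, reference], dim=1))
        # only replace the masked (eye) region, keep the rest of the photo intact
        return masked * (1 - mask) + filled * mask

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Sigmoid(),
        )

    def forward(self, photo):
        return self.net(photo)

# One illustrative generator step on random tensors standing in for face crops.
G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
real = torch.rand(4, 3, 64, 64)               # open-eye photos
reference = torch.rand(4, 3, 64, 64)          # other photos of the same people
mask = torch.zeros(4, 1, 64, 64)
mask[:, :, 20:30, 16:48] = 1.0                # crude "eye region" mask
masked = real * (1 - mask)                    # the blinking photo with eyes removed

fake = G(masked, mask, reference)
adv_loss = -torch.log(D(fake) + 1e-8).mean()  # try to fool the discriminator
rec_loss = nn.functional.l1_loss(fake, real)  # stay close to the real photo
(adv_loss + rec_loss).backward()
opt_g.step()
print("generator step done, reconstruction L1:", rec_loss.item())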

When will Facebook introduce this tool to the masses? Will it be launched as a new feature for its social media platforms?

Questions like these are many; however, the Facebook AI tool is still in its development stages, and only time will tell what kind of revolution the GAN system will bring to the world of selfies.

Please like, share and tweet this article.

Pass it on: Popular Science

MIT’s New AI Can See Through Walls

MIT has given a computer x-ray vision, but it didn’t need x-rays to do it. The system, known as RF-Pose, uses a neural network and radio signals to track people through an environment and generate wireframe models in real time.

It doesn’t even need to have a direct line of sight to know how someone is walking, sitting, or waving their arms on the other side of a wall.

Neural networks have shown up in a lot of research lately when researchers need to create a better speech synthesis model, smarter computer vision, or an AI psychopath.

To train a neural network to do any of these things, you need an extensive data set of pre-labeled items.

That usually means using humans to do the labeling, which is simple enough when you’re trying to make an AI that can identify images of cats.




RF-Pose is based on radio waves, and those are much harder for humans to label in a way that makes sense to computers.

The MIT researchers decided to collect examples of people walking with both wireless signal pings and cameras.

The camera footage was processed to generate stick figures in place of the people, and the team matched that data up with the radio waves.

That combined data is what researchers used to train the neural network. With a strong association between the stick figures and RF data, the system is able to create stick figures based on radio wave reflections.
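
In other words, the camera acts as a teacher: an off-the-shelf vision model turns the video into keypoint “stick figures”, and those become the training labels for a second network that sees only the radio data. A heavily simplified version of that loop might look like the sketch below, where random tensors stand in for both modalities and the tiny network is a placeholder for MIT’s actual architecture:

import torch
import torch.nn as nn

NUM_KEYPOINTS = 14                     # joints in each stick figure

class RFPoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),    # 2 channels of RF heatmaps
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, NUM_KEYPOINTS * 2),             # (x, y) per joint
        )

    def forward(self, rf_frames):
        return self.net(rf_frames).view(-1, NUM_KEYPOINTS, 2)

def camera_teacher(video_frames):
    """Stand-in for an off-the-shelf pose estimator run on the synchronized video."""
    return torch.rand(video_frames.shape[0], NUM_KEYPOINTS, 2)

model = RFPoseNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(3):                                     # toy training loop
    rf = torch.rand(8, 2, 64, 64)                         # synchronized radio snapshots
    video = torch.rand(8, 3, 64, 64)                      # synchronized camera frames
    labels = camera_teacher(video)                        # stick figures from the camera
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(rf), labels)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.4f}")
# At test time only model(rf) is needed, which is why the system keeps working
# when a wall blocks the camera's view.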

Interestingly, the camera can’t see through walls, so the system was never explicitly trained to identify people on the other side of a barrier.

It just works because the radio waves bounce off a person on the other side of a wall just like they do in the same room. This even works with multiple people crossing paths.

The team noted that all subjects in the study consented to have their movements tracked by the AI.

In the real world, there are clear privacy implications. It’s possible a future version of the technology could be configured only to track someone after they perform a specific movement to activate the system and “opt-in.”

As for applications, it’s not just about spying on you through walls. The MIT team suggests RF-Pose could be of use in the medical field where it could track and analyze the way patients with muscle and nerve disorders get around.

It could also enable motion capture in video games — like Kinect but good.

Please like, share and tweet this article.

Pass it on: Popular Science

 

The World’s Fastest Supercomputer Is Back In America

Last week, the US Department of Energy and IBM unveiled Summit, America’s latest supercomputer, which is expected to bring the title of the world’s most powerful computer back to America from China, which currently holds the mantle with its Sunway TaihuLight supercomputer.

With a peak performance of 200 petaflops, or 200,000 trillion calculations per second, Summit more than doubles the top speeds of TaihuLight, which can reach 93 petaflops.

Summit is also capable of over 3 billion billion mixed-precision calculations per second, or 3.3 exaops, and has more than 10 petabytes of memory, which has allowed researchers to run the world’s first exascale scientific calculation.
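
For a sense of the units involved, the figures quoted here convert straightforwardly; a quick back-of-the-envelope check using only the numbers reported above:

# Unit sanity check for the figures quoted in this article.
summit_flops = 200e15            # 200 petaflops, Summit's peak
taihulight_flops = 93e15         # 93 petaflops, Sunway TaihuLight's peak
summit_exaops = 3.3e18           # mixed-precision operations per second

print(summit_flops / 1e12)               # 200,000 "trillion calculations per second"
print(summit_flops / taihulight_flops)   # ~2.15, i.e. "more than doubles" TaihuLight
print(summit_exaops / 1e18)              # 3.3 "billion billion" operations per second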

The $200 million supercomputer is an IBM AC922 system comprising 4,608 compute servers, each containing two 22-core IBM Power9 processors and six Nvidia Tesla V100 graphics processing unit accelerators.




Summit is also (relatively) energy-efficient, drawing just 13 megawatts of power, compared to the 15 megawatts TaihuLight pulls in.

Top500, the organization that ranks supercomputers around the world, is expected to place Summit atop its list when it releases its new rankings later this month.

Once it does — with these specs — Summit should remain the king of supercomputers for the immediate future.

Oak Ridge National Laboratory — the birthplace of the Manhattan Project — is also home to Titan, another supercomputer that was once the fastest in the world and now holds the title for fifth fastest supercomputer in the world.

Taking up 5,600 square-feet of floor space and weighing in at over 340 tons — which is more than a commercial aircraft — Summit is a truly massive system that would easily fill two tennis courts.

Summit will allow researchers to apply machine learning to areas like high-energy physics and human health, according to ORNL.

“Summit’s AI-optimized hardware also gives researchers an incredible platform for analyzing massive datasets and creating intelligent software to accelerate the pace of discovery,” Jeff Nichols, ORNL associate laboratory director for computing and computational sciences, said.

The system is connected by 185 miles of fiber-optic cables and can store 250 petabytes of data, which is equal to 74 years of HD video.

To keep Summit from overheating, more than 4,000 gallons of water are pumped through the system every minute, carrying away nearly 13 megawatts of heat from the system.

While Summit may be the fastest supercomputer in the world, for now, it is expected to be passed by Frontier, a new supercomputer slated to be delivered to ORNL in 2021 with an expected peak performance of 1 exaflop, or 1,000 petaflops.

Please like, share and tweet this article.

Pass it on: Popular Science

Facebook Researchers Used AI To Create A Universal Music Translator

Is Facebook pumping up the volume on what AI can mean to the future of music? You can decide after having a look at what Facebook AI Research scientists have been up to.

A number of sites including The Next Web have reported that they unveiled a neural network capable of translating music from one style, genre, and set of instruments to another.

You can check out their paper, “A Universal Music Translation Network,” by Noam Mor, Lior Wolf, Adam Polyak and Yaniv Taigman of Facebook AI Research.

A video showing the authors’ supplementary audio samples lets you hear what they did with material ranging from symphonies and string quartets to sounds of Africa, Elvis and Rihanna samples, and even human whistling.

In one example, they said they converted the audio of a Mozart symphony performed by an orchestra into audio in the style of a pianist playing Beethoven.




Basically, a neural network has been put to work to change the style of music. Listening to the samples, one wonders what the AI process is like in figuring out how to carry the music from one work to another.

Does it involve matching pitch? Memorizing musical notes? The Next Web’s Greene said no: their approach is an “unsupervised learning method” using “high-level semantics interpretation.”

Greene added that you could say “it plays by ear.” The method is unsupervised, in that it does not rely on supervision in the form of matched samples between domains or musical transcriptions, the team said.

Greene also translated, explaining that this was “a complex method of auto-encoding that allows the network to process audio from inputs it’s never been trained on.”
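
Architecturally, the paper describes a single shared encoder feeding a separate decoder per musical domain, trained as an autoencoder so the network can cope with inputs it has never heard. The toy sketch below mimics that shape with ordinary 1-D convolutions on random waveforms; the domain names, layer sizes and data are placeholders, and the WaveNet-style decoders and adversarial domain-confusion loss of the real system are omitted:

import torch
import torch.nn as nn

DOMAINS = ["piano", "strings", "whistling"]

shared_encoder = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv1d(16, 16, kernel_size=9, padding=4), nn.ReLU(),
)
decoders = nn.ModuleDict({
    name: nn.Conv1d(16, 1, kernel_size=9, padding=4) for name in DOMAINS
})

params = list(shared_encoder.parameters()) + list(decoders.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

for step in range(3):                            # toy training loop
    domain = DOMAINS[step % len(DOMAINS)]
    audio = torch.rand(4, 1, 1024)               # random stand-in waveforms
    optimizer.zero_grad()
    latent = shared_encoder(audio)               # domain-agnostic representation
    recon = decoders[domain](latent)             # decode back into the same domain
    loss = nn.functional.mse_loss(recon, audio)
    loss.backward()
    optimizer.step()
    print(f"step {step} ({domain}): loss {loss.item():.4f}")

# "Translation" then amounts to encoding, say, an orchestral recording and decoding
# the result with the piano decoder instead of the original domain's decoder.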

In the bigger picture, one can mark this attempt to translate styles and instruments as another sure sign of the intersection being crossed between AI and music, one that can change our pejorative view of “machine” music as inferior and canned.

Please like, share and tweet this article.

Pass it on: Popular Science