Tag: artificial intelligence

Facebook Researchers Use AI To Fix Blinking In Photos

Facebook seems to be going all in on building a strong AI footprint for its platform. The social media giant has published a new study focusing on an AI-based tool that fixes selfies ruined by blinking.

There are times when we try to take the perfect selfie: the whole frame turns out exactly as we wanted, but a blink ruins the picture.

This is where Facebook’s new AI-based tool comes in: it can replace your closed eyes with an open pair by studying and analysing your previous pictures.

The idea of opening closed eyes in a portrait isn’t new; traditionally, the process involves pulling source material directly from another photo and transplanting it onto the blinking face.

Adobe offers a simpler take on this in its Photoshop Elements software, which has a mode built for exactly this purpose.

When you use Photoshop Elements, the program prompts you to pick another photo from the same session in which the same person’s eyes are open, since it assumes you took more than one shot.

It then uses Adobe’s AI technology, Sensei, to blend the open eyes from that image into the shot with the blink.

The Facebook AI tool is somewhat similar, but there are small details that Adobe’s software can’t always get right, such as specific lighting conditions or the direction of shadows.

In the Facebook Research approach, by contrast, the same process of replacing closed eyes with open ones depends on a deep neural network that supplies the missing data using the context of the area around the closed eyes.

This is done with a generative adversarial network (GAN), the same kind of technology used in deepfake videos, in which one person’s face is swapped onto another person’s body.

The GAN then uses data points from other images of the same person as a reference to fill in the missing data.

The Facebook AI tool also uses identifying marks to help generate the substitute data.

After that, a process called in-painting generates the data needed to replace closed eyelids with actual eyes. This is exactly where the GAN system’s hard work comes into play: it needs more than one image of the person to use as a reference, and it has to try not to miss any detail.
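
To make the idea concrete, here is a minimal, purely illustrative sketch (in PyTorch) of exemplar-conditioned in-painting with a GAN: a generator fills in the masked eye region using both the surrounding context and a reference photo of the same person, while a discriminator judges whether the result looks real. The architecture, layer sizes, and names below are assumptions for illustration, not Facebook’s published model.

```python
# Minimal sketch (not Facebook's actual model): exemplar-conditioned eye in-painting
# with a GAN. The generator fills the masked eye region of a face, conditioned on a
# reference photo of the same person with open eyes; the discriminator judges realism.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # Encode the masked face (3 channels) plus a binary mask channel.
        self.face_enc = nn.Sequential(
            nn.Conv2d(4, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Encode the open-eye reference photo of the same person.
        self.ref_enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Decode the fused features back to an RGB image (the in-painted face).
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, masked_face, mask, reference):
        f = self.face_enc(torch.cat([masked_face, mask], dim=1))
        r = self.ref_enc(reference)
        return self.dec(torch.cat([f, r], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.LazyLinear(1),
        )

    def forward(self, img):
        return self.net(img)  # real/fake logit

# Dummy forward pass on 64x64 crops, just to show the data flow.
G, D = Generator(), Discriminator()
masked_face = torch.randn(1, 3, 64, 64)
mask = torch.zeros(1, 1, 64, 64)   # 1s over the closed-eye region in practice
reference = torch.randn(1, 3, 64, 64)
fake = G(masked_face, mask, reference)
print(fake.shape, D(fake).shape)   # torch.Size([1, 3, 64, 64]) torch.Size([1, 1])
```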

When will Facebook introduce this tool to the masses? Will it be launched as a new feature for its social media platforms?

Questions like these are many; however, the Facebook AI tool is still in development, and only time will tell what kind of revolution the GAN system will bring to the world of selfies.

Please like, share and tweet this article.

Pass it on: Popular Science

MIT’s New AI Can See Through Walls

MIT has given a computer x-ray vision, but it didn’t need x-rays to do it. The system, known as RF-Pose, uses a neural network and radio signals to track people through an environment and generate wireframe models in real time.

It doesn’t even need to have a direct line of sight to know how someone is walking, sitting, or waving their arms on the other side of a wall.

Neural networks have shown up in a lot of research lately when researchers need to create a better speech synthesis model, smarter computer vision, or an AI psychopath.

To train a neural network to do any of these things, you need an extensive data set of pre-labeled items.

That usually means using humans to do the labeling, which is simple enough when you’re trying to make an AI that can identify images of cats.

RF-Pose is based on radio waves, and those are much harder for humans to label in a way that makes sense to computers.

The MIT researchers decided to collect examples of people walking with both wireless signal pings and cameras.

The camera footage was processed to generate stick figures in place of the people, and the team matched that data up with the radio waves.

That combined data is what researchers used to train the neural network. With a strong association between the stick figures and RF data, the system is able to create stick figures based on radio wave reflections.
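
As a rough illustration of this cross-modal supervision idea (not MIT’s actual code), the sketch below trains a small network to predict 2D body keypoints from RF features, using keypoints extracted from synchronized camera footage as the labels. The tensor shapes, joint count, and network are assumptions.

```python
# Illustrative cross-modal training loop (assumed shapes, not the RF-Pose codebase):
# the camera pipeline supplies stick-figure keypoints that act as labels for the
# radio-frequency (RF) input, so the trained network can later work from RF alone.
import torch
import torch.nn as nn

N_KEYPOINTS = 14          # assumed number of skeleton joints
RF_FEATURES = 256         # assumed size of a flattened RF heatmap/feature vector

model = nn.Sequential(    # small MLP standing in for the real convolutional model
    nn.Linear(RF_FEATURES, 512), nn.ReLU(),
    nn.Linear(512, N_KEYPOINTS * 2),        # (x, y) per joint
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def training_step(rf_batch, camera_keypoints):
    """rf_batch: [B, RF_FEATURES]; camera_keypoints: [B, N_KEYPOINTS*2] produced by
    running a vision model on synchronized video (the 'free' labels)."""
    pred = model(rf_batch)
    loss = loss_fn(pred, camera_keypoints)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch just to show the loop runs; real data would be synchronized RF + video.
rf = torch.randn(8, RF_FEATURES)
labels = torch.rand(8, N_KEYPOINTS * 2)
print(training_step(rf, labels))
```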

Interestingly, the camera can’t see through walls, so the system was never explicitly trained to identify people on the other side of a barrier.

It just works because the radio waves bounce off a person on the other side of a wall just like they do in the same room. This even works with multiple people crossing paths.

The team noted that all subjects in the study consented to have their movements tracked by the AI.

In the real world, there are clear privacy implications. It’s possible a future version of the technology could be configured only to track someone after they perform a specific movement to activate the system and “opt-in.”

As for applications, it’s not just about spying on you through walls. The MIT team suggests RF-Pose could be of use in the medical field where it could track and analyze the way patients with muscle and nerve disorders get around.

It could also enable motion capture in video games — like Kinect but good.

Please like, share and tweet this article.

Pass it on: Popular Science

The World’s Fastest Supercomputer Is Back In America

Last week, the US Department of Energy and IBM unveiled Summit, America’s latest supercomputer, which is expected to bring the title of the world’s most powerful computer back to America from China, which currently holds the mantle with its Sunway TaihuLight supercomputer.

With a peak performance of 200 petaflops, or 200,000 trillion calculations per second, Summit more than doubles the top speeds of TaihuLight, which can reach 93 petaflops.

Summit is also capable of over 3 billion billion mixed-precision calculations per second, or 3.3 exaops, and has more than 10 petabytes of memory, which has allowed researchers to run the world’s first exascale scientific calculation.

The $200 million supercomputer is an IBM AC922 system utilizing 4,608 compute servers, each containing two 22-core IBM Power9 processors and six Nvidia Tesla V100 graphics processing unit (GPU) accelerators.
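
As a back-of-the-envelope check on those headline numbers (plain arithmetic, not official figures), multiplying the node count by the accelerator count and an assumed ~7.5 double-precision teraflops per V100 lands close to the quoted 200-petaflop peak:

```python
# Rough sanity check of Summit's headline figures from the specs quoted above.
# The per-GPU peak below is an assumption (~7.5 double-precision TFLOPS per V100).
nodes = 4608
gpus_per_node = 6
cpu_cores_per_node = 2 * 22             # two 22-core Power9 CPUs per node

total_gpus = nodes * gpus_per_node      # 27,648 GPUs
total_cpu_cores = nodes * cpu_cores_per_node

assumed_tflops_per_gpu = 7.5            # double-precision, assumption
peak_pflops = total_gpus * assumed_tflops_per_gpu / 1000

print(f"GPUs: {total_gpus:,}, CPU cores: {total_cpu_cores:,}")
print(f"Estimated peak: ~{peak_pflops:.0f} petaflops (quoted: 200)")
```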

Summit is also (relatively) energy-efficient, drawing just 13 megawatts of power, compared to the 15 megawatts TaihuLight pulls in.

Top500, the organization that ranks supercomputers around the world, is expected to place Summit atop its list when it releases its new rankings later this month.

Once it does — with these specs — Summit should remain the king of supercomputers for the immediate future.

Oak Ridge National Laboratory — which was founded as part of the Manhattan Project — is also home to Titan, another supercomputer that was once the fastest in the world and now ranks as the fifth fastest.

Taking up 5,600 square feet of floor space and weighing in at over 340 tons — which is more than a commercial aircraft — Summit is a truly massive system that would easily fill two tennis courts.

Summit will allow researchers to apply machine learning to areas like high-energy physics and human health, according to ORNL.

“Summit’s AI-optimized hardware also gives researchers an incredible platform for analyzing massive datasets and creating intelligent software to accelerate the pace of discovery,” Jeff Nichols, ORNL associate laboratory director for computing and computational sciences, said.

The system is connected by 185 miles of fiber-optic cables and can store 250 petabytes of data, which is equal to 74 years of HD video.

To keep Summit from overheating, more than 4,000 gallons of water are pumped through the system every minute, carrying away nearly 13 megawatts of heat from the system.

While Summit may be the fastest supercomputer in the world, for now, it is expected to be passed by Frontier, a new supercomputer slated to be delivered to ORNL in 2021 with an expected peak performance of 1 exaflop, or 1,000 petaflops.

Please like, share and tweet this article.

Pass it on: Popular Science

Facebook Researchers Used AI To Create A Universal Music Translator

Is Facebook pumping up the volume on what AI can mean to the future of music? You can decide after having a look at what Facebook AI Research scientists have been up to.

A number of sites including The Next Web have reported that they unveiled a neural network capable of translating music from one style, genre, and set of instruments to another.

You can check out their paper, “A Universal Music Translation Network,” by Noam Mor, Lior Wolf, Adam Polyak, and Yaniv Taigman of Facebook AI Research.

A video of the authors’ supplementary audio samples lets you hear what they did, with samples ranging from a symphony and a string quartet to sounds of Africa, Elvis and Rihanna tracks, and even human whistling.

In one example, they said they converted the audio of a Mozart symphony performed by an orchestra into audio in the style of a pianist playing Beethoven.

Basically, a neural network has been put to work to change the style of music. Listening to the samples, one wonders what the AI process is like: how does it figure out how to carry the music from one work to another?

Does it involve matched pitch? Memorizing musical notes? Greene said no; their approach is an “unsupervised learning method” using “high-level semantics interpretation.”

Greene added that you could say “it plays by ear.” The method is unsupervised, in that it does not rely on supervision in the form of matched samples between domains or musical transcriptions, said the team.

Greene also translated the jargon, explaining that this is “a complex method of auto-encoding that allows the network to process audio from inputs it’s never been trained on.”
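
A heavily simplified sketch of that idea is below: a single shared encoder paired with one decoder per musical domain, trained to reconstruct audio so no matched pairs between domains are needed. The published model is a WaveNet-style autoencoder; the tiny networks, frame size, and domain count here are illustrative assumptions.

```python
# Minimal sketch of the shared-encoder / per-domain-decoder idea (illustrative only;
# the published model is a WaveNet-style autoencoder, not the tiny MLPs used here).
import torch
import torch.nn as nn

FRAME = 1024          # assumed length of an audio frame, in samples
N_DOMAINS = 3         # e.g. piano, strings, whistling

encoder = nn.Sequential(nn.Linear(FRAME, 256), nn.ReLU(), nn.Linear(256, 64))
decoders = nn.ModuleList(
    nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, FRAME))
    for _ in range(N_DOMAINS)
)

def reconstruct(audio_frame, domain):
    """Train-time path: encode, then decode with that domain's own decoder."""
    return decoders[domain](encoder(audio_frame))

def translate(audio_frame, target_domain):
    """Inference path: encode audio from any domain, decode into the target domain."""
    return decoders[target_domain](encoder(audio_frame))

x = torch.randn(1, FRAME)                   # a dummy frame of source audio
print(translate(x, target_domain=1).shape)  # torch.Size([1, 1024])
```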

In the bigger picture, this attempt to translate styles and instruments is another sure sign of the intersection being crossed between AI and music, one that could change our dismissive view of “machine” music as inferior and canned.

Please like, share and tweet this article.

Pass it on: Popular Science

At Google I/O 2018, Expect All AI All The Time

For Google, its annual I/O developer conference isn’t just a place to show off the next major version of Android and get coders excited about building apps.

Though that stuff is a big part of the show, I/O is also a chance for Google to flex its AI muscle and emphasize its massive reach at a time when every major tech company is racing to best each other in artificial intelligence.

And with its emphasis on cloud-based software and apps, I/O is the most important event of the year for Google—at least as long as its hardware efforts are still such a small fraction of its overall business.

Android P Is For … Probably?

Just like every year, Android will be front and center at the 2018 edition of I/O. It’s almost a guarantee that we’ll see more of Android P, which was first released as a developer preview in March.

So far, we know that a lot of the changes from Android O to P are visual in nature; notifications have been redesigned, and the quick settings menu has gotten a refresh.

There’s also been a lot of chatter around “Material Design 2,” the next iteration of Google’s unifying design language.

Material Design was first unveiled at I/O four years ago, so it’s quite possible we’ll see the next version’s formal debut.

Newly redesigned Chrome tabs have already been spotted as part of a possible Material Design refresh, along with references to a “touch optimized” Chrome.

Talkin’ About AI

But artificial intelligence, more than Android and Chrome OS, is likely to be the thread that weaves every platform announcement at I/O together, whether that’s in consumer-facing apps like Google Assistant and Google Photos, cloud-based machine learning engines like TensorFlow, or even keynote mentions of AI’s impact on jobs.

Speaking of Google Assistant, late last week Google shared some notable updates around the voice-powered digital helper, which now runs on more than 5,000 devices and even allows you to purchase Fandango tickets with your voice.

That’s all well and good, but one of the most critical aspects of any virtual assistant (in addition to compatibility) is how easy it is to use.

It wouldn’t be entirely surprising to see Google taking steps to make Assistant that much more accessible, whether that’s through software changes, like “slices” of Assistant content that shows up outside of the app, or hardware changes that involve working with OEM partners to offer more quick-launch solutions.

Google’s day-one keynote kicks off today, Tuesday May 8, at 10 am Pacific time.

Please like, share and tweet this article.

Pass it on: Popular Science

MIT’s New Wearable Device Can ‘Hear’ The Words You Say In Your Head

If you’ve read any sort of science fiction, it’s likely you’ve heard about sub-vocalization, the practice of silently saying words in your head.

It’s common when we read (though it does slow you down), but it’s only recently begun to be used as a way to interact with our computers and mobile devices.

To that end, MIT researchers have created a device you wear on your face that can measure neuromuscular signals that get triggered when you subvocalize.

While the white gadget now looks like some weird medical device strapped to your face, it’s easy to see future applications getting smaller and less obvious, as well as useful with our mobile lives (including Hey Siri and OK Google situations).

The MIT system has electrodes that pick up the signals when you verbalize internally, as well as bone-conduction headphones, which deliver sound as vibrations through the bones of your skull to your inner ear without obstructing your ear canal.

The signals are sent to a computer that uses neural networks to distinguish words. So far, the system has been used to do fun things like navigating a Roku, asking for the time and reporting your opponent’s moves in chess to get optimal counter moves via the computer, in utter silence.
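
As a simplified illustration of that last step (not the actual MIT pipeline), a classifier over windows of electrode signals might look like the sketch below; the channel count, window length, and command vocabulary are assumptions.

```python
# Illustrative classifier for subvocalization signals (assumed shapes, not MIT's model):
# a window of multi-channel electrode readings goes in, a word from a small fixed
# vocabulary comes out.
import torch
import torch.nn as nn

CHANNELS = 7            # assumed number of electrodes on the face and jaw
WINDOW = 250            # assumed samples per decision window
VOCAB = ["up", "down", "left", "right", "select", "time"]   # assumed command words

model = nn.Sequential(
    nn.Conv1d(CHANNELS, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, len(VOCAB)),
)

def predict_word(signal_window):
    """signal_window: [1, CHANNELS, WINDOW] tensor of electrode readings."""
    logits = model(signal_window)
    return VOCAB[int(logits.argmax(dim=1))]

print(predict_word(torch.randn(1, CHANNELS, WINDOW)))   # untrained, so arbitrary
```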

“The motivation for this was to build an IA device — an intelligence-augmentation device,” said MIT grad student and lead author Arnav Kapur in a statement.

“Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”

Please like, share and tweet this article.

Pass it on: Popular Science

A Self-driving Uber In Arizona Kills A Woman In First Fatal Crash Involving Pedestrian

An autonomous Uber car killed a woman in the street in Arizona, police said, in what appears to be the first reported fatal crash involving a self-driving vehicle and a pedestrian in the US.

Tempe police said the self-driving car was in autonomous mode at the time of the crash and that the vehicle hit a woman, who was walking outside of the crosswalk and later died at a hospital.

There was a vehicle operator inside the car at the time of the crash.

Uber said in a statement on Twitter: “Our hearts go out to the victim’s family. We are fully cooperating with local authorities in their investigation of this incident.” A spokesman declined to comment further on the crash.

The company said it was pausing its self-driving car operations in Phoenix, Pittsburgh, San Francisco and Toronto.

Dara Khosrowshahi, Uber’s CEO, tweeted: “Some incredibly sad news out of Arizona. We’re thinking of the victim’s family as we work with local law enforcement to understand what happened.”

Uber has been testing its self-driving cars in numerous states and temporarily suspended its vehicles in Arizona last year after a crash involving one of its vehicles, a Volvo SUV.

When the company first began testing its self-driving cars in California in 2016, the vehicles were caught running red lights, leading to a high-profile dispute between state regulators and the San Francisco-based corporation.

Police identified the victim as 49-year-old Elaine Herzberg and said she was walking outside of the crosswalk with a bicycle when she was hit at around 10pm on Sunday. Images from the scene showed a damaged bike.

The 2017 Volvo SUV was traveling at roughly 40 miles an hour, and it did not appear that the car slowed down as it approached the woman, said Tempe sergeant Ronald Elcock.

Elcock said he had watched footage of the collision, which has not been released to the public. He also identified the operator of the car as Rafael Vasquez, 44, and said he was cooperative and there were no signs of impairment.

The self-driving technology is supposed to detect pedestrians, cyclists and others and prevent crashes.

John M Simpson, privacy and technology project director with Consumer Watchdog, said the collision highlighted the need for tighter regulations of the nascent technology.

“The robot cars cannot accurately predict human behavior, and the real problem comes in the interaction between humans and the robot vehicles,” said Simpson, whose advocacy group called for a national moratorium on autonomous car testing in the wake of the deadly collision.

Simpson said he was unaware of any previous fatal crashes involving an autonomous vehicle and a pedestrian.

Please like, share and tweet this article.

Pass it on: Popular Science

You’ll Never Dance Alone With This Artificial Intelligence Project

Your next dance partner might not be a person.

A new project from the Georgia Institute of Technology allows people to get jiggy with a computer-controlled dancer, which “watches” the person and improvises its own moves based on prior experiences.

When the human responds, the computerized figure or “virtual character” reacts again, creating an impromptu dance couple based on artificial intelligence (AI).

The LuminAI project is housed inside a 15-foot-tall geodesic dome, designed and constructed by Georgia Tech digital media master’s student Jessica Anderson, and lined with custom-made projection panels for dome projection mapping.

The surfaces allow people to watch their own shadowy avatar as it struts with a virtual character named VAI, which learns how to dance by paying attention to which moves the current user is doing and when.

The more moves it sees, the better and deeper the computer’s dance vocabulary. It then uses this vocabulary as a basis for future improvisation.

The system uses Kinect devices to capture the person’s movement, which is then projected as a digitally enhanced silhouette on the dome’s screens.

The computer analyzes the dance moves being performed and leans on its memory to choose its next move.

The team says this improvisation is one of the most important parts of the project. The avatar recognizes patterns, but doesn’t react the same way every time.

That means that the person must improvise too, which leads to greater creativity all around. All the while, the computer is capturing these new experiences and storing the information to use as a basis for future dance sessions.
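
As a toy illustration of that improvise-from-memory loop (not the LuminAI codebase), the sketch below stores observed moves as a vocabulary and responds with a remembered move that is similar to the current one, chosen with a little randomness so it doesn’t react the same way every time. The pose representation and similarity measure are assumptions.

```python
# Toy improvisation loop in the spirit described above (not the LuminAI code):
# remember every observed move, then respond with a remembered move that is close
# to the current one, picked with some randomness so responses vary.
import random

class DancePartner:
    def __init__(self):
        self.vocabulary = []          # past moves, each a list of joint angles

    def observe(self, move):
        self.vocabulary.append(move)

    def respond(self, current_move):
        if not self.vocabulary:
            return current_move       # nothing learned yet: mirror the human
        # Score remembered moves by closeness to the current one (smaller = closer).
        def distance(m):
            return sum((a - b) ** 2 for a, b in zip(m, current_move))
        candidates = sorted(self.vocabulary, key=distance)[:3]   # a few near matches
        return random.choice(candidates)

partner = DancePartner()
for _ in range(20):                                   # pretend Kinect frames
    partner.observe([random.uniform(-1, 1) for _ in range(10)])
print(partner.respond([0.0] * 10))
```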

LuminAI was unveiled for the first time this past weekend in Atlanta at the Hambidge Art Auction in partnership with the Goat Farm Arts Center.

It was featured within a dance and technology performance, in a work called Post, as a finalist for the Field Experiment ATL grant. T. Lang Dance performed set choreography with avatars and virtual characters within the dome.

Post is the fourth and final installment of Lang’s Post Up series, which focuses on the stark realities and situational complexities after an emotional reunion between long lost souls.

Please like, share and tweet this article.

Pass it on: New Scientist

Google Clips: A Smart Camera That Doesn’t Make The Grade

Picture this: you’re hanging out with your kids or pets and they spontaneously do something interesting or cute that you want to capture and preserve.

But by the time you’ve gotten your phone out and its camera opened, the moment has passed and you’ve missed your opportunity to capture it.

That’s the main problem that Google is trying to solve with its new Clips camera, a $249 device available starting today that uses artificial intelligence to automatically capture important moments in your life.

Google says it’s for all of the in-between moments you might miss when your phone or camera isn’t in your hand.

It is meant to capture your toddler’s silly dance or your cat getting lost in an Amazon box without requiring you to take the picture.

The other issue Google is trying to solve with Clips is letting you spend more time interacting with your kids directly, without having a phone or camera separating you, while still getting some photos.

That’s an appealing pitch to both parents and pet owners alike, and if the Clips camera system is able to accomplish its goal, it could be a must-have gadget for them.

But if it fails, then it’s just another gadget that promises to make life easier, but requires more work and maintenance than it’s worth.

The problem for Google Clips is it just doesn’t work that well.

Before we get into how well Clips actually works, I need to discuss what it is and what exactly it’s doing because it really is unlike any camera you’ve used before.

At its core, the Clips camera is a hands-free automatic point-and-shoot camera that’s sort of like a GoPro, but considerably smaller and flatter.

It has a cute, unassuming appearance that is instantly recognizable as a camera, or at least an icon of a camera app on your phone.

Google, aware of how a “camera that automatically takes pictures when it sees you” is likely to be perceived, is clearly trying to make the Clips appear friendly, with its white-and-teal color scheme and obvious camera-like styling.

But among the people I showed the camera to while explaining what it’s supposed to do, “it’s creepy” was a common reaction.

One thing that I’ve discovered is that people know right away it’s a camera and react to it just like any other camera.

That might mean avoiding its view when they see it, or, like in the case of my three-year-old, walking up to it and smiling or picking it up.

That has made it tough to capture candids, since, for the Clips to really work, it needs to be close to its subject.

Maybe over time, your family would learn to ignore it and those candid shots could happen, but in my couple weeks of testing, my family hasn’t acclimated to its presence.

The Clips’ camera sensor can capture 12-megapixel images at 15 frames per second, which it then saves to its 16GB of internal storage that’s good for about 1,400 seven-second clips.
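
A quick back-of-the-envelope check on those storage figures (plain arithmetic based on the numbers quoted above, not a spec sheet):

```python
# Rough arithmetic on the quoted Clips storage figures (illustrative, not a spec sheet).
storage_gb = 16
clips = 1400
clip_seconds = 7

mb_per_clip = storage_gb * 1000 / clips          # ~11.4 MB per seven-second clip
mb_per_second = mb_per_clip / clip_seconds       # ~1.6 MB of footage per second
total_hours = clips * clip_seconds / 3600        # ~2.7 hours of clips in total

print(f"~{mb_per_clip:.1f} MB per clip, ~{mb_per_second:.1f} MB/s, "
      f"~{total_hours:.1f} hours of footage")
```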

The battery lasts roughly three hours between charges.

Included with the camera is a silicone case that makes it easy to prop up almost anywhere or, yes, clip it to things. It’s not designed to be a body camera or to be worn.

Instead, it’s meant to be placed in positions where it can capture you in the frame as well.

There are other accessories you can buy, like a case that lets you mount the Clips camera to a tripod for more positioning options, but otherwise, using the Clips camera is as simple as turning it on and putting it where you want it.

Once the camera has captured a bunch of clips, you use the app to browse through them on your phone, edit them down to shorter versions, grab still images, or just save the whole thing to your phone’s storage for sharing and editing later.

The Clips app is supposed to learn based on which clips you save and deem “important” and then prioritize capturing similar clips in the future.

You can also hit a toggle to view “suggested” clips for saving, which is basically what the app thinks you’ll like out of the clips it has captured.
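
To make that “learn from what you save” behavior concrete, here is a purely hypothetical sketch of one way such prioritization could work: score new clips by their similarity to clips the user has already saved. Nothing here reflects Google’s actual implementation.

```python
# Hypothetical prioritization by similarity to previously saved clips (illustrative
# only; this is not how Google Clips is actually implemented).
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def score_clip(new_clip_features, saved_clip_features):
    """Return the best similarity between a new clip and anything the user saved."""
    if not saved_clip_features:
        return 0.0
    return max(cosine(new_clip_features, saved) for saved in saved_clip_features)

saved = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]]   # e.g. embeddings of saved "kid dancing" clips
candidates = {"toddler_dance": [0.85, 0.15, 0.05], "empty_room": [0.0, 0.1, 0.9]}
suggested = max(candidates, key=lambda name: score_clip(candidates[name], saved))
print(suggested)   # toddler_dance
```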

Google’s definitely onto something here. The idea is an admirable first step toward a new kind of camera that doesn’t get between me and my kids. But first steps are tricky — ask any toddler!

Usually, after you take your first step, you fall down. To stand back up, Google Clips needs to justify its price, the hassle of setting it up, and the fiddling between it and my phone.

It needs to reassure me that by trusting it and putting my phone away, I won’t miss anything important, and I won’t be burdened by having to deal with a lot of banal captures.

Otherwise, it’s just another redundant gadget that I have to invest too much time and effort into managing to get too little in return.

That’s a lot to ask of a tiny little camera, and this first version doesn’t quite get there. To live up to it all, Clips needs to be both a better camera and a smarter one.

Please like, share and tweet this article.

Pass it on: Popular Science

The Myths Of Robots: They Are Strong, Smart And Evil

Some of today’s top techies and scientists are very publicly expressing their concerns over apocalyptic scenarios that are likely to arise as a result of machines with motives.

Among the fearful are intellectual heavyweights like Stephen Hawking, Elon Musk, and Bill Gates, who all believe that advances in the field of machine learning will soon yield self-aware A.I.s that seek to destroy us.

Or perhaps just apathetically dispose of us, much like scum getting obliterated by a windshield wiper.

In fact, Dr. Hawking told the BBC, “The development of full artificial intelligence could spell the end of the human race.”

Indeed, there is little doubt that future A.I. will be capable of doing significant damage.

For example, it is conceivable that robots could be programmed to function as tremendously dangerous autonomous weapons unlike any seen before.

Additionally, it is easy to imagine an unconstrained software application that spreads throughout the Internet, severely mucking up our most efficient and relied upon medium for global exchange.

But these scenarios are categorically different from ones in which machines decide to turn on us, defeat us, make us their slaves, or exterminate us.

In this regard, we are unquestionably safe. On a sadder note, we are just as unlikely to someday have robots that decide to befriend us or show us love without being specifically prompted by instructions to do so.

This is because such intentional behavior from an A.I. would undoubtedly require a mind, as intentionality can only arise when something possesses its own beliefs, desires, and motivations.

The type of A.I. that includes these features is known amongst the scientific community as “Strong Artificial Intelligence”. Strong A.I., by definition, should possess the full range of human cognitive abilities.

This includes self-awareness, sentience, and consciousness, as these are all features of human cognition.

On the other hand, “Weak Artificial Intelligence” refers to non-sentient A.I. The Weak A.I. Hypothesis states that our robots—which run on digital computer programs—can have no conscious states, no mind, no subjective awareness, and no agency.

Such A.I. cannot experience the world qualitatively, and although it may exhibit seemingly intelligent behavior, it is forever limited by its lack of a mind.

A failure to recognize the importance of this strong/weak distinction could be contributing to Hawking and Musk’s existential worries, both of whom believe that we are already well on a path toward developing Strong A.I.

To them it is not a matter of “if”, but “when”.

But the fact of the matter is that all current A.I. is fundamentally Weak A.I., and this is reflected by today’s computers’ total absence of any intentional behavior whatsoever.

Although there are some very complex and relatively convincing robots out there that appear to be alive, upon closer examination they all reveal themselves to be as motiveless as the common pocket calculator.

This is because brains and computers work very differently. Both compute, but only one understands—and there are some very compelling reasons to believe that this is not going to change.

It appears that there is a more technical obstacle that stands in the way of Strong A.I. ever becoming a reality.

Please like, share and tweet this article.

Pass it on: Popular Science