Facebook Researchers Use AI To Fix Blinking In Photos

Facebook seems to be going all in on building a strong AI footprint for its platform, and the social media giant has now published a new study on an AI-based tool that fixes selfies ruined by blinking.

We have all been there: the whole frame of the perfect selfie turns out exactly as we wanted, but a blink ruins the picture.

This is where Facebook's new AI tool comes in: it can replace your closed eyes with an open pair by studying and analyzing your previous pictures.

The idea of opening closed eyes in a portrait isn't a new one; traditionally, the process involves pulling source material directly from another photo and transplanting it onto the blinking face.

Adobe takes a similar but much simpler approach in its Photoshop Elements software, which has a mode built for exactly this purpose.

When you use Photoshop Elements, the program prompts you to pick another photo from the same session (it assumes you took more than one) in which the same person's eyes are open.

It then uses Adobe's AI tech, Sensei, to blend the open eyes from that image into the shot with the blink.

Facebook's AI tool is broadly similar, but it targets the small details that Adobe's software can't always get right, such as specific lighting conditions or the direction of shadows.

In the Facebook Research methodology, the same process of replacing closed eyes with open ones relies on a deep neural network that supplies the missing data using the context of the area around the closed eyes.

This is done with a generative adversarial network (GAN), the same class of technology used in deepfake videos, where one person's face is swapped onto another person's body.

The GAN then uses data points from other images of the same person as references to fill in the missing data.

The tool also uses identifying marks to help generate the substitute data.

After that, a process called in-painting generates the data needed to replace closed eyelids with actual eyes, and this is exactly where the hard work of the GAN comes into play: it needs more than one image of the person to use as a reference, and it must try not to miss out on any detail.
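
None of Facebook's code is public, but the general shape of an exemplar-conditioned in-painting GAN is easy to sketch. The PyTorch toy below is a minimal illustration only: the layer sizes, the six-channel input (the masked photo stacked with a reference photo), and the L1 loss weight are assumptions, not the paper's architecture.

```python
# Minimal sketch of exemplar-conditioned eye in-painting with a GAN.
# All shapes, layer sizes, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Fills in the masked eye region, conditioned on a reference photo."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),  # masked photo + reference
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),  # in-painted RGB output
        )

    def forward(self, masked, reference):
        return self.net(torch.cat([masked, reference], dim=1))

class Discriminator(nn.Module):
    """Scores how real an (in-painted) face crop looks."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, img):
        return self.net(img).mean(dim=(1, 2, 3))  # one realism logit per image

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(masked, reference, target):
    # Discriminator: real open-eyed crops score high, generated ones low.
    fake = G(masked, reference).detach()
    loss_d = bce(D(target), torch.ones(target.size(0))) + \
             bce(D(fake), torch.zeros(fake.size(0)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator while staying close to ground truth.
    fake = G(masked, reference)
    loss_g = bce(D(fake), torch.ones(fake.size(0))) + \
             10.0 * nn.functional.l1_loss(fake, target)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The key idea sits in `train_step`: the generator is pushed both to fool the discriminator and to stay faithful to the person's real, open-eyed appearance taken from the reference photos.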

When will Facebook introduce this tool to the masses? Will it be launched as a new feature for its social media platforms?

Questions like these are many; however, the Facebook AI tool is still in its development stages, and only time will tell what kind of revolution the GAN system will bring to the world of selfies.

Please like, share and tweet this article.

Pass it on: Popular Science

Apple Is Partnering With Pixar As Part Of Its Big Push Into Augmented Reality

Apple is partnering with computer animation studio Pixar to boost its augmented reality initiative, the company announced Monday during its annual WWDC conference for app developers.

“In iOS 12, we wanted to make an easy way to experience AR across the [eco]system, and to do that we got together with some of the greatest minds in 3D, at Pixar,” Apple senior vice president Craig Federighi said.

Together, Apple and Pixar developed a new file format for AR called “USDZ.” It’s a compact and simple format that’s designed to let people share AR content “while retaining great 3D graphics and even animations.”

The USDZ format addresses the typically large file sizes of AR content, which can make it hard to share easily and quickly.
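
USDZ builds on Pixar's open-source USD toolchain, which ships with Python bindings. As a rough sketch, assuming the `pxr` package from a USD distribution is installed and a `model.usdc` scene exists on disk, packaging it into a single shareable archive looks something like this:

```python
# Rough sketch: bundling a USD scene (plus the textures it references)
# into a single .usdz archive via Pixar's open-source USD Python API.
# Assumes the pxr bindings are installed and model.usdc exists on disk.
from pxr import Sdf, UsdUtils

ok = UsdUtils.CreateNewUsdzPackage(Sdf.AssetPath("model.usdc"), "model.usdz")
print("packaged" if ok else "packaging failed")
```

A single self-contained file is far easier to pass around than a folder of meshes and textures, which is exactly the problem USDZ is aimed at.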


Companies are already adopting the format; Adobe, for one, is bringing USDZ support to its Creative Cloud platform, which includes apps like Photoshop and Dimension.

Once iOS 12 is released in the fall, AR content can be shared in the USDZ format in apps like Safari, Messages, and Mail, and can be managed in the Files app.

ARKit 2

Federighi also announced Apple’s latest version of its AR platform, called ARKit 2.

ARKit 2 will offer improved face tracking, more realistic rendering, support for 3D object detection, and the ability to start an AR experience based on a real-world physical object or space.

ARKit 2 will also support shared experiences, where two or more people can play AR games together.

Please like, share and tweet this article.

Pass it on: Popular Science

MIT’s New AI Can See Through Walls

MIT has given a computer x-ray vision, but it didn’t need x-rays to do it. The system, known as RF-Pose, uses a neural network and radio signals to track people through an environment and generate wireframe models in real time.

It doesn’t even need to have a direct line of sight to know how someone is walking, sitting, or waving their arms on the other side of a wall.

Neural networks have shown up in a lot of research lately when researchers need to create a better speech synthesis model, smarter computer vision, or an AI psychopath.

To train a neural network to do any of these things, you need an extensive data set of pre-labeled items.

That usually means using humans to do the labeling, which is simple enough when you’re trying to make an AI that can identify images of cats.

RF-Pose is based on radio waves, and those are much harder for humans to label in a way that makes sense to computers.

The MIT researchers decided to collect examples of people walking with both wireless signal pings and cameras.

The camera footage was processed to generate stick figures in place of the people, and the team matched that data up with the radio waves.

That combined data is what researchers used to train the neural network. With a strong association between the stick figures and RF data, the system is able to create stick figures based on radio wave reflections.
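
In code, cross-modal supervision is simply a training loop in which the labels come from a different sensor than the inputs. The toy PyTorch sketch below illustrates the idea; the tensor shapes, the 14-keypoint count, and the `pose_from_camera` stand-in are assumptions for illustration, not MIT's actual pipeline.

```python
# Toy sketch of RF-Pose-style cross-modal supervision: a camera-based pose
# estimator provides the training targets for a network that only sees RF.
# Shapes, names, and the camera-side helper are illustrative assumptions.
import torch
import torch.nn as nn

class RFPoseNet(nn.Module):
    """Predicts 2D keypoint heatmaps from radio-frequency reflections."""
    def __init__(self, n_keypoints=14):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),  # 2 RF antenna channels
            nn.Conv2d(32, n_keypoints, 3, padding=1),   # one heatmap per joint
        )

    def forward(self, rf):
        return self.net(rf)

def pose_from_camera(frames):
    """Stand-in for an off-the-shelf vision pose estimator that converts
    synchronized camera frames into stick-figure keypoint heatmaps."""
    return torch.rand(frames.size(0), 14, 64, 64)  # fake labels for the sketch

model = RFPoseNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One synchronized (RF, camera) batch: the RF model learns to reproduce
# the stick figures that the camera pipeline produced.
rf_batch = torch.randn(8, 2, 64, 64)
camera_batch = torch.randn(8, 3, 480, 640)
target = pose_from_camera(camera_batch)       # labels come from the camera...
loss = nn.functional.mse_loss(model(rf_batch), target)
opt.zero_grad(); loss.backward(); opt.step()  # ...but the model sees only RF
```

Because the camera is needed only to produce the targets, it can be unplugged at inference time, which is exactly why the trained model keeps working when a wall blocks the camera's view.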

Interestingly, the camera can’t see through walls. So, the system was never explicitly trained in identifying people on the other side of a barrier.

It just works because the radio waves bounce off a person on the other side of a wall just like they do in the same room. This even works with multiple people crossing paths.

The team noted that all subjects in the study consented to have their movements tracked by the AI.

In the real world, there are clear privacy implications. It’s possible a future version of the technology could be configured only to track someone after they perform a specific movement to activate the system and “opt-in.”

As for applications, it’s not just about spying on you through walls. The MIT team suggests RF-Pose could be of use in the medical field where it could track and analyze the way patients with muscle and nerve disorders get around.

It could also enable motion capture in video games — like Kinect but good.

Please like, share and tweet this article.

Pass it on: Popular Science


The World’s Fastest Supercomputer Is Back In America

Last week, the US Department of Energy and IBM unveiled Summit, America’s latest supercomputer, which is expected to bring the title of the world’s most powerful computer back to America from China, which currently holds the mantle with its Sunway TaihuLight supercomputer.

With a peak performance of 200 petaflops, or 200,000 trillion calculations per second, Summit more than doubles the top speeds of TaihuLight, which can reach 93 petaflops.

Summit is also capable of over 3 billion billion mixed-precision calculations per second, or 3.3 exaops, and packs more than 10 petabytes of memory, which has allowed researchers to run the world's first exascale scientific calculation.

The $200 million supercomputer is an IBM AC922 system comprising 4,608 compute servers, each containing two 22-core IBM Power9 processors and six Nvidia Tesla V100 graphics processing unit accelerators.

Summit is also (relatively) energy-efficient, drawing just 13 megawatts of power, compared to the 15 megawatts TaihuLight pulls in.
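
Those figures are easy to sanity-check with a few lines of Python, using only the numbers quoted above:

```python
# Back-of-the-envelope comparison using the peak figures quoted above.
summit_pflops, summit_mw = 200, 13
taihulight_pflops, taihulight_mw = 93, 15

print(f"Speedup: {summit_pflops / taihulight_pflops:.2f}x")                     # ~2.15x
print(f"Summit:     {summit_pflops / summit_mw:.1f} petaflops per MW")          # ~15.4
print(f"TaihuLight: {taihulight_pflops / taihulight_mw:.1f} petaflops per MW")  # ~6.2
```

At peak, Summit delivers roughly 2.15 times TaihuLight's speed while drawing less power, working out to about 2.5 times the performance per megawatt.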

Top500, the organization that ranks supercomputers around the world, is expected to place Summit atop its list when it releases its new rankings later this month.

Once it does — with these specs — Summit should remain the king of supercomputers for the immediate future.

Oak Ridge National Laboratory — the birthplace of the Manhattan Project — is also home to Titan, another supercomputer that was once the fastest in the world and now holds the title for fifth fastest supercomputer in the world.

Taking up 5,600 square feet of floor space and weighing in at over 340 tons, which is more than a commercial aircraft, Summit is a truly massive system that would easily fill two tennis courts.

Summit will allow researchers to apply machine learning to areas like high-energy physics and human health, according to ORNL.

“Summit’s AI-optimized hardware also gives researchers an incredible platform for analyzing massive datasets and creating intelligent software to accelerate the pace of discovery,” Jeff Nichols, ORNL associate laboratory director for computing and computational sciences, said.

The system is connected by 185 miles of fiber-optic cables and can store 250 petabytes of data, which is equal to 74 years of HD video.

To keep Summit from overheating, more than 4,000 gallons of water are pumped through the system every minute, carrying away nearly 13 megawatts of heat from the system.
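
Those two numbers are physically consistent. A quick check against water's heat capacity shows the stated flow only needs to warm by about 12 degrees Celsius to carry away 13 megawatts:

```python
# Sanity check: can 4,000 gallons of water per minute carry ~13 MW of heat?
GAL_TO_KG = 3.785   # one US gallon of water is about 3.785 kg
C_WATER = 4186      # specific heat of water, J/(kg*K)

flow_kg_s = 4000 * GAL_TO_KG / 60        # ~252 kg of water per second
delta_t = 13e6 / (flow_kg_s * C_WATER)   # temperature rise needed, in K
print(f"{delta_t:.1f} K rise")           # ~12.3 K, a plausible coolant delta
```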

While Summit may be the fastest supercomputer in the world for now, it is expected to be passed by Frontier, a new supercomputer slated to be delivered to ORNL in 2021 with an expected peak performance of 1 exaflop, or 1,000 petaflops.

Please like, share and tweet this article.

Pass it on: Popular Science

Kodak Brings Ektachrome Back to Life

Kodak announced Thursday it will bring back its Ektachrome film, better known as color reversal film.

In 2012, Kodak discontinued its line of color reversal films, which stood out for their fine grain, clean colors, and sharp tones and contrasts. At the time, Kodak blamed declining demand for such film.

A year later, Kodak divested its film business to Kodak Alaris, the UK-based company behind Ektachrome’s revival.

Over the next 12 months, Kodak Alaris will be remanufacturing the film at Kodak’s factory in Rochester, N.Y., with the revived film available for both motion picture and photography.

“Color reversal film is quite complicated as far as its recipe is concerned,” says Diane Carroll-Yacoby, Kodak’s worldwide portfolio manager for motion picture films.

“It’s very unique and quite different than a black-and-white film or a color negative film.

“We’re in the process right now of procuring the components that are needed for this special film and in addition to that we are setting up a color reversal processing capability again, which we have to have in order to test the film as we manufacture it.”

She adds: “It is a complicated project for us to bring it back, but because our customers are telling us that they want it, we’re very excited to do this again. It’s kind of a really special time for us.”

Please like, share and tweet this article.

Pass it on: Popular Science

Microsoft Xbox At E3 2018: New Console And Games Coming

Microsoft’s got a new Halo for you.

The hit Xbox action series starring the superhuman Master Chief in his latest adventure to save the galaxy was teased Sunday during the company’s press conference here at the Electronic Entertainment Expo.

Phil Spencer, Microsoft’s head of Xbox, said it will be the character’s “greatest adventure” yet, though the company didn’t say much more than that, nor when it will be released.

The game will be called Halo Infinite.

The new Halo was just the tip of the spear. The day also brought announcements on some 50 games and 20 exclusives designed to show the world the Xbox is the gaming device to buy, even if it’s not the most popular.

To emphasize that, the company wowed attendees at the Microsoft Theater in downtown Los Angeles with a series of announcements about plans for its most popular franchises, including the Gears of War space shooting epic and its hit Cuphead and Ori adventures games.

And if that’s not enough, Microsoft also dropped hints about its next Xbox console, saying teams are “deep into architecting” the next device, though it didn’t give a timetable for a release.

The company also said it’s building a new streaming service designed to allow gamers to play on an Xbox, PC or phone.

The message throughout all of it: Microsoft wants fans to know it hears them.

The company has been criticized for its lack of compelling and exclusive new games, something Nintendo and Sony have been successful at over the past few years.

The list of top recently released games on game-review aggregation sister site Metacritic, for example, includes Sony’s God of War epic and Nintendo’s update for Donkey Kong.

While Microsoft does have some popular exclusive games of its own, such as Halo and Gears of War, the criticism has grown louder.

Microsoft’s answer includes more exclusive games made by its own studios. “We always tell our teams to focus on the gamer,” Spencer added. “If fans ask us for exclusives and first-party titles, that’s where we’re going to focus.”

Please like, share and tweet this article.

Pass it on: Popular Science

What To Expect From Apple’s WWDC 2018 Keynote — And What Not To


Apple’s WWDC hasn’t historically been a venue associated with a flurry of hardware releases, the 2017 one notwithstanding.

Given Apple’s recent focus on software technologies in health, augmented reality, and virtual reality, there is a decent likelihood that we’ll see very little in the way of new iron.

Here’s a look at Apple’s current product lineup, minus the iPhone and Apple Watch, which will probably be updated in September, and what we’re expecting to see from each.

MacOS 10.14, iOS 12, tvOS 12, and watchOS 5 are coming

What says yes: Everything. Apple takes the opportunity it gets at WWDC to show developers, and the world, what’s coming in the next versions of its operating systems. There is absolutely nothing suggesting otherwise this year.

It’s not clear how revelatory the new versions will be. Previous rumors suggested that these updates will be about refining the existing versions rolled out last year.

But, given that High Sierra was supposed to do that to Sierra, there’s some room for discussion.

Be careful about your old apps, though. At best, 32-bit apps will have “compromises,” according to Apple, and at worst they may not run at all. It might be time to check which of the apps you rely on are, and aren’t, 64-bit.

What says no: Nothing at all. It’s basically a guarantee that the revisions are going to be presented. Like we said, they’re likely to expand on Apple’s burgeoning ambitions in users’ health, and to further extend Apple’s ARKit.

MacBook Pro

What says yes: After a gap of over a year following the 2015 MacBook Pro, Apple rolled out the 2016 MacBook Pro at the tail end of that year.

It refreshed the line in an uncharacteristic hardware bonanza at the 2017 WWDC, after less than a year in service. And, it’s been a year, so it might be time again.

The updates were relatively modest, with a slightly better CPU and GPU. It seems possible that Apple will do the same at the 2018 WWDC to hit the “back to school” period.

What says no: There isn’t a compelling engineering reason for Apple to do so today.

Instead, it could wait until later in the year or January 2019 for the Intel chipset that will allow 32GB of LPDDR4 RAM, as the existing machines can’t have more than 16GB of RAM without switching to a more power-hungry chipset.

But then again, this chipset from Intel is two years late already. Apple may not want to wait.

iPad Pro

What says yes: A slew of filings from overseas regulatory agencies suggests that new iOS devices are imminent. Couple this with the last update to the product being a year ago, and the iPad Pro line seems ripe for a refresh.

Time marches on. The 2018 sixth-generation iPad is very close to the 2017 iPad Pro lineup in speed, minus some hardware niceties. It might be time to reopen that lead with a new A11-based processor in the iPad Pro.

What says no: Generally, before a new model arrives, we’ve seen suggestions from the supply chain and rumors popping up beyond regulatory agency filings.

This year, there’s been none of that, and a recent report seems to suggest the same.

Please like, share and tweet this article.

Pass it on: Popular Science

Facebook Researchers Used AI To Create A Universal Music Translator

Is Facebook pumping up the volume on what AI can mean to the future of music? You can decide after having a look at what Facebook AI Research scientists have been up to.

A number of sites, including The Next Web, have reported that the team unveiled a neural network capable of translating music from one style, genre, and set of instruments to another.

You can check out their paper, “A Universal Music Translation Network,” by Noam Mor, Lior Wolf, Adam Polyak, and Yaniv Taigman of Facebook AI Research.

A video of the authors’ supplementary audio samples lets you hear what they did, with material ranging from symphony and string quartet to sounds of Africa, Elvis and Rihanna samples, and even human whistling.

In one example, they said they converted the audio of a Mozart symphony performed by an orchestra to an audio in the style of a pianist playing Beethoven.

Basically, a neural network has been put to work to change the style of music. Listening to the samples, one wonders what the AI process is like: how does it figure out how to carry the music from one work to another?

Does it involve matched pitch? Memorizing musical notes? The Next Web’s Greene said no: the approach is an “unsupervised learning method” using “high-level semantics interpretation.”

Greene added that you could say “it plays by ear.” The method is unsupervised, in that it does not rely on supervision in the form of matched samples between domains or musical transcriptions, said the team.

Greene put it another way, explaining that this is “a complex method of auto-encoding that allows the network to process audio from inputs it’s never been trained on.”
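
The paper describes a single shared encoder feeding a separate decoder for each musical domain, trained autoencoder-style on raw audio. The real system uses WaveNet-scale networks; the toy PyTorch sketch below only shows the shape of the idea, with illustrative layer sizes and made-up domain names.

```python
# Toy sketch of the shared-encoder / per-domain-decoder shape described in
# "A Universal Music Translation Network". Sizes and names are illustrative;
# the actual model is built from WaveNet-style autoencoders.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Encodes audio from ANY domain into a domain-agnostic representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, 9, stride=4, padding=4), nn.ReLU(),
            nn.Conv1d(32, 64, 9, stride=4, padding=4), nn.ReLU(),
        )

    def forward(self, audio):  # audio: (batch, 1, samples)
        return self.net(audio)

class DomainDecoder(nn.Module):
    """One decoder per output domain (piano, strings, whistling, ...)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose1d(64, 32, 8, stride=4, padding=2), nn.ReLU(),
            nn.ConvTranspose1d(32, 1, 8, stride=4, padding=2), nn.Tanh(),
        )

    def forward(self, latent):
        return self.net(latent)

encoder = SharedEncoder()
decoders = {name: DomainDecoder() for name in ["piano", "strings", "whistling"]}

def translate(audio, target_domain):
    """Re-render the input audio in the style of the target domain."""
    return decoders[target_domain](encoder(audio))

# e.g. re-render one second of 16 kHz audio as "piano"
out = translate(torch.randn(1, 1, 16000), "piano")
print(out.shape)  # torch.Size([1, 1, 16000])
```

Because every domain passes through the one encoder, the latent code tends to capture the musical content, while each decoder supplies the timbre and style of its own domain.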

In the bigger picture, this attempt to translate styles and instruments is another sure sign of the intersection being crossed between AI and music, one that could change our pejorative view of “machine” music as inferior and canned.

Please like, share and tweet this article.

Pass it on: Popular Science

What Was the First Internet Meme?

Today’s internet is a churning sea of funny images, remixes of those images, parodies of those remixes of those images, and cat videos.

But before there was the Y U NO Guy or the Ermagerd girl, there was an animation that would come to change the internet forever. It was called Baby Cha-Cha-Cha, also known as The Dancing Baby.

Hooked on a Feeling

In 1996, Autodesk animator Michael Girard and his colleague Robert Lurye wanted to show that you could program and direct the same movement with different characters using computers.

They designed an animation of a baby doing a series of complicated cha-cha dance moves. Autodesk soon sent the animation around to other development and animation companies, presumably to show off.

When it got to LucasArts, developer Ron Lussier made a few changes to the original file and turned it into an animated GIF, which he sent to a few coworkers…who sent it to friends, who sent it to friends, and on and on.

That GIF soon took the pop culture world by storm, showing up in advertisements, on merchandise, and on TV shows, including a few infamous episodes of “Ally McBeal“.

Creepy Is As Creepy Does

When it comes to internet memes, evolution is key: a meme can’t just be popular; it has to change. Variations on The Dancing Baby have appeared on “The Simpsons“, danced to “Gangnam Style“, and shown up on an endless array of products.

Girard told Great Big Story that he thinks the animation’s popularity comes down to its creepy factor. It resides in the uncanny valley, that perceptual spot where it almost looks human, but not quite.

It’s the kind of nightmarish, dead-eyed thing you’d drop-kick if you saw it in real life.

So how does the internet’s original Dr. Frankenstein feel about his creation today? Girard was asked if he regretted making The Dancing Baby. His answer? “Yes. 100 percent.”

Please like, share and tweet this article.

Pass it on: Popular Science

Changing Lanes Is Simple For Human Drivers. Not So For Autonomous Cars.

A driver sits engrossed in her laptop screen, catching up on emails as the car barrels down the highway. In the next lane, a father helps his kids finish homework while their vehicle swiftly changes lanes.

Nearby, an empty car returns home after dropping off its owner.

These are the self-driving cars in which humans could be mindlessly commuting in as few as five years, some ambitious estimates claim.

“It’s a highly disruptive technology that’s coming on a lot faster than people expect,” says Barrie Kirk, executive director of the Canadian Automated Vehicles Centre of Excellence.

He helps governments and companies prepare for the advent of automated vehicles.

Many automakers and tech firms have already entered the driverless car manufacturing game. Now it’s a race to perfect the technology and start selling these Knight Rider-style vehicles.

Companies hype the cars as the best safety feature since seatbelts and airbags, but there’s a sense that phasing driverless cars onto public roads may be anything but a smooth transition.

Self-driving car advocates, like Kirk, believe in the technology’s potential to save thousands of lives.

“Humans, generally, are poor drivers,” he says. He would like to see human drivers banned from roads to make room for an all-automated-vehicle world.

Drivers’ mistakes are responsible for more than 90 per cent of crashes, the U.S. National Highway Traffic Safety Administration found.

Kirk hopes automated vehicles can eliminate 80 per cent of such collisions — a number often cited by advocates.

In 2012, 2,077 people died in car crashes on Canadian roads, according to Transport Canada. If Kirk’s estimate holds, about 1,500 of those victims could have avoided an accident.

“If you’ve got a whole bunch of sensors that give you a 360-degree scan, 30 times a second,” he says, “humans cannot come anywhere close to that.”

There will be time to adjust before the new fleet of robot cars takes over roads.

“We’re not going to be in a situation where we go from no automation to fully autonomous or self-driving vehicles,” says David Adams, president of the Global Automakers of Canada.

Some people already own low-level autonomous vehicles, like ones that parallel park once the driver has properly aligned the car. Some U.K. cities have started experimenting with low-speed self-driving shuttles on closed streets.

Even if safety is somewhat disputed, there are other potential benefits that can make the pursuit of these cars worth it.

Seniors, disabled people and others unable to drive will gain mobility. Families may need to own fewer cars if vehicles can travel empty to pick up and drop off family members.

Cities may require fewer parking spaces if cars can return home after dropping off owners.

But to see all those benefits and ensure safety isn’t compromised, these cars must be carefully brought into the public realm, says automated-vehicle researcher Steven Shladover.

“It has to be done in a sensible way.”

Please like, share and tweet this article.

Pass it on: Popular Science