Tag: artificial intelligence

Google’s AI Found An Overlooked Exoplanet

NASA has discovered an eighth planet around a distant star, which means our solar system no longer holds the record for the most known planets on its own: Kepler-90 now ties it at eight.

The discovery was made thanks to some artificial intelligence help from Google, which found the planet by scouring previously overlooked “weak” signals in data captured by the Kepler Space Telescope.

The newly found planet is located in the solar system around Kepler-90, a star about 2,500 light-years from Earth whose planetary system was first discovered in 2014.

The Kepler Space Telescope has been searching the galactic sky for exoplanets, or planets outside our own Solar System, since it launched in 2009.

In order to sift through all the data that it’s captured since that launch, scientists usually look at the strongest signals first.




And that process has worked well enough so far. NASA has confirmed 2,525 exoplanets in that time, a number that has changed our understanding of how common it is to find planets around the stars that make up our galaxy.

Recently, though, artificial intelligence has become a more prominent tool in astronomy.

Scientists, including ones who work on the Kepler data, have increasingly turned to machine learning to help sort through typically lower-priority data to see what they might have missed.

In the process, they found an overlooked planet that’s now named Kepler-90i.

But while we now know that Kepler-90 has the same number of orbiting planets as our Sun, the system is a poor candidate in the search for extraterrestrial life, or at least life as we know it.

Kepler-90 is about 20 percent bigger and 5 percent warmer than our Sun. And its eight planets dance around the star in much closer orbits than the ones in our own Solar System.

In fact, their orbits are so comparatively small that seven of Kepler-90’s eight planets would fit in between the Earth and the Sun.

The discovery of Kepler-90i came after NASA let Google train its machine learning algorithms on 15,000 signals from potential planets in the Kepler database.

The scientists then took the trained system and set it to work on data from 670 stars that were already known to have multiple planets, as they considered those to be the most likely hiding places.
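For a sense of what "train on labeled signals, then score the weaker ones" means in practice, here is a minimal, hypothetical sketch. The features, labels, and logistic-regression model are illustrative assumptions only; the actual system used a deep neural network trained on Kepler light curves.

```python
# Illustrative sketch only: the real system used a convolutional neural
# network on folded Kepler light curves; this toy version uses made-up
# summary features and a simple logistic-regression classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend features per candidate signal: [transit depth, duration, signal-to-noise]
X_train = rng.normal(size=(15_000, 3))  # stand-in for the 15,000 labeled signals
y_train = (X_train[:, 2] + rng.normal(size=15_000) > 0.5).astype(int)  # toy labels

model = LogisticRegression().fit(X_train, y_train)

# Score previously overlooked "weak" signals and surface the most planet-like ones.
X_weak = rng.normal(size=(670, 3))  # stand-in for candidates from the 670 target stars
scores = model.predict_proba(X_weak)[:, 1]
top = np.argsort(scores)[::-1][:10]
print("Most promising overlooked candidates:", top, scores[top])
```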

The newly discovered planet in Kepler-90 and one other found in the Kepler-80 system, both announced today, are the first that NASA was able to confirm from these new results from Google’s AI.

The inclusion of machine learning in this process shouldn’t scare humans whose livelihood revolves around discovering and studying exoplanets, according to Chris Shallue, a senior Google AI software engineer who worked on the project.

“What we’ve developed here is a tool to help astronomers have more impact,” Shallue said on a conference call about the news.

“It’s a way to increase the productivity of astronomers. It certainly won’t replace them at all.”


Drone Race: Human Versus Artificial Intelligence

JPL engineers recently finished developing three drones and the artificial intelligence needed for them to navigate an obstacle course by themselves.

In October, NASA’s California-based Jet Propulsion Laboratory pitted a drone controlled by artificial intelligence against a professional human drone pilot named Ken Loo.

According to NASA’s press release, JPL had by then been researching autonomous drone technology for two years, with funding from Google, which was interested in JPL’s vision-based navigation work.

The race consisted of a time-trial where the lap times and behaviors of both the A.I.-operated drone and the manually-piloted drone were analyzed and compared. Let’s take a look at the results.

NASA said in its release that JPL developed three drones: Batman, Joker, and Nightwing.

Researchers focused mostly on the intricate algorithms required to navigate efficiently through a race like this, namely obstacle avoidance and maximum speed through narrow environments.




These algorithms were then combined with Google’s Tango technology, which JPL had a significant hand in as well.

Rob Reid, task manager of the JPL project, said: “We pitted our algorithms against a human, who flies a lot more by feel.”

“You can actually see that the A.I. flies the drone smoothly around the course, whereas human pilots tend to accelerate aggressively, so their path is jerkier.”

As it turned out, Loo’s speeds were much higher, and he was able to perform impressive aerial maneuvers to his benefit, but the A.I.-infused drones were more consistent, and never gave in to fatigue.

“This is definitely the densest track I’ve ever flown,” said Loo. “One of my faults as a pilot is I get tired easily. When I get mentally fatigued, I start to get lost, even if I’ve flown the course 10 times.”

Loo averaged 11.1 seconds per lap, while the autonomous unmanned aerial vehicles averaged 13.9 seconds.

In other words, while Loo managed to reach higher speeds overall, the drones operating autonomously were more consistent, essentially flying a very similar lap and route each time.
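As a rough illustration of that speed-versus-consistency trade-off, the snippet below compares invented lap times whose averages match the reported figures; the individual times are assumptions for illustration only.

```python
# Toy comparison of speed vs. consistency using made-up lap times whose
# means match the reported averages (11.1 s human, 13.9 s autonomous).
import statistics

human_laps = [9.8, 10.5, 11.4, 12.7]   # faster on average, but more variable
ai_laps = [13.8, 13.9, 14.0, 13.9]     # slower, but nearly identical every lap

for name, laps in [("human", human_laps), ("AI", ai_laps)]:
    print(f"{name}: mean={statistics.mean(laps):.1f}s "
          f"stdev={statistics.stdev(laps):.2f}s")
```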

“Our autonomous drones can fly much faster,” said Reid. “One day you might see them racing professionally!”

Of that latter statement, there’s certainly no doubt.

A future where companies like Google and NASA square off in public arenas where their autonomous drones compete against one another is definitely plausible.

It wouldn’t be shocking to see such an event televised, either, as we’re already seeing similar results with the Drone Racing League.


A Smart City In China Uses AI To Track Every Movement Of Citizens

Chinese e-commerce giant Alibaba Group Holding Limited is aiding the Chinese police state in catching people who break the law, tracking criminals in real time in their new “smart city” of Hangzhou, home to 9 million people.

They are using video feeds and artificial intelligence, tracking things as petty as illegal parking in real time, putting the city under total surveillance.

Using hundreds of thousands of cameras located across the city, together with artificial intelligence, the system can do a great deal, though for the people who control the city rather than for its residents.

However, the police state implications are the last thing to be mentioned by mainstream science articles covering the issue.

Many are falling for it, because if you disregard the immorality of the Chinese government and its laws, and the danger of total surveillance, traffic congestion is allegedly down and other aspects of city life are allegedly more efficient now.




But does efficiency equal happiness for the people, or more profit for those who control the people?

“The stated goal was to improve life in Hangzhou by letting artificial intelligence process this data and use it to control aspects of urban life.”

“It seems to have worked. The trial has been so successful that the company is now packaging the system for export to other places in China – and eventually the rest of the world.”

“Using AI to optimise Hangzhou has had many positive effects. Traffic congestion is down, road accidents are automatically detected and responded to faster, and illegal parking is tracked in real time.”

“If someone breaks the law, they too can be tracked throughout the city before being picked up by the police.”

If everything in your city were this tightly controlled, could you be happy? Could anybody be happy in a “smart city”?

Efficiency does not equal happiness. Life is not improved by efficiency, or even money necessarily. Human happiness cannot possibly be acquired at the expense of everything that a “smart city” would destroy.

Invasive laws in many countries are tolerable now because they can be broken without consequence. If you smoked cannabis illegally, for example, and were caught immediately every time you tried, would you be happy?

The founder of the company creating this “city brain project” is certainly not subject to the same surveillance that the residents of Hangzhou are.

He’s living it up, a billionaire who is trying to become a movie star. Recently headlines about Alibaba founder Jack Ma read “Billionaire Alibaba founder Jack Ma is going to be a movie star next. Literally.”

An executive from this corporation had the audacity to speak of privacy as if it were some trivial, silly thing that only paranoid people need.

“In China, people have less concern with privacy, which allows us to move faster,” said Xian-Sheng Hua, manager of artificial intelligence at Alibaba, speaking at World Summit AI recently.


Adobe And Stanford Just Taught AI To Edit Better Videos Than You

Just one minute of video typically takes several hours of editing — but Stanford and Adobe researchers have developed an artificial intelligence (AI) program that partially automates the editing process, while still giving the user creative control over the final result.

The program starts by organizing all of the footage, which is often from multiple takes and camera angles. Those clips are matched to the script, so it’s easy to find several video options for each line of dialogue.

The program then works to recognize exactly what is inside those clips. Using facial recognition alongside emotion recognition and other computational imaging effects, the program determines what is in each frame.




For example, the program flags whether the shot is a wide-angle or a close-up and which characters the shot includes.

With everything organized, the video editor then instructs the program on how the video should be edited, using different styles and techniques the researchers call idioms.

For example, a common style is to show the face of the character during their lines. If the editor wants that to happen, he or she just drags that idiom over.

The idioms can also be negative. For example, the idiom “avoid jump cuts” can be applied to avoid them, or inverted to intentionally add jump cuts wherever possible.

The editor can drag over multiple idioms to instruct the program on an editing style.

In a video demonstrating the technology, the researchers created a cinematic edit by using idioms that tell the software to keep the speaker visible while talking, to start with a wide-angle shot, to mix with close-ups and to avoid jump cuts.

To edit the video in a completely different, fast-paced style, the researchers instead dragged over idioms for including jump cuts, favoring fast-paced performances, and keeping the zoom consistent.

Editing styles can be saved to recall later, and with the idioms in place, a stylized video edit is generated with a click. Alternative clips are arranged next to the computer’s edit so editors can quickly adjust if something’s not quite right.
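To give a flavor of how composable idioms might work, here is a hypothetical sketch in which each idiom scores candidate clips and the edit keeps the best-scoring take per line of dialogue. The clip representation and scoring scheme are invented for illustration; they are not the researchers' actual implementation.

```python
# Hypothetical sketch: each idiom scores a candidate clip for a line of
# dialogue, and the edit picks the highest-scoring clip per line.
from dataclasses import dataclass

@dataclass
class Clip:
    speaker_visible: bool
    shot: str            # "wide" or "close-up"
    jump_cut: bool       # True if cutting here would produce a jump cut

def speaker_visible(clip: Clip) -> int:
    return 1 if clip.speaker_visible else 0

def avoid_jump_cuts(clip: Clip) -> int:
    return 1 if not clip.jump_cut else 0

def pick_clip(candidates: list[Clip], idioms) -> Clip:
    # Sum the scores from every active idiom; ties keep the first take.
    return max(candidates, key=lambda c: sum(idiom(c) for idiom in idioms))

takes = [Clip(True, "close-up", False), Clip(False, "wide", False),
         Clip(True, "wide", True)]
best = pick_clip(takes, [speaker_visible, avoid_jump_cuts])
print(best)
```

Inverting an idiom, as the article describes, would then just mean negating its score before summing.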

The program speeds up video editing using artificial intelligence, but also allows actual humans to set the creative parameters in order to achieve a certain style.

The researchers did acknowledge a few shortcomings of the program. The system is designed for dialogue-based videos, and further work would need to be done for the program to work with other types of shots, such as action shots.

The program also couldn’t prevent continuity errors, where an actor’s hands or a prop end up in a different position from one clip to the next.

The study, conducted by Stanford University and Adobe Research, appears in the July issue of ACM Transactions on Graphics.


GPUs: The Unsung Heroes That Are Accelerating The Development Of AI

Today, a self-driving car powered by AI can meander through a country road at night and find its way to its destination. An AI-powered robot can learn motor skills through trial and error.

And artificial neural networks can analyse images to identify the early signs of disease more accurately than any human. Truly, we are living in an extraordinary time.

Back in 1995, the convergence of low-cost microprocessors (CPUs), a standard operating system (Windows 95), and a new portal to a world of information (Yahoo!) sparked the PC-Internet era.




It brought the power of computing to about a billion people and realized Microsoft’s vision to put ‘a computer on every desk and in every home.’

A decade later, the iPhone put an ‘Internet communications’ device in our pockets. Coupled with the launch of Amazon’s AWS, the Mobile-Cloud era was born.

A world of apps entered our daily lives and some 3 billion people now enjoy the freedom that mobile computing can afford.

The AI computing era that we are living in today is driven by a new computing model, GPU-accelerated deep learning.

The graphics processing unit or GPU was originally invented to drive 3D graphics in video games.

But, because of its ability to handle large amounts of data at the same time, known as parallel processing, this tiny piece of silicon punches well above its weight.

Several years ago, researchers discovered that the GPU’s parallel processing power makes it perfectly suited to crunching the huge amounts of data required to train the artificial neural networks on which deep learning is based.
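To see why, consider that the heart of a neural network layer is a large matrix multiply, where every output element can be computed independently of the others. The NumPy sketch below is only a schematic of that data-parallel structure; a real GPU framework would execute the same arithmetic across thousands of cores at once.

```python
# A neural-network layer is essentially y = activation(W @ x + b).
# Every output element is independent of the others, so a GPU can compute
# them all at once; NumPy here just shows the shape of the parallel work.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(2048, 2048))   # layer weights
x = rng.normal(size=(2048, 256))    # a batch of 256 inputs
b = rng.normal(size=(2048, 1))      # bias, broadcast across the batch

y = np.maximum(W @ x + b, 0.0)      # ReLU(W @ x + b): ~1 billion independent multiply-adds
print(y.shape)                      # (2048, 256)
```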

The ‘big bang’ of AI was ignited and GPU-accelerated deep learning was born.

Since then, the world has woken up to the power of GPU-accelerated deep learning. Baidu, Google, Facebook and Microsoft were the first companies to adopt it for pattern recognition.

Now, AI researchers everywhere across a wide variety of fields have turned to GPU-accelerated deep learning to advance their work.

GPU-accelerated deep learning is being applied to solve challenges in every industry around the world. Self-driving cars will transform the $10 trillion transportation industry.

In healthcare, doctors will use AI to detect disease at the earliest possible moment, to understand the human genome and tackle cancer, or to learn from the massive volume of medical data to recommend the best treatments.

And AI will usher in the fourth industrial revolution: intelligent robotics will drive a new wave of productivity improvements and enable mass consumer customization.

AI will touch everyone.


Artificial Intelligence Can Stop Electricity Theft And Meter Misreading

Many countries, including Brazil, India, Bangladesh, and the UK, have to deal with an electricity crisis. There could be many potential reasons behind this, but one of the major ones is electricity theft.

According to one study, the UK loses more than 400 million pounds every year to such malpractice. Many people try to save money by stealing electricity.

This kind of practice is most common in rural areas and in some big factories and mills. But the problem could soon be solved by a team of software developers in Brazil.




The developers tested an AI algorithm on several households, and the results were very promising.

The developers are not only aiming to stop electricity theft; they also want to extract crucial information, such as peak electricity usage and the general trend in consumption over the course of the year.

All this information could be very useful for local governments and could help them tackle the electricity crisis.

The algorithm can recognize when energy use at a property is suspiciously low. The developers used the past five years of electricity consumption data to train the AI.

The algorithm weighs that past data against current consumption. This could help better target physical inspections of properties, which would otherwise be a laborious process.
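A minimal sketch of that comparison, assuming simple per-property consumption readings; the z-score rule and the threshold below are illustrative stand-ins for the developers' actual model, which the article does not detail.

```python
# Flag a property when its current consumption is suspiciously low
# relative to its own multi-year history (illustrative approach only).
import statistics

def suspiciously_low(history_kwh: list[float], current_kwh: float,
                     threshold: float = -2.0) -> bool:
    mean = statistics.mean(history_kwh)
    stdev = statistics.stdev(history_kwh)
    z = (current_kwh - mean) / stdev
    return z < threshold   # far below the property's normal usage

history = [310, 295, 330, 305, 320, 315, 300, 325, 310, 305]  # past readings (kWh)
print(suspiciously_low(history, 120))  # True: candidate for physical inspection
print(suspiciously_low(history, 300))  # False: within the normal range
```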

According to the researchers, the current algorithm detects the malpractice with a 65% accuracy rate.

Now, researchers are aiming to implement this technology in commercial software that will be used in Latin America.

If the algorithm works in real life as the researchers suggest, it could help developing countries boost their development rate.

It would also mean that countries could spend the saved money on other priorities, such as education and healthcare.


Facial Recognition Software Can Now Identify People Even If Their Face Is Covered!

A facial recognition system can identify someone even if their face is covered up.

The Disguised Face Identification (DFI) system uses an AI network to map facial points and reveal the identity of people.

It could eventually help to pick out criminals, protesters, or anyone who hides their identity by covering themselves with masks, scarves or sunglasses.

The software could also see the end of public anonymity, sparking privacy concerns from one academic, who has labelled it ‘authoritarian‘.




“This is very interesting for law enforcement and other organisations that want to capture criminals,” said Amarjot Singh, a researcher at the University of Cambridge who worked on DFI. “The potential applications are beyond imagination.”

Led by Mr Singh, the international team of scientists published their research on the pre-print server arXiv.

DFI uses a deep-learning AI neural network that the team trained by feeding it images of people using a variety of disguises to cover their faces.

The images had a mixture of complex and simple backgrounds to challenge the AI in a variety of scenarios.

The AI identifies people by measuring the distances and angles between 14 facial points – ten for the eyes, three for the lips, and one for the nose.

It uses these readings to estimate the hidden facial structure, and then compares this with learned images to unveil the person’s true identity.
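As a rough illustration of keypoint-based matching, the sketch below turns 14 facial points into a scale-invariant signature of pairwise distances and compares two faces. The coordinates and the cosine-similarity rule are illustrative assumptions; the actual DFI system feeds the points into a trained neural network.

```python
# Build a simple signature from 14 (x, y) facial keypoints by taking all
# pairwise distances, then compare two faces (illustrative only; the DFI
# paper uses a trained network rather than raw distance matching).
import numpy as np

def signature(points: np.ndarray) -> np.ndarray:
    # points: (14, 2) array of keypoint coordinates
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(points), k=1)
    vec = dists[iu]                      # 91 pairwise distances
    return vec / np.linalg.norm(vec)     # normalized, so roughly scale-invariant

rng = np.random.default_rng(2)
face_a = rng.uniform(size=(14, 2))
face_b = face_a + rng.normal(scale=0.01, size=(14, 2))  # same face, slight noise

similarity = signature(face_a) @ signature(face_b)
print(f"cosine similarity: {similarity:.3f}")  # close to 1.0 for a likely match
```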

In early tests, the algorithm correctly identified people whose faces were covered by hats or scarves 56 per cent of the time.

This accuracy dropped to 43 per cent when people were also wearing glasses. The work is still in its early stages, and the algorithm needs to be fed more data before it can be brought into the field.

Despite these hurdles, Mr Singh told Inverse: “We’re close to implementing it practically.”

The DFI team have called on other researchers to help develop the technology using their datasets of covered and uncovered faces.

The research, which has not yet been peer reviewed and is still awaiting publication, has sparked controversy after some raised concerns over privacy rights.

Dr. Zeynep Tufekci, a sociologist at the University of North Carolina, posted the research to Twitter, claiming that the AI is “authoritarian”.

She tweeted: “The authors claim the system works about half the time even when people wear glasses. And this is just the beginning; first paper.”

“And this is maybe the third or fourth most worrying ML paper I’ve seen recently re: AI and emergent authoritarianism. Historical crossroads.”

“Yes, we can & should nitpick this and all papers but the trend is clear. Ever-increasing new capability that will serve authoritarians well.”

The DFI team will present their research at the IEEE International Conference on Computer Vision Workshop in Venice, Italy, next month.


Facebook Is Using AI To Make Language Translation 9 Times Faster


Artificially intelligent systems are only getting better, and they’re likely to appear in our computers and on our phones more and more often over the next few years.

Facebook has been using artificial intelligence and machine learning for various things, like its M digital assistant, but now the company is turning to AI for another purpose: translation.




Facebook’s research team has published a report showing that its AI-based approach is a hefty nine times faster than traditional language translation software.

Not only that, but the researchers have revealed that the source code for the translation software is open-source, so anyone can get their hands on it to verify the results.

The report highlights the use of convolutional neural networks (CNN) as opposed to recurrent neural networks (RNN), which translate sentences one word at a time in a linear order.


The new architecture, however, can take words further down in the sentence into consideration during the translation process, which helps make the translation far more accurate.

This actually marks the first time a CNN has managed to outperform an RNN in language translation, and Facebook now hopes to expand it to cover more languages.

“Language translation is important to Facebook’s mission of making the world more open and connected, enabling everyone to consume posts or videos in their preferred language all at the highest possible accuracy and speed,” said the company in a blog post.


Convolutional neural networks aren’t a totally new technology, but they haven’t really been applied to translation before.

As a result of the new tech, Facebook can compute different aspects of a sentence at the same time, and can therefore train its systems using a lot less computational power, which in turn results in faster translation.
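To make the architectural contrast concrete, here is a small PyTorch sketch; the dimensions are arbitrary and this is a schematic of the idea, not Facebook's released model.

```python
# An RNN must process tokens one after another; a 1D convolution sees a
# whole window of tokens and computes every position at once (a schematic
# of the idea behind the convolutional translation approach).
import torch
import torch.nn as nn

batch, seq_len, dim = 8, 32, 256
embedded = torch.randn(batch, seq_len, dim)   # embedded source sentence

# Sequential baseline: each hidden state depends on the previous step.
rnn = nn.GRU(dim, dim, batch_first=True)
rnn_out, _ = rnn(embedded)                    # inherently step-by-step

# Convolutional encoder: kernel_size=3 lets each output see a window of
# neighboring tokens, and all positions are computed in parallel.
conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
conv_out = conv(embedded.transpose(1, 2)).transpose(1, 2)

print(rnn_out.shape, conv_out.shape)          # both (8, 32, 256)
```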

The system is also open source, meaning translation should get better across the web — not just in Facebook’s offerings.


The Fermi Paradox, Cyborgs, And Artificial Intelligence – My Interview With Isaac Arthur

In this week’s live stream, I’m going to share clips of my interview with Isaac Arthur; you can find the full version on the Answers With Joe Podcast:
http://answerswithjoe.com/fermi-para…

Support me on Patreon!
http://www.patreon.com/answerswithjoe

Follow me at all my places!
Instagram: https://instagram.com/answerswithjoe
Snapchat: https://www.snapchat.com/add/answersw…
Facebook: http://www.facebook.com/answerswithjoe
Twitter: https://www.twitter.com/answerswithjoe

Isaac Arthur runs the YouTube channel Science and Futurism With Isaac Arthur, where he goes into incredibly deep dives on subjects like megastructures, future space colonies, aliens, and little things like farming black holes (like you do). Here we touch on a few of those topics and do a little shop talk about life as YouTubers.

If you enjoy this episode, check out Isaac’s channel at www.isaacarthur.net