
Let Gmail Finish Your Sentences

Google’s new machine-learning tools for its mail service can save you time and typos — as long as you are comfortable sharing your thoughts with the software.

In theory, the Smart Compose tool can speed up your message composition and cut down on typographical errors.

While “machine learning” means that software (and not a human) scans your work in progress to feed the predictive text function, you are still sharing information with Google when you use its products.

If you have already updated to the new version of Gmail, you can try out Smart Compose by going to the General tab in Settings and turning on the check box next to “Enable experimental access.”

Next, click Save Changes at the bottom of the Settings screen.

When you return to the General tab of the Gmail settings, scroll down to the newly arrived Smart Compose section and confirm that “Writing suggestions on” is enabled.

If you do not care for Google’s assistance after sampling the feature, you can return to the settings and click “Writing suggestions off” to disable Smart Compose.

Once you enable it in the settings, Gmail’s new Smart Compose feature can finish your sentences for you as you type.

The new feature is available only for English composition at the moment, and a disclaimer from Google warns: “Smart Compose is not designed to provide answers and may not always predict factually correct information.”

Google also warns that experimental tools like Smart Compose are still under development and that the company may change or remove the features at any time.

Please like, share and tweet this article.

Pass it on: Popular Science

NASA’s TESS Shares First Science Image in Hunt to Find New Worlds

NASA’s newest planet hunter, the Transiting Exoplanet Survey Satellite (TESS), is now providing valuable data to help scientists discover and study exciting new exoplanets, or planets beyond our solar system.

Part of the data from TESS’ initial science orbit includes a detailed picture of the southern sky taken with all four of the spacecraft’s wide-field cameras.

This “first light” science image captures a wealth of stars and other objects, including systems previously known to have exoplanets.

“In a sea of stars brimming with new worlds, TESS is casting a wide net and will haul in a bounty of promising planets for further study,” said Paul Hertz, astrophysics division director at NASA Headquarters in Washington.

“This first light science image shows the capabilities of TESS’ cameras, and shows that the mission will realize its incredible potential in our search for another Earth.”

TESS acquired the image using all four cameras during a 30-minute period on Tuesday, Aug. 7. The black lines in the image are gaps between the camera detectors.

The images include parts of a dozen constellations, from Capricornus to Pictor, and both the Large and Small Magellanic Clouds, the galaxies nearest to our own.

The small bright dot above the Small Magellanic Cloud is a globular cluster — a spherical collection of hundreds of thousands of stars — called NGC 104, also known as 47 Tucanae because of its location in the southern constellation Tucana, the Toucan.

Two stars, Beta Gruis and R Doradus, are so bright they saturate an entire column of pixels on the detectors of TESS’s second and fourth cameras, creating long spikes of light.

“This swath of the sky’s southern hemisphere includes more than a dozen stars we know have transiting planets based on previous studies from ground observatories,” said George Ricker, TESS principal investigator at the Massachusetts Institute of Technology’s (MIT) Kavli Institute for Astrophysics and Space Research in Cambridge.

TESS’s cameras, designed and built by MIT’s Lincoln Laboratory in Lexington, Massachusetts, and the MIT Kavli Institute, monitor large swaths of the sky to look for transits.

Transits occur when a planet passes in front of its star as viewed from the satellite’s perspective, causing a regular dip in the star’s brightness.
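
The depth of that dip follows from simple geometry: the fraction of starlight blocked is roughly the ratio of the planet’s disk area to the star’s. As a rough back-of-the-envelope illustration (our own arithmetic, not from NASA’s release), here it is in Python:

```python
# Approximate transit depth: the fractional dip in brightness is about
# (planet radius / star radius) squared.
R_SUN_KM = 695_700     # radius of a Sun-like star
R_EARTH_KM = 6_371     # radius of an Earth-sized planet
R_JUPITER_KM = 69_911  # radius of a Jupiter-sized planet

def transit_depth(planet_radius_km, star_radius_km=R_SUN_KM):
    """Fraction of the star's light blocked during a transit."""
    return (planet_radius_km / star_radius_km) ** 2

print(f"Earth-sized planet:   {transit_depth(R_EARTH_KM):.6f}  (~0.008%)")
print(f"Jupiter-sized planet: {transit_depth(R_JUPITER_KM):.4f}  (~1%)")
```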

TESS will spend two years monitoring 26 such sectors for 27 days each, covering 85 percent of the sky. During its first year of operations, the satellite will study the 13 sectors making up the southern sky.

Then TESS will turn to the 13 sectors of the northern sky to carry out a second year-long survey.

MIT coordinates with Northrop Grumman in Falls Church, Virginia, to schedule science observations. TESS transmits images every 13.7 days, each time it swings closest to Earth.

NASA’s Deep Space Network receives and forwards the data to the TESS Payload Operations Center at MIT for initial evaluation and analysis.

Full data processing and analysis takes place within the Science Processing and Operations Center pipeline at NASA’s Ames Research Center in Silicon Valley, California, which provides calibrated images and refined light curves that scientists can analyze to find promising exoplanet transit candidates.

TESS builds on the legacy of NASA’s Kepler spacecraft, which also uses transits to find exoplanets. TESS’s target stars are 30 to 300 light-years away and about 30 to 100 times brighter than Kepler’s targets, which are 300 to 3,000 light-years away.

The brightness of TESS’ targets makes them ideal candidates for follow-up study with spectroscopy, the study of how matter and light interact.

Please like, share and tweet this article.

Pass it on: Popular Science

How Google Will Use High-Flying Balloons To Deliver Internet To The Hinterlands

Project Loon’s balloons sail through the stratosphere, where winds blow in distinct layers, each with its own speed and direction.

Using wind data from the National Oceanic and Atmospheric Administration (NOAA), the balloons are maneuvered by identifying the wind layer with the desired speed and direction and then adjusting altitude to float in that layer.
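
Google hasn’t published its steering code, but the basic idea (find the wind layer blowing in the direction you want to drift, then climb or descend into it) can be sketched in a few lines of Python. The layer data below is invented purely for illustration:

```python
# Hypothetical stratospheric wind layers: (altitude_m, wind_bearing_deg, speed_m_s).
# Real Loon navigation used NOAA forecast data; these numbers are made up.
wind_layers = [
    (18_000,  90, 12.0),
    (19_500, 180,  8.0),
    (21_000, 250, 15.0),
    (22_500, 320,  6.0),
]

def bearing_difference(a_deg, b_deg):
    """Smallest absolute difference between two compass bearings."""
    return abs((a_deg - b_deg + 180) % 360 - 180)

def choose_altitude(desired_bearing_deg):
    """Pick the altitude whose wind direction best matches the desired drift."""
    best_layer = min(wind_layers,
                     key=lambda layer: bearing_difference(layer[1], desired_bearing_deg))
    return best_layer[0]

print(choose_altitude(260))  # 21000: ride the layer blowing toward 250 degrees
```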

The Project Loon team prepared for launch in the pre-dawn frost near Lake Tekapo, New Zealand.

Solar panels and insulated electronics packages, prepared for launch. It takes 4 hours for the solar panels to charge the battery during the day, and that power is sufficient to keep all the flight systems working 24 hours a day.

A fully-inflated balloon envelope at Moffett Field, California. The balloons are 15m in diameter when fully inflated, but they do not inflate until they’ve reached float altitude in the stratosphere.

Project Loon team members Paul Acosta and Peter Capraro placed red balloons near the launch site at sunrise. The balloons were used as a rough indicator of wind direction and speed just above ground level.

Please like, share and tweet this article.

Pass it on: Popular Science

Chrome Gets A New Look And Feel For Its 10th Birthday

It’s been ten years since Google first launched Chrome. At the time, Google’s browser was a revelation.

Firefox had gotten slow, Internet Explorer was Internet Explorer, and none of the smaller challengers, maybe with the exception of Opera, ever got any significant traction.

But here was Google, with a fast browser that was built for the modern web.

Now, ten years later, Google is the incumbent and Chrome is getting challenged both from a technical perspective, thanks to a resurgent Firefox, and by a wave of anti-Google sentiment.

But Google isn’t letting that get in the way of celebrating Chrome’s anniversary.

To mark the day, the company today officially launched its new look for Chrome and previewed what it has in store for the future of its browser. And it’s not just a new look.

Chrome’s Omnibox and other parts of the browser are getting updates, too.

If you’ve followed along, then the new look doesn’t come as a surprise. As usual, Google started testing this update in its various pre-release channels. If you haven’t, though, you will still instantly recognize Chrome as Chrome.

The new Chrome user interface, which is going live on all the platforms the browser supports, follows Google’s Material Design 2 guidelines.

That means it’s looking a bit sleeker and more modern now, with more rounded corners and subtle animations. You’ll also see new icons and a new color palette.

On the feature side, Chrome now offers an updated password manager that can automatically generate (and save) strong passwords for you, as well as improved autofill for those pesky forms that ask for your shipping address and credit card info.
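
Chrome’s own generator isn’t spelled out in the announcement, but producing a strong random password is straightforward with a cryptographically secure random source. A minimal sketch in Python (not Chrome’s actual algorithm):

```python
import secrets
import string

def generate_password(length=16):
    """Build a random password from letters, digits and punctuation,
    drawing from a cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```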

What’s maybe more interesting, though, is an update to the Omnibox (where you type in your URLs and search queries).

The Omnibox can now search the tabs you currently have open, and in the near future it’ll return results from your Google Drive files, too.

Also new are the ability to change the background of your new tab page and the option to create and manage shortcuts on it.

Looking ahead, Google VP of product management Rahul Roy-Chowdhury notes that the team is looking at how to best bring more AI-driven features to Chrome.

That, of course, is exactly what Microsoft is also trying to do with its Edge browser and its integration with Cortana.

I’m not a regular Edge user, but I’ve generally been surprised by the usefulness of that integration, which automatically brings up related information about restaurants, for example. It’ll be interesting to see what Google’s version of this feature will look like.

Please like, share and tweet this article.

Pass it on: Popular Science

Google Tracks You Even If Location History’s Off. Here’s How To Stop It

Google knows a lot about you. A large part of the search giant’s business model is based around advertising – and for this to be successful it needs to know who you are.

But with the right know-how it’s possible to track down what Google knows about you and control what it uses for advertising purposes.

Google doesn’t make a huge song and dance about its in-depth knowledge of its users, but at the same time it doesn’t keep it a secret either. Here’s how to find out what Google knows and take control of your data.

Google saves all your searches

Probably the least surprising of the lot, but Google has all of your search history stored up.

How to delete it

If you’d rather not have a list of ridiculous search queries stored up, then head to Google’s history page, click Menu (the three vertical dots) and then hit Advanced -> All Time -> Delete.

If you want to stop Google tracking your searches for good, head to the activity controls page and toggle tracking off.

Google tracks and records your location

Google’s location history, or timeline page, serves up a Google Map and allows you to select specific dates and times and see where you were.

Its accuracy depends on whether you were signed into your Google account and carrying a phone or tablet at the time.

How to delete it

When you visit the timeline page you can hit the settings cog in the bottom right-hand corner of the screen and select delete all from there.

There’s also the option to pause location history by hitting the big button in the bottom left-hand corner of the screen.

But this one is a little trickier to completely get rid of, because to stop it happening in future you’ll need to opt out of both location tracking and location reporting with your device — whether you’re running Android or iOS.

Delete all your online accounts

If you’ve ever wanted to remove yourself (almost) entirely from the internet, Swedish website Deseat.me uses your Google account to help.

Using Google’s OAuth protocol, which allows third-party services to access your other accounts without ever seeing your password, Deseat.me brings up all your online and social media accounts and allows you to delete yourself from them.
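
In rough terms, an OAuth flow works by sending you to Google’s authorization page with the requesting service’s client ID and the scopes it wants; you approve (or refuse) there, and your password never leaves Google. A minimal sketch of building such an authorization URL, with a placeholder client ID and redirect URI:

```python
from urllib.parse import urlencode

# Placeholder values; a real application registers these with Google.
params = {
    "client_id": "YOUR_CLIENT_ID.apps.googleusercontent.com",
    "redirect_uri": "https://example.com/oauth/callback",
    "response_type": "code",
    "scope": "https://www.googleapis.com/auth/userinfo.email",
}

auth_url = "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)
print(auth_url)  # The user signs in at Google and approves or denies the request.
```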

How to delete it

Visit Deseat.me and input your Gmail address. It will bring up all the online accounts linked to that email address and allow you to get rid of them.

Please like, share and tweet this article.

Pass it on: Popular Science

Google Unveils Latest OS, Out NOW On Pixel Phones

Android 9 Pie: If you have the right phone, you can get the new Android right now.

Android fans can today download the latest version of Google’s hugely popular mobile OS.

Android Pie, the ninth iteration of the operating system, has been officially unveiled by the search engine giant today.

Android 9 introduces digital wellbeing features, better notifications and promises to extend battery life for devices. And it’s available to download today via an over-the-air update for Google Pixel devices.

In a blog post, Sameer Samat, the VP of Product Management for Android and Google Play, said: “The latest release of Android is here!

“And it comes with a heaping helping of artificial intelligence baked in to make your phone smarter, simpler and more tailored to you. Today we’re officially introducing Android 9 Pie.

“We’ve built Android 9 to learn from you—and work better for you—the more you use it.

“From predicting your next task so you can jump right into the action you want to take, to prioritizing battery power for the apps you use most, to helping you disconnect from your phone at the end of the day, Android 9 adapts to your life and the ways you like to use your phone.”

Google described Android Pie as an experience “powered by AI” and said it will adapt to how individuals use their phones and learn user preferences.

Personalised settings include the new Adaptive Battery and Adaptive Brightness modes.

The former, as the name suggests, adapts to how users use their phone, so apps which aren’t used don’t drain the battery.

The latter automatically adjusts the brightness level to how the user prefers it.
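
Google hasn’t detailed the model behind Adaptive Brightness, but the underlying idea, learning which brightness a user picks at a given ambient light level and suggesting it next time, can be sketched with a simple per-bucket average. The readings below are invented for illustration:

```python
from collections import defaultdict

# Invented observations: (ambient light in lux, brightness the user chose, 0.0-1.0).
observations = [(50, 0.20), (60, 0.25), (400, 0.55), (450, 0.60), (10_000, 0.95)]

def bucket(lux):
    """Coarse ambient-light bucket: dim, indoor or daylight."""
    if lux < 100:
        return "dim"
    if lux < 1_000:
        return "indoor"
    return "daylight"

preferences = defaultdict(list)
for lux, brightness in observations:
    preferences[bucket(lux)].append(brightness)

def suggest_brightness(lux):
    """Suggest the average brightness the user previously chose in similar light."""
    choices = preferences[bucket(lux)]
    return sum(choices) / len(choices) if choices else 0.5

print(suggest_brightness(500))  # ~0.58 in typical indoor light
```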

App Actions also predicts what users are going to do next based on their context and displays that action right on the phone.

Slices, a new feature which is launching later this year, shows relevant information from users’ favourite apps when they need it.

So, for instance, if a user starts typing the name of certain taxi apps, it will also show prices for a ride home on the search results screen.

Android Pie is also introducing a new system navigation featuring a single home button.

But one of the biggest additions will be the digital wellbeing features previously announced at Google I/O earlier this year.

Google said: “While much of the time we spend on our phones is useful, many of us wish we could disconnect more easily and free up time for other things.

“In fact, over 70 percent of people we talked to in our research said they want more help with this.

“So we’ve been working to add key capabilities right into Android to help people achieve the balance with technology they’re looking for.”

The digital wellbeing features are officially launching later this year, but are available right now for Pixel phones in beta.

Please like, share and tweet this article.

Pass it on: Popular Science

Look Out, Wiki-Geeks. Now Google Trains AI To Write Wikipedia Articles

A team within Google Brain – the web giant’s crack machine-learning research lab – has taught software to generate Wikipedia-style articles by summarizing information on web pages… to varying degrees of success.

As we all know, the internet is a never-ending pile of articles, social media posts, memes, joy, hate, and blogs. It’s impossible to read and keep up with everything.

Using AI to tell pictures of dogs and cats apart is cute and all, but if such computers could condense information down into useful snippets, that would be really handy. It’s not easy, though.

A paper, out last month and just accepted for this year’s International Conference on Learning Representations (ICLR) in April, describes just how difficult text summarization really is.

A few companies have had a crack at it. Salesforce trained a recurrent neural network with reinforcement learning to take information and retell it in a nutshell, and the results weren’t bad.

However, the computer-generated sentences were simple and short; they lacked the creative flair and rhythm of text written by humans. Google Brain’s latest effort is slightly better: the sentences are longer and seem more natural.

Here’s an example for the topic: Wings over Kansas, an aviation website for pilots and hobbyists.

The paragraph on the left is a computer-generated summary of the organization, and the one on the right is taken from the Wikipedia page on the subject.

The model works by taking the top ten web pages of a given subject – excluding the Wikipedia entry – or scraping information from the links in the references section of a Wikipedia article.

Most of the selected pages are used for training, and a few are kept back to develop and test the system.
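
The paper’s full pipeline is well beyond a blog snippet, but the data-gathering step described above, take the top results for a topic, drop the topic’s own Wikipedia page, and hold a few sources back for evaluation, can be sketched roughly like this (the URLs and split are placeholders, not the paper’s code):

```python
import random

def split_sources(urls, holdout_fraction=0.2, seed=0):
    """Drop Wikipedia's own entry, then split the remaining source pages
    into a training set and a held-out development/test set."""
    sources = [url for url in urls if "wikipedia.org" not in url]
    random.Random(seed).shuffle(sources)
    cut = max(1, int(len(sources) * holdout_fraction))
    return {"train": sources[cut:], "heldout": sources[:cut]}

# Placeholder search results for a hypothetical topic.
top_results = [
    "https://en.wikipedia.org/wiki/Wings_over_Kansas",
    "https://example.com/aviation-news",
    "https://example.org/pilot-resources",
    "https://example.net/kansas-airshows",
]
print(split_sources(top_results))
```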

Also, since it relies on the top ten websites returned for any particular topic, if those sites aren’t particularly credible, the resulting handiwork probably won’t be very accurate either.

You can’t trust everything you read online, of course.

Please like, share and tweet this article.

Pass it on: Popular Science

Google’s Self-Driving Cars Rack Up 3 Million Simulated Miles Every Day

Google uses its giant data centers to simulate 3 million miles of autonomous driving per day, the company has revealed in its monthly autonomous driving status report.

That’s a really long way — like driving more than 500 round trips between NYC and LA — but it actually makes a lot of sense. Americans drove some 2.7 trillion miles in the year 2000 alone and Google needs all the data it can get to teach its cars how to drive safely.

The real advantage comes when Google’s engineers want to tweak the algorithms that control its autonomous cars.

Before it rolls out any code changes to its production cars (22 Lexus SUVs and 33 of its prototype cars, split between fleets in Mountain View and Austin), it “re-drives” its entire driving history of more than 2 million miles to make sure everything works as expected.

Then, once it finally goes live, Google continues to test its code with 10,000 to 15,000 miles of autonomous driving each week.

The simulations also allow Google to create new scenarios based on real-world situations — adjusting the speeds of cars at a highway merge to check performance, for example.

Engineers can then design fixes and improvements and check them in the simulator, ensuring that things are operating as safely as possible before Google’s cars make it out onto real roads.
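
Google’s simulator is proprietary, but the workflow the report describes, replay logged scenarios against a new software build and flag any that now behave worse, is recognizably a regression test. A toy sketch of that loop, with invented scenarios and a stand-in safety metric:

```python
# Toy "re-drive" check: run each logged scenario through the current and the
# candidate driving policy and flag scenarios where the candidate does worse.
# The scenarios, policies and metric here are invented for illustration.

scenarios = [
    {"name": "highway_merge_slow", "merge_speed_mph": 45},
    {"name": "highway_merge_fast", "merge_speed_mph": 70},
]

def current_policy_gap_m(scenario):
    """Stand-in metric: minimum gap (metres) kept from other traffic."""
    return 8.0

def candidate_policy_gap_m(scenario):
    return 9.5 if scenario["merge_speed_mph"] < 60 else 7.0

regressions = [s["name"] for s in scenarios
               if candidate_policy_gap_m(s) < current_policy_gap_m(s)]
print("Regressed scenarios:", regressions)  # ['highway_merge_fast']
```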

Please like, share and tweet this article.

Pass it on: Popular Science

Google’s AI Sounds Like A Human On The Phone

It came as a total surprise: the most impressive demonstration at Google’s I/O conference yesterday was a phone call to book a haircut. Of course, this was a phone call with a difference.

It wasn’t made by a human, but by the Google Assistant, which did an uncannily good job of asking the right questions, pausing in the right places, and even throwing in the odd “mmhmm” for realism.

The crowd was shocked, but the most impressive thing was that the person on the receiving end of the call didn’t seem to suspect they were talking to an AI.

It’s a huge technological achievement for Google, but it also opens up a Pandora’s box of ethical and social challenges.

For example, does Google have an obligation to tell people they’re talking to a machine? Does technology that mimics humans erode our trust in what we see and hear?

And is this another example of tech privilege, where those in the know can offload boring conversations they don’t want to have to a machine, while those receiving the calls have to deal with some idiot robot?

In other words, it was a typical Google demo: equal parts wonder and worry.

Many experts working in this area agree, although how exactly you would tell someone they’re speaking to an AI is a tricky question.

If the Assistant starts its calls by saying “hello, I’m a robot” then the receiver is likely to hang up. More subtle indicators could mean limiting the realism of the AI’s voice or including a special tone during calls.

Google tells The Verge it hopes a set of social norms will organically evolve that make it clear when the caller is an AI.

Please like, share and tweet this article.

Pass it on: Popular Science

MIT Invented A Tool That Allows Driverless Cars To Navigate Rural Roads Without A Map

Google has spent the last 13 years mapping every corner and crevice of the world.

Car makers haven’t got nearly as long a lead time to perfect the maps that will keep driverless cars from sliding into ditches or hitting misplaced medians if they want to meet their optimistic deadlines.

This is especially true in rural areas where mapping efforts tend to come last due to smaller demand versus cities.

It’s also a more complicated task, due to a lack of infrastructure (i.e. curbs, barriers, and signage) that computers would normally use as reference points.

That’s why a student at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) is developing new technology, called MapLite, that eliminates the need for maps in self-driving car technology altogether.

This could more easily enable a fleet-sharing model that connects carless rural residents and would facilitate intercity trips that run through rural areas.

In a paper posted online on May 7 by CSAIL and project partner Toyota, 30-year-old PhD candidate Teddy Ort—along with co-authors Liam Paull and Daniela Rus—detail how using LIDAR and GPS together can enable self-driving cars to navigate on rural roads without having a detailed map to guide them.

The team was able to drive down a number of unpaved roads in rural Massachusetts and reliably scan the road for curves and obstacles up to 100 feet ahead, according to the paper.

“Our method makes no assumptions about road markings and only minimal assumptions about road geometry,” wrote the authors in their paper.
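
The paper’s planner is far more sophisticated, but the division of labour it describes, GPS supplying only a rough sense of where the destination lies while LIDAR finds the drivable road surface just ahead, can be caricatured in a short loop. Everything below is a made-up stand-in, not MapLite’s code:

```python
# Toy MapLite-style loop: steer toward the centre of the road surface that a
# (faked) LIDAR scan reports, while GPS only supplies the overall goal.

def fake_lidar_road_offset(step):
    """Pretend perception result: lateral offset (metres) of the road centre
    relative to the car, as if segmented from a LIDAR point cloud."""
    return [1.5, 0.8, 0.0, -0.6, -1.2][step]

def steering_command(road_center_offset_m, gain=0.4):
    """Steer proportionally toward the detected road centre."""
    return gain * road_center_offset_m

for step in range(5):
    offset = fake_lidar_road_offset(step)
    print(f"step {step}: road centre {offset:+.1f} m -> steer {steering_command(offset):+.2f} rad")
```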

Once the technology is perfected, proponents argue that autonomous cars could also help improve safety on rural roads by reducing the number of impaired and drowsy drivers, eliminating speeding, and detecting and reacting to obstacles even on pitch-black roads.

Ort’s algorithm isn’t ready for commercialization yet; he hasn’t tested it in a wide variety of road conditions and elevations.

Still, if only from an economic perspective, it’s clear that repeatedly capturing millions of miles of roads on camera to train cars how to drive autonomously isn’t going to be the winning mapping technology for AVs; it’s just not feasible for most organizations.

Whether it’s Ort’s work, or end-to-end machine learning, or some other technology that wins the navigation race for autonomous vehicles, it’s important to remember that maps are first and foremost a visual tool to aid sighted people in figuring out where to go.

Like humans, a car may not necessarily need to “see” to get to where it’s going—it just needs to sharpen its other senses.

Please like, share and tweet this article.

Pass it on: Popular Science