Google Finally Confirms That Dark Mode Is A Huge Help For Battery Life On Android

We’ve known for a long time that dark mode / night mode apps can prolong battery life on smartphones with OLED screens. It’s true on Android, and it’s true on the iPhone.

This is because on an OLED panel each pixel emits its own light: pixels showing dark areas of the screen do less work, and they use practically no juice at all when displaying true black.
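To make that concrete, here’s a toy sketch of the relationship; the per-pixel numbers are invented, not real panel specifications:

```python
# Toy model with made-up numbers, not real panel specs: on an OLED,
# per-pixel power is roughly proportional to combined subpixel intensity.
def pixel_power_mw(r, g, b, full_white_mw=0.25):
    """Estimate one pixel's draw; the 0.25 mW full-white cost is hypothetical."""
    return full_white_mw * (r + g + b) / (3 * 255)

white = pixel_power_mw(255, 255, 255)    # maximum draw
dark_gray = pixel_power_mw(30, 30, 30)   # dark UI: a fraction of the power
black = pixel_power_mw(0, 0, 0)          # true black: the pixel is simply off

print(f"white: {white:.3f} mW, dark gray: {dark_gray:.4f} mW, black: {black} mW")
```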

As SlashGear picked up on, Google reiterated this during its Android Dev Summit this week, showing slides that compare the power draw of different colors.

You can see that white far and away uses up the most power. This led Google to acknowledge that the prominence of white across its own apps and within Android’s style guidelines is, well, less than ideal.

It’s everywhere, and that’s not changing with Google’s revamped Material Design.




Fortunately, the company seems to recognize the value of dark mode. YouTube and Android Messages already have it, and Google is also bringing the feature to its Phone app and testing it in the mobile Google Feed.

Android can also be set to a dark theme for the quick settings pulldown and app drawer, but Google hasn’t yet gone as far as adding a system-wide night mode. (That’s something Samsung plans to do with its new One UI.)

As an example of dark mode coming to your battery’s rescue, just look at the huge difference below, where the Pixel is set to 100 percent brightness.

The power savings of dark mode are evident. I tend to stick with the traditional look of these apps, as I find white text on black a little harsh on my eyes, but it’s nice to see that Google is recognizing the value of easing up on all the white.

Make it a choice wherever it makes sense.

Please like, share and tweet this article.

Pass it on: Popular Science

According To The Doctors, Google Contact Lens To Monitor Diabetes Holds Promise

Google has come up with another wearable eye device, this time a lens made out of soft contact material that might help diabetes patients keep track of their glucose levels.

The company revealed a functional prototype Jan. 16 that doctors are saying has the potential to replace not only the current continuous glucose monitors implanted under the skin, but perhaps one day even the painful finger-pricking blood tests.

The so-called smart lens, a tiny wireless computer chip that contains a glucose sensor and an antenna thinner than a strand of hair, is embedded between two layers of soft contact lens material and worn on the surface of the eye.

The lens is powered by tapping into radio waves in the air and is designed to send data to a smartphone or other device.

“Glucose levels change frequently with normal activity like exercising or eating or even sweating. Sudden spikes or precipitous drops are dangerous and not uncommon, requiring round-the-clock monitoring,” say Brian Otis and Babak Parviz, co-founders of the project at Google [x].

The gold standard for measuring glucose is a quick blood test. But traces of glucose can also be found in bodily fluids under the skin and in the eyes.




But because changes in glucose levels can be so abrupt, there may be a lag time in detection in the eyes, according to endocrinologists.

The company said these are “early days” in its research. More would need to be known about the correlation between tear and blood glucose and what the lag time is in detection, as well as how the environment, such as heat and wind, can affect tears.

Dr. Gerald Bernstein, director of the Friedman Diabetes Institute at Beth Israel Medical Center in New York City, said the idea is “terrific, if it can be done.”

The key is whether the device measures just the tears on the outside of the eye or the aqueous humor, the thin, watery fluid that fills the space between the cornea and the iris.

Aqueous fluid is “a more predictable reflection of the blood sugar,” he said. “And don’t forget, this is bodily fluid and not exactly what is in the blood.”

The concept is not new, according to Bernstein. Several years ago, he consulted with an Albuquerque, N.M., company to measure glucose in the aqueous humor.

Those scientists used a low-level laser that could safely send light through the fluid in the front chamber of the eye to record the blood sugar.

It was used for patients undergoing surgery so doctors could continuously read blood sugar levels.

Please like, share and tweet this article.

Pass it on: Popular Science

Let Gmail Finish Your Sentences

Google’s new machine-learning tools for its mail service can save you time and typos — as long as you are comfortable sharing your thoughts with the software.

In theory, the Smart Compose tool can speed up your message composition and cut down on typographical errors.

While “machine learning” means the software (and not a human) is scanning your work-in-progress to power the predictive text feature, you are still sharing information with Google when you use its products.
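To illustrate what “predictive text” means at its simplest, here’s a toy bigram model that suggests a likely next word; Gmail’s actual system is a neural language model trained at far larger scale:

```python
from collections import Counter, defaultdict

# Toy bigram model to illustrate predictive text. Gmail's real system is a
# neural language model; this sketch only shares the basic idea.
corpus = ("thanks for the update . thanks for the invite . "
          "see you at the meeting").split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def suggest(prev_word):
    """Return the word most often seen after prev_word, or None."""
    follows = bigrams.get(prev_word)
    return follows.most_common(1)[0][0] if follows else None

print(suggest("thanks"))  # -> "for"
print(suggest("the"))     # -> "update" (ties break by first occurrence)
```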

If you have already updated to the new version of Gmail, you can try out Smart Compose by going to the General tab in Settings and checking the box next to “Enable experimental access.”

Next, click Save Changes at the bottom of the Settings screen.




When you return to the General tab of the Gmail settings, scroll down to the newly arrived Smart Compose section and confirm that “Writing suggestions on” is enabled.

If you do not care for Google’s assistance after sampling the feature, you can return to the settings and click “Writing suggestions off” to disable Smart Compose.

Once enabled, Smart Compose can finish your sentences for you as you type.

The new feature is available only for English composition at the moment, and a disclaimer from Google warns: “Smart Compose is not designed to provide answers and may not always predict factually correct information.”

Google also warns that experimental tools like Smart Compose are still under development and that the company may change or remove the features at any time.

Please like, share and tweet this article.

Pass it on: Popular Science

NASA’s TESS Shares First Science Image in Hunt to Find New Worlds

NASA’s newest planet hunter, the Transiting Exoplanet Survey Satellite (TESS), is now providing valuable data to help scientists discover and study exciting new exoplanets, or planets beyond our solar system.

Part of the data from TESS’ initial science orbit includes a detailed picture of the southern sky taken with all four of the spacecraft’s wide-field cameras.

This “first light” science image captures a wealth of stars and other objects, including systems previously known to have exoplanets.

“In a sea of stars brimming with new worlds, TESS is casting a wide net and will haul in a bounty of promising planets for further study,” said Paul Hertz, astrophysics division director at NASA Headquarters in Washington.

“This first light science image shows the capabilities of TESS’ cameras, and shows that the mission will realize its incredible potential in our search for another Earth.”

TESS acquired the image using all four cameras during a 30-minute period on Tuesday, Aug. 7. The black lines in the image are gaps between the camera detectors.

The images include parts of a dozen constellations, from Capricornus to Pictor, and both the Large and Small Magellanic Clouds, the galaxies nearest to our own.

The small bright dot above the Small Magellanic Cloud is a globular cluster — a spherical collection of hundreds of thousands of stars — called NGC 104, also known as 47 Tucanae because of its location in the southern constellation Tucana, the Toucan.




Two stars, Beta Gruis and R Doradus, are so bright they saturate an entire column of pixels on the detectors of TESS’s second and fourth cameras, creating long spikes of light.

“This swath of the sky’s southern hemisphere includes more than a dozen stars we know have transiting planets based on previous studies from ground observatories,” said George Ricker, TESS principal investigator at the Massachusetts Institute of Technology’s (MIT) Kavli Institute for Astrophysics and Space Research in Cambridge.

TESS’s cameras, designed and built by MIT’s Lincoln Laboratory in Lexington, Massachusetts, and the MIT Kavli Institute, monitor large swaths of the sky to look for transits.

Transits occur when a planet passes in front of its star as viewed from the satellite’s perspective, causing a regular dip in the star’s brightness.
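Here’s a minimal sketch of that detection idea in Python, using synthetic data rather than anything from the real TESS pipeline:

```python
import numpy as np

# Synthetic sketch of transit detection, not TESS pipeline code: inject
# periodic dips into a noisy flat light curve, then flag faint samples.
rng = np.random.default_rng(42)
time = np.arange(0.0, 27.0, 0.02)            # one 27-day sector, toy cadence
flux = 1.0 + rng.normal(0, 5e-4, time.size)  # flat star plus photometric noise

period, duration, depth = 3.5, 0.1, 0.002    # invented planet parameters
flux[(time % period) < duration] -= depth    # carve out the transits

# Candidate transit samples sit well below the out-of-transit scatter.
threshold = np.median(flux) - 3 * np.std(flux)
candidates = time[flux < threshold]
print(f"{candidates.size} samples flagged, first near t = {candidates[:3]} days")
```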

TESS will spend two years monitoring 26 such sectors for 27 days each, covering 85 percent of the sky. During its first year of operations, the satellite will study the 13 sectors making up the southern sky.

Then TESS will turn to the 13 sectors of the northern sky to carry out a second year-long survey.

MIT coordinates with Northrop Grumman in Falls Church, Virginia, to schedule science observations. TESS transmits images every 13.7 days, each time it swings closest to Earth.

NASA’s Deep Space Network receives and forwards the data to the TESS Payload Operations Center at MIT for initial evaluation and analysis.

Full data processing and analysis takes place within the Science Processing and Operations Center pipeline at NASA’s Ames Research Center in Silicon Valley, California, which provides calibrated images and refined light curves that scientists can analyze to find promising exoplanet transit candidates.

TESS builds on the legacy of NASA’s Kepler spacecraft, which also uses transits to find exoplanets. TESS’s target stars are 30 to 300 light-years away and about 30 to 100 times brighter than Kepler’s targets, which are 300 to 3,000 light-years away.

The brightness of TESS’ targets makes them ideal candidates for follow-up study with spectroscopy, the study of how matter and light interact.

Please like, share and tweet this article.

Pass it on: Popular Science

How Google Will Use High-Flying Balloons To Deliver Internet To The Hinterlands

Project Loon sails through the stratosphere, where winds blow in layers, each with its own speed and direction.

Using wind data from the National Oceanic and Atmospheric Administration (NOAA), the balloons are maneuvered by identifying the wind layer with the desired speed and direction and then adjusting altitude to float in that layer.
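A minimal sketch of that steering logic might look like this, with invented wind layers standing in for real NOAA forecast data:

```python
# Hedged sketch of the steering idea: pick the forecast layer whose wind
# blows closest to the bearing we want, then climb or descend to float in it.
# The layer data below is invented, not real NOAA forecasts.
layers = [
    {"altitude_km": 18, "speed_kmh": 20, "bearing_deg": 90},   # eastward
    {"altitude_km": 19, "speed_kmh": 35, "bearing_deg": 180},  # southward
    {"altitude_km": 20, "speed_kmh": 15, "bearing_deg": 45},   # northeastward
]

def angle_diff(a, b):
    """Smallest absolute difference between two compass bearings, in degrees."""
    return abs((a - b + 180) % 360 - 180)

def best_layer(desired_bearing_deg):
    return min(layers, key=lambda layer: angle_diff(layer["bearing_deg"],
                                                    desired_bearing_deg))

target = best_layer(100)  # we want to drift roughly east-southeast
print(f"move to {target['altitude_km']} km: wind {target['speed_kmh']} km/h "
      f"at {target['bearing_deg']} degrees")
```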




The Project Loon team prepared for launch in the pre-dawn frost near Lake Tekapo, New Zealand.

Solar panels and insulated electronics packages, prepared for launch. It takes 4 hours for the solar panels to charge the battery during the day, and that power is sufficient to keep all the flight systems working 24 hours a day.

A fully inflated balloon envelope at Moffett Field, California. The balloons are 15 meters in diameter when fully inflated, but they don’t inflate fully until they’ve reached float altitude in the stratosphere.

Project Loon team members Paul Acosta and Peter Capraro placed red balloons near the launch site at sunrise. The balloons were used as a rough indicator of wind direction and speed just above ground level.

Please like, share and tweet this article.

Pass it on: Popular Science

Chrome Gets A New Look And Feel For Its 10th Birthday

It’s been ten years since Google first launched Chrome. At the time, Google’s browser was a revelation.

Firefox had gotten slow, Internet Explorer was Internet Explorer, and none of the smaller challengers, maybe with the exception of Opera, ever got any significant traction.

But here was Google, with a fast browser that was built for the modern web.

Now, ten years later, Google is the incumbent and Chrome is getting challenged both from a technical perspective, thanks to a resurgent Firefox, and by a wave of anti-Google sentiment.

But Google isn’t letting that get in the way of celebrating Chrome’s anniversary.

To mark the day, the company today officially launched its new look for Chrome and previewed what it has in store for the future of its browser. And it’s not just a new look.

Chrome’s Omnibox and other parts of the browser are getting updates, too.




If you’ve followed along, then the new look doesn’t come as a surprise. As usual, Google started testing this update in its various pre-release channels. If you haven’t, though, you will still instantly recognize Chrome as Chrome.

The new Chrome user interface, which is going live on all the platforms the browser supports, follows Google’s Material Design 2 guidelines.

That means it’s looking a bit sleeker and more modern now, with more rounded corners and subtle animations. You’ll also see new icons and a new color palette.

On the feature side, Chrome now offers an updated password manager that can automatically generate (and save) strong passwords for you, as well as improved autofill for those pesky forms that ask for your shipping address and credit card info.
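To illustrate the password-generation concept (this is not Chrome’s implementation), a strong random password takes only a few lines with a cryptographically secure source:

```python
import secrets
import string

# Illustration of the concept only, not Chrome's code: build a strong
# password from a cryptographically secure random source.
def generate_password(length=16):
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different on every run
```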

What’s maybe more interesting, though, is an update to the Omnibox (where you type in your URLs and search queries).

The Omnibox can now search the tabs you currently have open, and in the near future it’ll return results from your Google Drive files, too.

Also new is the ability to change the background of your new tab page and to create and manage shortcuts on it.

Looking ahead, Google VP of product management Rahul Roy-Chowdhury notes that the team is looking at how to best bring more AI-driven features to Chrome.

That, of course, is exactly what Microsoft is also trying to do with its Edge browser and its integration with Cortana.

I’m not a regular Edge user, but I’ve generally been surprised about the usefulness of that integration, which automatically brings up related information about restaurants, for example. It’ll be interesting to see what Google’s version of this feature will look like.

Please like, share and tweet this article.

Pass it on: Popular Science

Google Tracks You Even If Location History’s Off. Here’s How To Stop It

Google knows a lot about you. A large part of the search giant’s business model is based around advertising – and for this to be successful it needs to know who you are.

But with the right know-how it’s possible to track down what Google knows about you and control what it uses for advertising purposes.

Google doesn’t make a huge song and dance about its in-depth knowledge of its users, but at the same time it doesn’t keep it a secret either. Here’s how to find out what Google knows and take control of your data.

Google saves all your searches

Probably the least surprising of the lot, but Google has all of your search history stored up.

How to delete it

If you’d rather not have a list of ridiculous search queries on record, head to Google’s history page, click Menu (the three vertical dots) and then hit Advanced -> All Time -> Delete.

If you want to stop Google tracking your searches for good, head to the activity controls page and toggle tracking off.




Google tracks and records your location

Google’s location history, or timeline page, serves up a Google Map and allows you to select specific dates and times and see where you were.

Its accuracy depends on whether you were signed into your Google account and carrying a phone or tablet at the time.

How to delete it

When you visit the timeline page you can hit the settings cog in the bottom right-hand corner of the screen and select delete all from there.

There’s also the option to pause location history by hitting the big button in the bottom left-hand corner of the screen.

But this one is a little trickier to completely get rid of, because to stop it happening in future you’ll need to opt out of both location tracking and location reporting with your device — whether you’re running Android or iOS.

Delete all your online accounts

If you’ve ever wanted to remove yourself (almost) entirely from the internet, Swedish website Deseat.me uses your Google account to help.

Using Google’s OAuth protocol, which allows third-party services to access your accounts without ever seeing your password, Deseat.me brings up all your online and social media accounts and allows you to delete yourself from them.
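For the curious, here’s the rough shape of an OAuth 2.0 authorization-code flow in Python. The endpoints are Google’s public OAuth URLs, but the client ID, redirect URI, and scope are placeholders, and this is not Deseat.me’s code:

```python
from requests_oauthlib import OAuth2Session  # pip install requests-oauthlib

# Rough shape of an OAuth 2.0 authorization-code flow. Client ID and
# redirect URI below are hypothetical placeholders.
CLIENT_ID = "your-client-id"
REDIRECT_URI = "https://example.com/callback"
AUTH_URL = "https://accounts.google.com/o/oauth2/v2/auth"
TOKEN_URL = "https://oauth2.googleapis.com/token"

session = OAuth2Session(
    CLIENT_ID,
    redirect_uri=REDIRECT_URI,
    scope=["https://www.googleapis.com/auth/userinfo.email"],
)

# Step 1: the user approves access on Google's own page; the third party
# never sees the password.
authorization_url, state = session.authorization_url(AUTH_URL)
print("Visit to authorize:", authorization_url)

# Step 2: Google redirects back with a one-time code that the app exchanges
# for a scoped access token (commented out here, since it needs a registered
# client secret).
# token = session.fetch_token(TOKEN_URL, client_secret="...",
#                             authorization_response=input("Redirect URL: "))
```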

How to delete it

Visit Deseat.me and input your Gmail address. It will bring up all the online accounts linked to that email address and allow you to get rid of them.

Please like, share and tweet this article.

Pass it on: Popular Science

Google Unveils Latest OS, Out NOW On Pixel Phones

Android 9 Pie: If you have the right phone, you can get the new Android right now.

Android fans can today download the latest version of Google’s hugely popular mobile OS.

Android Pie, the ninth iteration of the operating system, has been officially unveiled by the search engine giant today.

Android 9 introduces digital wellbeing features, better notifications and promises to extend battery life for devices. And it’s available to download today via an over-the-air update for Google Pixel devices.

In a blog post, Sameer Samat, the VP of Product Management for Android and Google Play, said: “The latest release of Android is here!

“And it comes with a heaping helping of artificial intelligence baked in to make your phone smarter, simpler and more tailored to you. Today we’re officially introducing Android 9 Pie.




“We’ve built Android 9 to learn from you—and work better for you—the more you use it.

“From predicting your next task so you can jump right into the action you want to take, to prioritizing battery power for the apps you use most, to helping you disconnect from your phone at the end of the day, Android 9 adapts to your life and the ways you like to use your phone.”

Google described Android Pie as an experience “powered by AI” and said it will adapt to how individuals use their phones and learn user preferences.

Personalised settings include the new Adaptive Battery and Adaptive Brightness modes.

The former, as the name suggests, adapts to how users use their phones, so apps that aren’t used don’t drain the battery. The latter automatically adjusts the screen’s brightness to the level the user prefers.
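As a toy model of the adaptive-brightness idea (Android’s real feature uses an on-device machine-learning model), you could fit a user’s past manual brightness choices against ambient light and predict a preferred level for new readings:

```python
import numpy as np

# Toy model only; Android's actual Adaptive Brightness is an on-device ML
# model. We fit invented past brightness choices against ambient light.
ambient_lux = np.array([5, 50, 200, 800, 2000])   # hypothetical readings
chosen_pct = np.array([10, 25, 45, 70, 95])       # user's manual settings

# Perceived brightness scales roughly logarithmically with lux, so fit in log space.
slope, intercept = np.polyfit(np.log10(ambient_lux), chosen_pct, deg=1)

def predict_brightness(lux):
    return float(np.clip(slope * np.log10(lux) + intercept, 0, 100))

print(f"suggested brightness at 400 lux: {predict_brightness(400):.0f}%")
```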

App Actions also predicts what users are going to do next based on context, and displays that action right on the phone.

Slices, a new feature which is launching later this year, shows relevant information from users’ favourite apps when they need it.

So, for instance, if a user starts typing the name of certain taxi apps it will also show prices for a ride home in the search results screen.

Android Pie is also introducing a new system navigation featuring a single home button.

But one of the biggest additions will be the digital wellbeing features previously announced at Google I/O earlier this year.

Google said: “While much of the time we spend on our phones is useful, many of us wish we could disconnect more easily and free up time for other things.

“In fact, over 70 percent of people we talked to in our research said they want more help with this.

“So we’ve been working to add key capabilities right into Android to help people achieve the balance with technology they’re looking for.”

The digital wellbeing features are officially launching later this year, but are available right now for Pixel phones in beta.

Please like, share and tweet this article.

Pass it on: Popular Science

Look Out, Wiki-Geeks. Now Google Trains AI To Write Wikipedia Articles

A team within Google Brain – the web giant’s crack machine-learning research lab – has taught software to generate Wikipedia-style articles by summarizing information on web pages… to varying degrees of success.

As we all know, the internet is a never-ending pile of articles, social media posts, memes, joy, hate, and blogs. It’s impossible to read and keep up with everything.

Using AI to tell pictures of dogs and cats apart is cute and all, but if computers could condense information down into useful snippets, that would really be handy. It’s not easy, though.

A paper, out last month and just accepted for this year’s International Conference on Learning Representations (ICLR) in April, describes just how difficult text summarization really is.




A few companies have had a crack at it. Salesforce trained a recurrent neural network with reinforcement learning to take information and retell it in a nutshell, and the results weren’t bad.

However, the computer-generated sentences were simple and short; they lacked the creative flair and rhythm of text written by humans. Google Brain’s latest effort is slightly better: the sentences are longer and seem more natural.

Here’s an example for the topic: Wings over Kansas, an aviation website for pilots and hobbyists.

The paragraph on the left is a computer-generated summary of the organization, and the one on the right is taken from the Wikipedia page on the subject.

The model works by taking the top ten web pages of a given subject – excluding the Wikipedia entry – or scraping information from the links in the references section of a Wikipedia article.

Most of the selected pages are used for training, and a few are kept back to develop and test the system.
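A minimal sketch of that kind of split, with placeholder URLs and illustrative 80/10/10 proportions that may not match the paper’s exact setup:

```python
import random

# Sketch of the split described above: most collected pages train the model,
# a few are held back for development and testing. Proportions and URLs are
# illustrative only.
def split_pages(pages, dev_frac=0.1, test_frac=0.1, seed=0):
    pages = pages[:]                      # don't mutate the caller's list
    random.Random(seed).shuffle(pages)
    n_dev = max(1, int(len(pages) * dev_frac))
    n_test = max(1, int(len(pages) * test_frac))
    return {
        "test": pages[:n_test],
        "dev": pages[n_test:n_test + n_dev],
        "train": pages[n_test + n_dev:],
    }

urls = [f"https://example.com/source/{i}" for i in range(10)]  # placeholders
print({name: len(split) for name, split in split_pages(urls).items()})
# -> {'test': 1, 'dev': 1, 'train': 8}
```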

Also, since it relies on the top ten web results for any particular topic, if those sites aren’t particularly credible, the resulting handiwork probably won’t be very accurate either.

You can’t trust everything you read online, of course.

Please like, share and tweet this article.

Pass it on: Popular Science

Google’s Self-Driving Cars Rack Up 3 Million Simulated Miles Every Day

Google uses its giant data centers to simulate 3 million miles of autonomous driving per day, the company has revealed in its monthly autonomous driving status report.

That’s a really long way — like driving more than 500 round trips between NYC and LA — but it actually makes a lot of sense. Americans drove some 2.7 trillion miles in the year 2000 alone and Google needs all the data it can get to teach its cars how to drive safely.
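A quick sanity check on that round-trip comparison, assuming roughly 2,800 miles one way between New York and Los Angeles:

```python
# Back-of-the-envelope check, assuming ~2,800 miles one way NYC to LA.
simulated_miles_per_day = 3_000_000
round_trip_miles = 2 * 2_800
print(simulated_miles_per_day / round_trip_miles)  # ~535.7, i.e. "more than 500"
```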

The real advantage comes when Google’s engineers want to tweak the algorithms that control its autonomous cars.




Before it rolls out any code changes to its production cars (22 Lexus SUVs and 33 of its prototype cars, split between fleets in Mountain View and Austin), it “re-drives” its entire driving history of more than 2 million miles to make sure everything works as expected.

Then, once it finally goes live, Google continues to test its code with 10,000 to 15,000 miles of autonomous driving each week.

The simulations also allow Google to create new scenarios based on real-world situations — adjusting the speeds of cars at a highway merge to check performance, for example.
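To make that concrete, here’s a hedged sketch of such a parameter sweep; the merge model is a crude stand-in, not Google’s simulator:

```python
# Hedged sketch of a scenario sweep: replay a highway merge while varying the
# other car's speed, and flag settings that squeeze the time margin too thin.
# The physics is a crude stand-in for a real simulator.
def merge_margin_seconds(our_speed_mph, other_speed_mph, gap_feet=120):
    """Seconds before the other car closes a fixed gap behind us."""
    closing_mph = max(other_speed_mph - our_speed_mph, 0.01)
    closing_fps = closing_mph * 5280 / 3600   # miles/hour -> feet/second
    return gap_feet / closing_fps

for other_speed in range(60, 86, 5):
    margin = merge_margin_seconds(our_speed_mph=60, other_speed_mph=other_speed)
    verdict = "OK" if margin > 4.0 else "NEEDS A FIX"
    print(f"other car at {other_speed} mph: {margin:8.1f} s margin -> {verdict}")
```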

Engineers can then design fixes and improvements and check them in the simulator, ensuring that things are operating as safely as possible before Google’s cars make it out onto real roads.

Please like, share and tweet this article.

Pass it on: Popular Science