Tag: Google

Google Tracks You Even If Location History’s Off. Here’s How To Stop It

Google knows a lot about you. A large part of the search giant’s business model is based around advertising – and for this to be successful it needs to know who you are.

But with the right know-how it’s possible to track down what Google knows about you and control what it uses for advertising purposes.

Google doesn’t make a huge song and dance about its in-depth knowledge of its users, but at the same time it doesn’t keep it a secret either. Here’s how to find out what Google knows and take control of your data.

Google saves all your searches

Probably the least surprising of the lot, but Google has all of your search history stored up.

How to delete it

If you’d rather not have a list of ridiculous search queries stored up, then head to Google’s history page, click Menu (the three vertical dots) and then hit Advanced -> All Time -> Delete.

If you want to stop Google tracking your searches for good, head to the activity controls page and toggle tracking off.




Google tracks and records your location

Google’s location history, or timeline page, serves up a Google Map and allows you to select specific dates and times and see where you were.

Its accuracy depends on whether you were signed into your Google account and carrying a phone or tablet at the time.

How to delete it

When you visit the timeline page you can hit the settings cog in the bottom right-hand corner of the screen and select delete all from there.

There’s also the option to pause location history by hitting the big button in the bottom left-hand corner of the screen.

But this one is a little trickier to completely get rid of, because to stop it happening in future you’ll need to opt out of both location tracking and location reporting with your device — whether you’re running Android or iOS.

Delete all your online accounts

If you’ve ever wanted to remove yourself (almost) entirely from the internet, Swedish website Deseat.me uses your Google account to help.

Deseat.me uses OAuth, the open authorization standard Google supports, which lets third-party services access your accounts without ever seeing your password. It then brings up all your online and social media accounts and allows you to delete yourself from them.
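
For the curious, here’s a rough sketch of what that handshake looks like from a developer’s point of view. It assumes the third-party requests-oauthlib Python library; the client ID, secret and redirect address are placeholders, and this is a generic illustration of the OAuth 2.0 authorization-code flow rather than Deseat.me’s actual integration.

```python
# Minimal sketch of the OAuth 2.0 authorization-code flow a third-party service
# uses to get limited access to a Google account without ever seeing the user's
# password. Client ID/secret and redirect URI are placeholders for illustration.
from requests_oauthlib import OAuth2Session

CLIENT_ID = "your-app-client-id"          # issued by Google when the app is registered
CLIENT_SECRET = "your-app-client-secret"  # kept on the third party's server
REDIRECT_URI = "https://example.com/oauth/callback"

# Google's published OAuth 2.0 endpoints.
AUTH_URL = "https://accounts.google.com/o/oauth2/v2/auth"
TOKEN_URL = "https://oauth2.googleapis.com/token"

# Ask only for the scopes the service needs (here: the user's email address).
oauth = OAuth2Session(
    CLIENT_ID,
    redirect_uri=REDIRECT_URI,
    scope=["openid", "https://www.googleapis.com/auth/userinfo.email"],
)

# 1. Send the user to Google to sign in and approve the request.
authorization_url, state = oauth.authorization_url(AUTH_URL)
print("Visit this URL and approve access:", authorization_url)

# 2. Google redirects back with a one-time code; exchange it for a token.
redirect_response = input("Paste the full redirect URL here: ")
token = oauth.fetch_token(
    TOKEN_URL,
    client_secret=CLIENT_SECRET,
    authorization_response=redirect_response,
)

# 3. The token grants only the approved scopes -- the password never changes hands.
print("Access token scopes:", token.get("scope"))
```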

How to delete it

Visit Deseat.me and input your Gmail address. It will bring up all the online accounts linked to that email address and allow you to get rid of them.

Please like, share and tweet this article.

Pass it on: Popular Science

Google Unveils Latest OS, Out NOW On Pixel Phones

Android 9 Pie: If you have the right phone, you can get the new Android right now.

Android fans can today download the latest version of Google’s hugely popular mobile OS.

Android Pie, the ninth iteration of the operating system, has been officially unveiled by the search engine giant today.

Android 9 introduces digital wellbeing features, better notifications and promises to extend battery life for devices. And it’s available to download today via an over-the-air update for Google Pixel devices.

In a blog post, Sameer Samat, the VP of Product Management for Android and Google Play, said: “The latest release of Android is here!

“And it comes with a heaping helping of artificial intelligence baked in to make your phone smarter, simpler and more tailored to you. Today we’re officially introducing Android 9 Pie.




“We’ve built Android 9 to learn from you—and work better for you—the more you use it.

“From predicting your next task so you can jump right into the action you want to take, to prioritizing battery power for the apps you use most, to helping you disconnect from your phone at the end of the day, Android 9 adapts to your life and the ways you like to use your phone.”

Google described Android Pie as an experience “powered by AI” and said it will adapt to how individuals use their phones and learn user preferences.

Personalised settings include the new Adaptive Battery and Adaptive Brightness modes.

The former, as the name suggests, adapts to how users actually use their phones, so apps they rarely open don’t drain the battery.

The latter automatically adjusts the screen’s brightness to the level the user prefers.

App Actions also predicts what users are going to do next based on the context, and displays that action right on the phone.

Slices, a new feature which is launching later this year, shows relevant information from users’ favourite apps when they need it.

So, for instance, if a user starts typing the name of certain taxi apps it will also show prices for a ride home in the search results screen.

Android Pie is also introducing a new system navigation featuring a single home button.

But one of the biggest additions will be the digital wellbeing features previously announced at Google I/O earlier this year.

Google said: “While much of the time we spend on our phones is useful, many of us wish we could disconnect more easily and free up time for other things.

“In fact, over 70 percent of people we talked to in our research said they want more help with this.

“So we’ve been working to add key capabilities right into Android to help people achieve the balance with technology they’re looking for.”

The digital wellbeing features are officially launching later this year, but are available right now for Pixel phones in beta.

Please like, share and tweet this article.

Pass it on: Popular Science

Look Out, Wiki-Geeks. Now Google Trains AI To Write Wikipedia Articles

A team within Google Brain – the web giant’s crack machine-learning research lab – has taught software to generate Wikipedia-style articles by summarizing information on web pages… with varying degrees of success.

As we all know, the internet is a never ending pile of articles, social media posts, memes, joy, hate, and blogs. It’s impossible to read and keep up with everything.

Using AI to tell pictures of dogs and cats apart is cute and all, but if such computers could condense information down into useful snippets, that would really be handy. It’s not easy, though.

A paper, out last month and just accepted for this year’s International Conference on Learning Representations (ICLR) in April, describes just how difficult text summarization really is.




A few companies have had a crack at it. Salesforce trained a recurrent neural network with reinforcement learning to take information and retell it in a nutshell, and the results weren’t bad.

However, the computer-generated sentences were simple and short; they lacked the creative flair and rhythm of text written by humans. Google Brain’s latest effort is slightly better: the sentences are longer and seem more natural.

Here’s an example for the topic: Wings over Kansas, an aviation website for pilots and hobbyists.

In the paper’s side-by-side comparison, one paragraph is a computer-generated summary of the organization, and the other is taken from the Wikipedia page on the subject.

The model works by taking the top ten web pages of a given subject – excluding the Wikipedia entry – or scraping information from the links in the references section of a Wikipedia article.

Most of the selected pages are used for training, and a few are kept back to develop and test the system.
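
As a rough illustration of what an extractive first stage can look like, here’s a frequency-based sentence scorer in Python. It’s a toy stand-in, not Google Brain’s model, and the sample text is invented for the example.

```python
# A toy extractive summarizer: score sentences by word frequency across the source
# pages and keep the top few. A rough stand-in for an extractive stage only --
# this is not Google Brain's model, and the sample pages are made up.
import re
from collections import Counter

def summarize(pages, num_sentences=3):
    sentences = []
    for page in pages:
        sentences.extend(s.strip() for s in re.split(r"(?<=[.!?])\s+", page) if s.strip())

    words = re.findall(r"[a-z']+", " ".join(pages).lower())
    freq = Counter(words)

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    ranked = sorted(sentences, key=score, reverse=True)
    return ranked[:num_sentences]

pages = [
    "Wings over Kansas is an aviation website for pilots and hobbyists. "
    "The site publishes aviation news and features.",
    "The aviation site offers articles, photos and interviews for pilots.",
]
print(summarize(pages, num_sentences=2))
```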

Also, since it relies on the ten most popular web pages for any particular topic, if those sites aren’t especially credible, the resulting summary probably won’t be very accurate either.

You can’t trust everything you read online, of course.

Please like, share and tweet this article.

Pass it on: Popular Science

Google’s Self-Driving Cars Rack Up 3 Million Simulated Miles Every Day

Google uses its giant data centers to simulate 3 million miles of autonomous driving per day, the company has revealed in its monthly autonomous driving status report.

That’s a really long way — like driving more than 500 round trips between NYC and LA — but it actually makes a lot of sense. Americans drove some 2.7 trillion miles in the year 2000 alone and Google needs all the data it can get to teach its cars how to drive safely.

The real advantage comes when Google’s engineers want to tweak the algorithms that control its autonomous cars.




Before it rolls out any code changes to its production cars (22 Lexus SUVs and 33 of its prototype cars, split between fleets in Mountain View and Austin), it “re-drives” its entire driving history of more than 2 million miles to make sure everything works as expected.

Then, once it finally goes live, Google continues to test its code with 10,000 to 15,000 miles of autonomous driving each week.

The simulations also allow Google to create new scenarios based on real-world situations — adjusting the speeds of cars at a highway merge to check performance, for example.
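
To make that concrete, here’s a deliberately simplified sketch of a scenario sweep: it replays an imaginary highway merge at several speeds and flags any run where the gap between the two cars gets too small. The speeds, distances and safety threshold are made-up numbers and there’s no braking model; it only illustrates the shape of the exercise, not Google’s simulator.

```python
# Toy scenario sweep: replay a simplified highway merge at several speeds for the
# merging car and flag any run where the gap to the autonomous car gets too small.
# All numbers are made up, and there is no braking model.

SAFE_GAP_M = 10.0      # assumed minimum acceptable gap, in metres
AV_SPEED_MPS = 25.0    # autonomous car's cruising speed, metres per second

def min_gap(av_speed, merge_speed, start_gap=30.0, dt=0.1, horizon_s=20.0):
    """Step both cars forward in a straight line and return the smallest gap seen."""
    av_pos, merge_pos = 0.0, start_gap
    smallest = start_gap
    for _ in range(int(horizon_s / dt)):
        av_pos += av_speed * dt
        merge_pos += merge_speed * dt
        smallest = min(smallest, abs(merge_pos - av_pos))
    return smallest

for merge_speed in (15.0, 20.0, 25.0, 30.0):
    gap = min_gap(AV_SPEED_MPS, merge_speed)
    verdict = "OK" if gap >= SAFE_GAP_M else "TOO CLOSE"
    print(f"merging car at {merge_speed:4.1f} m/s -> min gap {gap:5.1f} m  [{verdict}]")
```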

Engineers can then design fixes and improvements and check them in the simulator, ensuring that everything is operating as safely as possible before Google’s cars make it out onto real roads.

Please like, share and tweet this article.

Pass it on: Popular Science

Google’s AI Sounds Like A Human On The Phone

It came as a total surprise: the most impressive demonstration at Google’s I/O conference yesterday was a phone call to book a haircut. Of course, this was a phone call with a difference.

It wasn’t made by a human, but by the Google Assistant, which did an uncannily good job of asking the right questions, pausing in the right places, and even throwing in the odd “mmhmm” for realism.

The crowd was shocked, but the most impressive thing was that the person on the receiving end of the call didn’t seem to suspect they were talking to an AI.

It’s a huge technological achievement for Google, but it also opens up a Pandora’s box of ethical and social challenges.




For example, does Google have an obligation to tell people they’re talking to a machine? Does technology that mimics humans erode our trust in what we see and hear?

And is this another example of tech privilege, where those in the know can offload boring conversations they don’t want to have to a machine, while those receiving the calls have to deal with some idiot robot?

In other words, it was a typical Google demo: equal parts wonder and worry.

Many experts working in this area agree that some form of disclosure is needed, although how exactly you would tell someone they’re speaking to an AI is a tricky question.

If the Assistant starts its calls by saying “hello, I’m a robot” then the receiver is likely to hang up. More subtle indicators could mean limiting the realism of the AI’s voice or including a special tone during calls.

Google tells The Verge it hopes a set of social norms will organically evolve that make it clear when the caller is an AI.

Please like, share and tweet this article.

Pass it on: Popular Science

MIT Invented A Tool That Allows Driverless Cars To Navigate Rural Roads Without A Map

Google has spent the last 13 years mapping every corner and crevice of the world.

Car makers haven’t got nearly as long a lead time to perfect the maps that will keep driverless cars from sliding into ditches or hitting misplaced medians if they want to meet their optimistic deadlines.

This is especially true in rural areas, where mapping efforts tend to come last because demand is smaller than in cities.

It’s also a more complicated task, due to a lack of infrastructure (i.e. curbs, barriers, and signage) that computers would normally use as reference points.

That’s why a student at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) is developing new technology, called MapLite, that eliminates the need for maps in self-driving car technology altogether.




This could more easily enable a fleet-sharing model that connects carless rural residents and would facilitate intercity trips that run through rural areas.

In a paper posted online on May 7 by CSAIL and project partner Toyota, 30-year-old PhD candidate Teddy Ort—along with co-authors Liam Paull and Daniela Rus—details how using LIDAR and GPS together can enable self-driving cars to navigate on rural roads without having a detailed map to guide them.

The team was able to drive down a number of unpaved roads in rural Massachusetts and reliably scan the road for curves and obstacles up to 100 feet ahead, according to the paper.

“Our method makes no assumptions about road markings and only minimal assumptions about road geometry,” wrote the authors in their paper.
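
The underlying idea is easy to sketch, even if the real system is far more sophisticated. The toy Python below is not the MapLite algorithm and uses made-up numbers; it simply treats a GPS bearing as a coarse goal and lets a planar LIDAR sweep pick the nearest clear heading toward it.

```python
# Toy illustration of the map-light idea: use GPS only for a coarse goal bearing,
# then let a planar LIDAR sweep pick the clear heading closest to it. A sketch of
# the general concept with made-up numbers -- not the MapLite algorithm.

def pick_heading(scan, goal_bearing_deg, min_clear_m=30.0):
    """scan: list of (bearing_deg, range_m) pairs from one LIDAR sweep.
    Returns the unobstructed bearing closest to the GPS goal bearing, or None."""
    clear = [bearing for bearing, rng in scan if rng >= min_clear_m]
    if not clear:
        return None  # nothing drivable in view: stop and wait
    return min(clear, key=lambda bearing: abs(bearing - goal_bearing_deg))

# Fake sweep: open road in most directions, an obstacle blocking 10-20 degrees.
scan = [(b, 12.0 if 10 <= b <= 20 else 80.0) for b in range(-45, 46, 5)]
print(pick_heading(scan, goal_bearing_deg=18.0))   # prints 25: steers around the obstacle
```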

Once the technology is perfected, proponents argue that autonomous cars could also help improve safety on rural roads by reducing the number of impaired and drowsy drivers, eliminating speeding, and detecting and reacting to obstacles even on pitch-black roads.

Ort’s algorithm isn’t ready for commercial use yet; it hasn’t been tested in a wide variety of road conditions and elevations.

Still, if only from an economic perspective, it’s clear that repeatedly capturing millions of miles of road imagery to teach cars to drive autonomously isn’t going to be the winning mapping approach for AVs; it’s simply not feasible for most organizations.

Whether it’s Ort’s work, or end-to-end machine learning, or some other technology that wins the navigation race for autonomous vehicles, it’s important to remember that maps are first and foremost a visual tool to aid sighted people in figuring out where to go.

Like humans, a car may not necessarily need to “see” to get to where it’s going—it just needs to sharpen its other senses.

Please like, share and tweet this article.

Pass it on: Popular Science

At Google I/O 2018, Expect All AI All The Time

 

For Google, its annual I/O developer conference isn’t just a place to show off the next major version of Android and get coders excited about building apps.

Though that stuff is a big part of the show, I/O is also a chance for Google to flex its AI muscle and emphasize its massive reach at a time when every major tech company is racing to best each other in artificial intelligence.

And with its emphasis on cloud-based software and apps, I/O is the most important event of the year for Google—at least as long as its hardware efforts are still such a small fraction of its overall business.




Android P Is For … Probably?

Just like every year, Android will be front and center at the 2018 edition of I/O. It’s almost a guarantee that we’ll see a new version of Android P, which was first released as a developer preview in March.

So far, we know that a lot of the changes from Android O to P have been visual in nature; notifications have been redesigned, and the quick settings menu has gotten a refresh.

There’s also been a lot of chatter around “Material Design 2,” the next iteration of Google’s unifying design language.

Material Design was first unveiled at I/O four years ago, so it’s quite possible we’ll see the next version’s formal debut.

Newly redesigned Chrome tabs have already been spotted as part of a possible Material Design refresh, along with references to a “touch optimized” Chrome.

Talkin’ About AI

But artificial intelligence, more than Android and Chrome OS, is likely to be the thread that weaves every platform announcement at I/O together.

Whether that’s in consumer-facing apps like Google Assistant and Google Photos, machine learning frameworks like TensorFlow, or even keynote mentions of AI’s impact on jobs.

Speaking of Google Assistant, late last week Google shared some notable updates around the voice-powered digital helper, which now runs on more than 5,000 devices and even allows you to purchase Fandango tickets with your voice.

That’s all well and good, but one of the most critical aspects of any virtual assistant (in addition to compatibility) is how easy it is to use.

It wouldn’t be entirely surprising to see Google taking steps to make Assistant that much more accessible, whether that’s through software changes, like “slices” of Assistant content that show up outside of the app, or hardware changes that involve working with OEM partners to offer more quick-launch solutions.

Google’s day-one keynote kicks off today, Tuesday May 8, at 10 am Pacific time.

Please like, share and tweet this article.

Pass it on: Popular Science

Larry Page’s Kitty Hawk Unveils Autonomous Flying Taxis

Autonomous flying taxis just took one big step toward leaping off the pages of science fiction and into the real world, thanks to Google co-founder Larry Page’s Kitty Hawk.

The billionaire-backed firm has announced that it will begin the regulatory approval process required for launching its autonomous passenger-drone system in New Zealand, after conducting secret testing under the cover of another company called Zephyr Airworks.

The firm’s two-person craft, called Cora, is a 12-rotor plane-drone hybrid that can take off vertically like a drone, but then uses a propeller at the back to fly at up to 110 miles an hour for around 62 miles at a time.




The all-electric Cora flies autonomously up to 914 metres (3,000ft) above ground, has a wingspan of 11 metres, and has been eight years in the making.

Kitty Hawk is personally financed by Page and is being run by former Google autonomous car director Sebastian Thrun. The company is trying to beat Uber and others to launching an autonomous flying taxi service.

The company hopes to have official certification and to have launched a commercial service within three years, which will make it the first to do so.

But its achievement will also propel New Zealand to the front of the pack as the first country to devise a certification process.

The country’s aviation authority is well respected in the industry, and is seen as pioneering.

Kitty Hawk is already working on an app and technology to allow customers to hail flying taxis as they would an Uber, but whether Page, Thrun and their team will actually be able to deliver within three years remains to be seen.

Many companies have promised great leaps but failed to deliver meaningful progress towards a Jetsons-like future, from Uber’s Elevate to China’s Ehang.

Even if Kitty Hawk hits all its projected milestones and launches commercially, there’s then the matter of persuading people to actually use it.

Please like, share and tweet this article.

Pass it on: Popular Science

Google Clips: A Smart Camera That Doesn’t Make The Grade

Picture this: you’re hanging out with your kids or pets and they spontaneously do something interesting or cute that you want to capture and preserve.

But by the time you’ve gotten your phone out and its camera opened, the moment has passed and you’ve missed your opportunity to capture it.

That’s the main problem that Google is trying to solve with its new Clips camera, a $249 device available starting today that uses artificial intelligence to automatically capture important moments in your life.

Google says it’s for all of the in-between moments you might miss when your phone or camera isn’t in your hand.




It is meant to capture your toddler’s silly dance or your cat getting lost in an Amazon box without requiring you to take the picture.

The other issue Google is trying to solve with Clips is letting you spend more time interacting with your kids directly, without having a phone or camera separating you, while still getting some photos.

That’s an appealing pitch to both parents and pet owners alike, and if the Clips camera system is able to accomplish its goal, it could be a must-have gadget for them.

But if it fails, then it’s just another gadget that promises to make life easier, but requires more work and maintenance than it’s worth.

The problem for Google Clips is it just doesn’t work that well.

Before we get into how well Clips actually works, I need to discuss what it is and what exactly it’s doing because it really is unlike any camera you’ve used before.

At its core, the Clips camera is a hands-free automatic point-and-shoot camera that’s sort of like a GoPro, but considerably smaller and flatter.

It has a cute, unassuming appearance that is instantly recognizable as a camera, or at least an icon of a camera app on your phone.

Google, aware of how a “camera that automatically takes pictures when it sees you” is likely to be perceived, is clearly trying to make the Clips appear friendly, with its white-and-teal color scheme and obvious camera-like styling.

But among the people I showed the camera to while explaining what it’s supposed to do, “it’s creepy” was a common reaction.

One thing that I’ve discovered is that people know right away it’s a camera and react to it just like any other camera.

That might mean avoiding its view when they see it, or, like in the case of my three-year-old, walking up to it and smiling or picking it up.

That has made it tough to capture candids, since, for the Clips to really work, it needs to be close to its subject.

Maybe over time, your family would learn to ignore it and those candid shots could happen, but in my couple weeks of testing, my family hasn’t acclimated to its presence.

The Clips’ camera sensor can capture 12-megapixel images at 15 frames per second, which it then saves to its 16GB of internal storage that’s good for about 1,400 seven-second clips.

The battery lasts roughly three hours between charges.
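
A quick back-of-the-envelope check on those numbers (treating the full 16GB as usable, which it won’t quite be once the software takes its share):

```python
# Rough sanity check on the quoted specs: 16 GB of storage for about
# 1,400 seven-second clips implies the average clip size below.
storage_gb = 16
clips = 1_400
clip_seconds = 7

avg_clip_mb = storage_gb * 1024 / clips
print(f"~{avg_clip_mb:.1f} MB per clip")                   # ~11.7 MB
print(f"~{avg_clip_mb / clip_seconds:.1f} MB per second")  # ~1.7 MB per second of capture
```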

Included with the camera is a silicone case that makes it easy to prop the camera up almost anywhere or, yes, clip it to things. It’s not designed to be a body camera or to be worn.

Instead, it’s meant to be placed in positions where it can capture you in the frame as well.

There are other accessories you can buy, like a case that lets you mount the Clips camera to a tripod for more positioning options, but otherwise, using the Clips camera is as simple as turning it on and putting it where you want it.

Once the camera has captured a bunch of clips, you use the app to browse through them on your phone, edit them down to shorter versions, grab still images, or just save the whole thing to your phone’s storage for sharing and editing later.

The Clips app is supposed to learn based on which clips you save and deem “important” and then prioritize capturing similar clips in the future.

You can also hit a toggle to view “suggested” clips for saving, which is basically what the app thinks you’ll like out of the clips it has captured.

Google’s definitely onto something here. The idea is an admirable first step toward a new kind of camera that doesn’t get between me and my kids. But first steps are tricky — ask any toddler!

Usually, after you take your first step, you fall down. To stand back up, Google Clips needs to justify its price, the hassle of setting it up, and the fiddling between it and my phone.

It needs to reassure me that by trusting it and putting my phone away, I won’t miss anything important, and I won’t be burdened by having to deal with a lot of banal captures.

Otherwise, it’s just another redundant gadget that I have to invest too much time and effort into managing to get too little in return.

That’s a lot to ask of a tiny little camera, and this first version doesn’t quite get there. To live up to it all, Clips needs to be both a better camera and a smarter one.

Please like, share and tweet this article.

Pass it on: Popular Science

Megapixels Don’t Matter Anymore. Here’s Why More Isn’t Always Better.

For years, smartphone makers have been caught up in a megapixel spec race to prove that their camera is better than the next guy’s.

But we’ve finally come to a point where even the lower-end camera phones are packing more megapixels than they need, so it’s getting harder to differentiate camera hardware.

Without that megapixel crutch to fall back on, how are we supposed to know which smartphone has the best camera?

Well thankfully, there are several other important specs to look for in a camera, and it’s just a matter of learning which ones matter the most to you.




Why Megapixels Don’t Matter Anymore

The term “megapixel” actually means “one million pixels,” so a 12-megapixel camera captures images that are made up of 12,000,000 tiny little dots.

A larger number of dots (pixels) in an image means that the image has more definition and clarity, which is also referred to as having a higher resolution.

This might lead you to believe that a camera with more megapixels will take better pictures than a camera with fewer megapixels, but that’s not always the case.

The trouble is, we’ve reached a point where all smartphone cameras have more than enough megapixels.

For instance, a 1080p HD TV has a resolution of 2.1 megapixels, and even the highest-end 4K displays top out at 8.3 megapixels.

Considering that nearly every smartphone camera has a double-digit megapixel rating these days, your photos will be in a higher resolution than most screens can even display.

Simply put, you won’t be able to see any difference in resolution between pictures taken by two different smartphone cameras, because most screens you’ll be viewing them on aren’t capable of displaying that many megapixels.

Really, anything greater than 8.3 megapixels is only helpful for cropping. In other words, if your phone takes 12-megapixel photos, you can crop away roughly a third of the frame and the resolution will still be just as high as a 4K TV.
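
Here’s the arithmetic behind those figures, if you want to check it yourself:

```python
# How much of a 12 MP photo you can crop away and still match common display resolutions.
displays = {
    "1080p HD TV": 1920 * 1080,  # ~2.1 MP
    "4K TV":       3840 * 2160,  # ~8.3 MP
}
photo_mp = 12.0

for name, pixels in displays.items():
    display_mp = pixels / 1e6
    keep_fraction = display_mp / photo_mp
    print(f"{name}: {display_mp:.1f} MP -> can crop away "
          f"{(1 - keep_fraction) * 100:.0f}% of a {photo_mp:.0f} MP photo")
```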

Pixel Size Is the Real Difference Maker

The hot new number to judge your phone’s camera by is the pixel size. You’ll see this spec listed as a micron value, which is a number followed by the symbol “µm.”

A phone with a 1.4µm pixel size will almost always capture better pictures than one with a 1.0µm pixel size, thanks to physics.

If you zoomed in far enough on one of your photos, you could see the individual pixels, right? Well, each of those tiny little dots was captured by microscopic light sensors inside your smartphone’s camera.

These light sensors are referred to as “pixels” because, well, they each capture a pixel’s worth of light. So if you have a 12-megapixel camera, the actual camera sensor has twelve million of these light-sensing pixels.

Each of these pixels measures light particles called photons to determine the color and brightness of the corresponding pixel in your finished photo.

When a bright blue photon hits one of your camera’s light sensors, it tells your phone to make a dot with bright blue coloring.

Put twelve million of these dots together in their various brightness and colors, then you’ll end up with a picture.
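
Because each of those light-sensing pixels is, in effect, a tiny square collector, its light-gathering area grows with the square of the pixel size. A quick comparison (the pixel sizes are just illustrative values you’ll see on spec sheets):

```python
# Light-gathering area scales with the square of the pixel pitch, so a 1.4 micron
# pixel collects roughly twice the light of a 1.0 micron pixel, all else being equal.
pixel_sizes_um = [1.0, 1.22, 1.4, 1.55]   # illustrative spec-sheet values

baseline = 1.0
for size in pixel_sizes_um:
    area_ratio = (size / baseline) ** 2
    print(f"{size:.2f} um pixel -> {area_ratio:.2f}x the light of a {baseline:.1f} um pixel")
```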

A Little Aperture Goes a Long Way

The next key spec to look for is the camera’s aperture, which is represented as f divided by a number (f/2.0, for example).

Because of the “f divided by” setup, this is one of those rare specs where a smaller number is always better than a larger one.

To help you understand aperture, let’s go back to pixel size for a second.

If larger pixels mean your camera can collect more light particles to create more accurate photos, then imagine pixels as a bucket, and photons as falling rain.

The bigger the opening of the bucket (pixel), the more rain (photons) you can collect, right?

Well aperture is like a funnel for that bucket. The bottom of this imaginary funnel has the same diameter as the pixel bucket, but the top is wider—which means you can collect even more photons.

In this analogy, a wider aperture gives the photon bucket a wider opening, so it focuses more light onto your camera’s light-sensing pixels.
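
The same square law applies to the f-number: the light the lens lets through scales with one over the f-number squared, which is why a smaller number is better. A quick comparison of some typical phone apertures (illustrative values, not any particular phone):

```python
# Light admitted through the lens scales with 1 / (f-number squared), so a lower
# f-number means a brighter image for the same sensor and shutter speed.
apertures = [1.7, 1.8, 2.0, 2.2, 2.4]   # illustrative f-numbers

reference = 2.0
for f in apertures:
    relative_light = (reference / f) ** 2
    print(f"f/{f}: {relative_light:.2f}x the light of f/{reference}")
```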

Image Stabilization: EIS vs. OIS

With most spec sheets, you’ll see a camera’s image stabilization technology listed as either EIS or OIS. These stand for Electronic Image Stabilization and Optical Image Stabilization, respectively.

OIS is easier to explain, so let’s start with that one. Simply put, with this technology the camera sensor physically moves to compensate for any shaking while you’re holding your phone.

If you’re walking while you’re recording a video, for instance, each of your steps would normally shake the camera—but OIS ensures that the camera sensor remains relatively steady even while the rest of your phone shakes around it.

EIS, by contrast, does its stabilization in software: the camera captures a slightly wider view than it needs, then crops into each frame and shifts and stretches that crop to counteract the shake. In general, though, it’s always better to have a camera with OIS.

For one, the cropping and stretching can reduce quality and create a “Jello effect” in videos, but in addition to that, EIS has little to no effect on reducing blur in still photos.
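
If you’re curious what that cropping and stretching amounts to, here’s a toy sketch of the crop-and-shift idea behind EIS. Real systems use gyroscope data and per-row corrections; this just shifts a crop window against a pretend shake measurement, with arrays standing in for video frames.

```python
# Toy electronic stabilization: keep a crop window inside each frame and shift it
# to cancel the measured camera shake. Only illustrates the basic crop-and-shift
# idea -- real EIS uses gyro data and much finer corrections.
import numpy as np

def stabilize(frames, shakes, margin=40):
    """frames: list of HxW arrays; shakes: per-frame (dy, dx) camera motion in pixels."""
    stabilized = []
    for frame, (dy, dx) in zip(frames, shakes):
        h, w = frame.shape
        # Shift the crop window opposite to the shake, clamped to the frame edges.
        top = int(np.clip(margin - dy, 0, 2 * margin))
        left = int(np.clip(margin - dx, 0, 2 * margin))
        stabilized.append(frame[top:h - 2 * margin + top, left:w - 2 * margin + left])
    return stabilized

frames = [np.random.rand(720, 1280) for _ in range(3)]
shakes = [(0, 0), (12, -8), (-5, 20)]                  # pretend gyro/feature-tracking output
print([f.shape for f in stabilize(frames, shakes)])    # every output frame is 640x1200
```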

Now that you’ve got a better understanding about camera specs, have you decided which smartphone you’re going to buy next?

If you’re still undecided, you can use our smartphone-buyer’s flowchart at the following link, and if you have any further questions, just fire away in the comment section below.

Please like, share and tweet this article.

Pass it on: Popular Science