Tag: Google

How HTTPS Website Security Is Making the Internet Safer From Hackers

You may have noticed in your travels around the internet that your browser’s address bar occasionally turns green and displays a padlock—that’s HTTPS, or a secure version of the Hypertext Transfer Protocol, swinging into action.

This little green padlock is becoming vitally important as more and more of your online privacy is eroded.

Just because your ISP can now see which sites you browse doesn’t mean it has to know all the content you’re consuming.

Below is the rundown on HTTPS, so you can better understand this first and easiest line of defense against potential snoopers and hackers.

HTTP or the Hypertext Transfer Protocol is the universally-agreed-upon coding structure that the web is built on.




Hypertext is the basic idea of having plain text with embedded links you can click on; the Transfer Protocol is a standard way of communicating it.

When you see HTTP in your browser you know you’re connecting to a standard, run-of-the-mill website, as opposed to a different kind of connection, like FTP (File Transfer Protocol), which is often used by file storage databases.

The protocol before a web address tells your browser what to expect and how to display the information it finds. So what about the extra S in HTTPS?
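The scheme at the front of a URL is exactly what software reads to decide which protocol to speak. As a quick illustration, Python’s standard `urlparse` separates it out (the example URLs are, of course, placeholders):

```python
from urllib.parse import urlparse

# The scheme tells the client which protocol to use for the connection.
for url in ("http://example.com/page",
            "https://example.com/login",
            "ftp://files.example.com/archive.zip"):
    parts = urlparse(url)
    print(f"{parts.scheme:5s} -> host {parts.netloc}, path {parts.path}")
```

Browsers do the same parsing internally before deciding how to fetch and display what they find.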

The S is simple. It means Secure.

The security was originally provided by Secure Sockets Layer (SSL), which has since been superseded by a broader security protocol called Transport Layer Security (TLS).

TLS is one of the two layers that make up HTTPS, the other being traditional HTTP.

TLS works to verify that the website you’ve loaded up is actually the website you wanted to load up—that the Facebook page you see before you really is Facebook and not a site pretending to be Facebook.

On top of that, TLS encrypts all of the data you’re transmitting (like apps such as Signal or WhatsApp do).

Anyone who happens across the traffic coming to or from your computer when it’s connected to an HTTPS site can’t make sense of it—they can’t read it or alter its contents.
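Both the identity check and the encryption are set up during the TLS handshake. As an illustration, Python’s standard `ssl` module performs both by default; this sketch (which needs network access to actually run against a host) opens a verified connection and reports what was negotiated:

```python
import socket
import ssl

def inspect_tls(host: str, port: int = 443) -> dict:
    """Open a TLS connection and return the negotiated details.

    ssl.create_default_context() enables certificate verification and
    hostname checking, which together form the identity check
    described above; the connection itself is encrypted.
    """
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            return {
                "protocol": tls.version(),   # e.g. "TLSv1.3"
                "cipher": tls.cipher()[0],
                "subject": dict(x[0] for x in tls.getpeercert()["subject"]),
            }
```

If the certificate doesn’t match the hostname, or isn’t signed by a trusted authority, `wrap_socket` raises an error instead of connecting, which is the browser’s warning page in miniature.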

So if someone wants to catch the username and password you just sent to Google, or wants to throw up a webpage that looks like Instagram but isn’t, or wants to jump in on your email conversations and change what’s being said, HTTPS helps to stop them.

It’s obvious why login details, credit card information, and the like are better encrypted than sent in plain text—encryption makes them much harder to steal.

In 2017, if you come across a shopping or banking site, or any webpage that asks you to log in, it should have HTTPS enabled; if not, take your business elsewhere.

If you’re worried about whether your connection to the web really is secure inside a mobile app, check the details of the app listing and contact the developer directly.

So if HTTPS is so great, why not use it for everything? That is, in fact, the plan.

There is now a big push to get HTTPS used as standard, but because it previously required extra processing power and bandwidth, it hasn’t always made sense for pages where you’re not entering or accessing any sensitive information.

The latest HTTPS iterations remove most of these drawbacks, so we should see it deployed more widely in the future—although converting old, large sites can take a lot of time.

If you want to stay as secure as possible, the HTTPS Everywhere extension for Chrome and Firefox makes sure you’re always connected to the HTTPS version of a site, where one has been made available, and fixes a few security bugs in the HTTPS approach at the same time.

It’s well worth installing and using, particularly on public Wi-Fi, where unwelcome eavesdroppers are more likely to be trying to listen in.

HTTPS isn’t 100 percent unbeatable—no security measure is—but it makes it much more difficult for hackers to spy on and manipulate sensitive data as it travels between your computer and the web at large, as well as adding an extra check to verify the identity of the sites you visit.

It’s a vital part of staying safe on the web.

Please like, share and tweet this article.

Pass it on: Popular Science

Google’s First Mobile Chip Is An Image Processor Hidden In The Pixel 2

One thing that Google left unannounced during its Pixel 2 launch event on October 4th is being revealed today: it’s called the Pixel Visual Core, and it is Google’s first custom system-on-a-chip (SOC) for consumer products.

You can think of it as a very scaled-down and simplified, purpose-built version of Qualcomm’s Snapdragon, Samsung’s Exynos, or Apple’s A series chips. The purpose in this case?

Accelerating the HDR+ camera magic that makes Pixel photos so uniquely superior to everything else on the mobile market.

Google plans to use the Pixel Visual Core to make image processing on its smartphones much smoother and faster, but not only that: the Mountain View company also plans to use it to open up HDR+ to third-party camera apps.




The coolest aspect of the Pixel Visual Core might be that it’s already in Google’s devices. The Pixel 2 and Pixel 2 XL both have it built in, but it lies dormant until activation at some point “over the coming months.”

It’s highly likely that Google didn’t have time to finish optimizing the implementation of its brand-new hardware, so instead of yanking it out of the new Pixels, it decided to ship the phones as they are and then flip the Visual Core activation switch when the software becomes ready.

In that way, it’s a rather delightful bonus for new Pixel buyers.

The Pixel 2 devices are already much faster at processing HDR shots than the original Pixel, and when the Pixel Visual Core is live, they’ll be faster and more efficient.

Looking at the layout of Google’s chip, which is dubbed an Image Processing Unit (IPU) for obvious reasons, we see something sort of resembling a regular 8-core SOC.

Technically, there’s a ninth core, in the shape of the power-efficient ARM Cortex-A53 CPU in the top left corner.

But the important thing is that each of those eight processors that Google designed has been tailored to handle HDR+ duties, resulting in HDR+ performance that is “5x faster and [uses] less than 1/10th the energy” of the current implementation, according to Google.

This is the sort of advantage a company can gain when it shifts to purpose-specific hardware rather than general-purpose processing.

Google says that it will enable Pixel Visual Core as a developer option in its preview of Android Oreo 8.1, before updating the Android Camera API to allow access to HDR+ for third-party camera devs.

Obviously, all of this tech is limited strictly to the Pixel 2 generation, ruling out current Pixel owners and other Android users.

As much as Google likes to talk about enriching the entire Android ecosystem, the company is evidently cognizant of how much of a unique selling point its Pixel camera system is, and it’s working hard to develop and expand the lead that it has.

As a final note, Google’s announcement today says that HDR+ is only the first application to run on the programmable Pixel Visual Core, and with time we should expect to see more imaging and machine learning enhancements being added to the Pixel 2.

Please like, share and tweet this article.

Pass it on: Popular Science

Top 5 Ways To Find Better Answers Online (That Aren’t Google)

You can Google just about anything, but it’s not always your best resource for finding the exact answer to what you want. Here’s a look at our top five tools for finding better answers online.

5. Wolfram Alpha

You can’t ask Wolfram Alpha just anything, but you can ask it for information you can’t find anywhere else. It’s full of information and calculations that no other search engine can provide.

For example, you can use Wolfram Alpha to calculate activity-specific calorie burn, analyze illness symptoms and generic drug options, and make sense of your confusing family relationships.

For more ideas, check out our full Wolfram Alpha coverage, or just play around with it yourself.

 

4. Wikipedia

You might be thinking, “duh.” It’s pretty much impossible to keep Wikipedia off a top-five list about finding better answers online.

Wikipedia contains an enormous wealth of information and it ought to be your primary destination when you want quick information on a given topic.

While you can’t ask it a specific question, if you know what you’re looking for you’re bound to find it on Wikipedia. It doesn’t have an article on everything, but if it did there would be no need for this Top 5.

3. Ask Reddit

For the more casual and fun questions, you have Ask Reddit. If you’re not familiar with Reddit, it’s a social news site with a dedicated user base.

Those users make Ask Reddit a good tool to get answers, but most of the questions you find tend to fall on the light side of things.

You can learn how to cope with putting down your old cat, combat your extreme paranoia, and find out how many people feel Christmas isn’t worth it anymore, making the tool more interesting to read when you’re bored than the best tool to find the answer you’re looking for.

In the event you have a question that fits the topics floating around Ask Reddit, however, you’ll have plenty of people to join in and answer.

2. Duck Duck Go

Duck Duck Go is a clever search engine that provides tons of shortcuts to help you find what you’re looking for very quickly. The idea is to get you your information without the need to click around too much.

Need a color swatch for a particular HEX value? Just enter the HEX value in Duck Duck Go and you’ll get it. It can even help you quickly generate a strong, random password.
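Duck Duck Go’s password shortcut aside, generating a strong random password yourself is only a few lines in most languages. Here is a sketch using Python’s `secrets` module, which draws from the operating system’s cryptographic random source:

```python
import secrets
import string

def strong_password(length: int = 16) -> str:
    """Generate a random password from letters, digits and punctuation,
    using a cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(strong_password())
```

Note the use of `secrets` rather than the `random` module; the latter is predictable and unsuitable for passwords.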

Although search, in general, is pretty fast, Duck Duck Go has a tool set to help you get answers and information as quickly as possible.

1. Aardvark

Aardvark lets you ask just about any question and receive an answer in under a minute—for free. Aardvark aims to keep the process simple by keeping your questions short and sweet.

You ask a question that’s about the length of a tweet and you get an answer that isn’t much longer from helpers whose interests match that of the question.

In return, you’re encouraged to answer questions that fall into your area of expertise.





Aardvark is possible because of this information exchange and generally works very well, although it did fail to find a good soft-serve ice cream shop in Los Angeles.

I guess I’ll have to settle for Tasti D-Lite, whenever it finally shows up. But why is Aardvark number one? Because it effectively does the same thing as Twitter, but without the need for a base of followers.

It does a fantastic job at matching your question with relevant, helpful people and it does it fast.

Even though it couldn’t do the impossible and find soft-serve ice cream in Los Angeles, it’s probably the best question and answer service you could ask for.

Please like, share and tweet this article.

Pass it on: Popular Science

What Is Alphabet’s Chronicle?

Google’s parent company Alphabet has launched Chronicle, a business that will specialize in applying machine learning to cyber security.

Chief executive Stephen Gillett says that the company will be split in two. On the one hand it will provide a cyber security and analytics platform that will target enterprise customers and help them “better manage and understand their own security-related data“.

The other side of the business will be VirusTotal, which is a malware intelligence service Google picked up in 2012. This will continue to operate as it has been doing.




For some years now a slew of security vendors have touted machine learning as their key differentiator against rivals on the market. There is an aspect of snake oil to some of it.

But there are also companies like the darlings of the infosec market at the moment, Darktrace, that are using genuine machine learning for threat detection.

It’s no secret that Alphabet and Google are at the frontier of machine learning and artificial intelligence.

Writing in Medium, Chronicle CEO Stephen Gillett says that while the company will be an independent business under the Google umbrella, it will “have the benefit of being able to consult the world-class experts in machine learning and cloud computing” at Alphabet.

Where did Chronicle come from?

Chronicle emerged in 2016 from the labs of Alphabet’s mysteriously named X, Google’s incubation hub for ‘moonshot’ projects, and it also incorporates VirusTotal, which Google bought in 2012.

CEO Stephen Gillett began working at Google in 2015 and has a history of working at cyber security companies.

Other people in leadership roles at Chronicle include Mike Wiacek and Shapor Naghibzadeh, who together have more than 20 years of security experience at Google.

Bernardo Quintero of VirusTotal will continue to work with Chronicle.

Please like, share and tweet this article.

Pass it on: Popular Science

Google’s $30 Million Moon Race Ends With No Winner

It’s official: The $30 million Google Lunar X Prize is no more.

“After close consultation with our five finalist Google Lunar X Prize teams over the past several months, we have concluded that no team will make a launch attempt to reach the moon by the March 31, 2018, deadline,” X Prize founder and chairman Peter Diamandis said in a joint statement today (Jan. 23) with Marcus Shingles, the organization’s CEO.

“This literal ‘moonshot’ is hard, and while we did expect a winner by now, due to the difficulties of fundraising, technical and regulatory challenges, the grand prize of the $30M Google Lunar X Prize will go unclaimed,” they added.




The acknowledgement confirms news broken yesterday by CNBC.

The Google Lunar X Prize (GLXP) was announced in 2007, with the stated aim of encouraging commercial spaceflight and exploration.

The contest challenged privately funded teams to put a robotic spacecraft on the moon, move the craft 1,640 feet (500 meters), and have it beam high-definition photos and video back to Earth.

The first team to do this would win the $20 million grand prize. The second-place team would get $5 million, and an additional $5 million was available for various special accomplishments, bringing the total purse to $30 million.

The GLXP has awarded more than $6 million so far, for various milestones that teams have achieved. Milestone prizes would count toward, and not boost, the total purse taken home by first- or second-place teams.

So, the money given out by the GLXP would not have topped $30 million.

 

The deadline was originally the end of 2012, but GLXP representatives pushed it back several times, finally to March 31 of this year.

Google apparently did not want to grant another extension — but that doesn’t necessarily mean the moon race is completely off.

“X Prize is exploring a number of ways to proceed from here,” Diamandis and Shingles said in today’s statement.

“This may include finding a new title sponsor to provide a prize purse following in the footsteps of Google’s generosity, or continuing the Lunar X Prize as a noncash competition where we will follow and promote the teams and help celebrate their achievements.”

Several dozen teams threw their hats into the ring over the course of the decade-long GLXP competition, but that pool was finally whittled down to five finalists: Florida-based Moon Express, Japan’s Team Hakuto, SpaceIL from Israel, India’s Team Indus and international outfit Synergy Moon.

Several of these teams have stressed that the GLXP, while a helpful spur, was not the main reason for their existence.

And Moon Express CEO Bob Richards wrote the following words earlier this month, as part of an op-ed for Space News: “The competition was a sweetener in the landscape of our business case, but it’s never been the business case itself. 

“We continue to focus on our core business plans of collapsing the cost of access to the moon, our partnership with NASA, and our long-term vision of unlocking lunar resources for the benefit of life on Earth and our future in space.”

Team Hakuto may yet have a lunar legacy as well: The company is run by the Tokyo-based startup iSpace, which also plans to exploit lunar resources. iSpace recently raised $90 million in investment funding to help it achieve this goal.

“We are inspired by the progress of the Google Lunar X Prize teams and will continue to support their journey, one way or another, and will be there to help shine the spotlight on them when they achieve that momentous goal,” Diamandis and Shingles said in today’s statement.

Please like, share and tweet this article.

Pass it on: New Scientist

Microsoft’s Cortana Falls Behind Alexa And Google Assistant at Consumer Electronics Show

The annual Consumer Electronics Show is always a good opportunity to get an early look at devices coming throughout the year.

It’s also a reasonable gauge on the health of an ecosystem, or emerging platforms. At this year’s CES it was all about Alexa vs. Google Assistant.

If you were hoping to see more Cortana-powered devices, they were nowhere to be found. With the exception of the Cortana-powered thermostat (announced last year), no new Cortana devices were unveiled at CES this year.

In comparison, Alexa is arriving on headphones, smartwatches, cars, and many more TVs this year, and will even be able to directly control ovens and microwaves.




Google introduced a new Smart Display platform with its Assistant, and Google Assistant is also coming to more TVs, headphones, and even Android Auto.

Google made it clear it was ready to fight Alexa, but Microsoft stayed silent.

Microsoft’s Cortana digital assistant has been largely limited to Windows 10 PCs, after originally launching on Windows Phones back in 2014.

Microsoft may have missed the hardware scenario for a dedicated Cortana device, but the company has invested in pushing Cortana on Windows 10.

Despite a claim of 141 million monthly Cortana users, Amazon looks set to challenge Microsoft even in this area.

HP, Lenovo, Asus, and Acer all plan on integrating an Alexa app on upcoming Windows 10 machines this year, providing a challenge to Cortana on the desktop.

Microsoft has been convincing PC makers to integrate far-field microphones in their devices, and now Amazon is tempting them to use that hardware for Alexa.

Microsoft has previously shown how Cortana can work in speakers, cars, fridges, toasters, and thermostats, but we’ve only seen one dedicated Cortana speaker so far and a single thermostat.

With a lack of hardware supporting Cortana, Microsoft is instead promising that more will come in time.

In fact, Microsoft says it’s playing the long game with Cortana, something it also unsuccessfully attempted with Windows Phone.

“It’s a long journey to making a real assistant that you can communicate with over a longer period of time to really be approachable and interesting and better than the alternative,” explains Andrew Shuman, corporate vice president of Cortana engineering, in an interview with GeekWire.

“That is our journey, to make some great experiences that shine through, and recognize that long haul.”

Microsoft has announced new partnerships with Ecobee, Geeni, Honeywell, IFTTT, LIFX, and TP-Link, but we now need to see the hardware evidence of Microsoft’s long haul.

While Alexa and Google Assistant appear on more and more devices, Cortana is being left behind. Microsoft’s Cortana isn’t the only digital assistant being left behind, though.

Apple’s Siri, which debuted long before Alexa, Cortana, and Google Assistant, has remained firmly on the company’s iPhone devices.

Apple has been pushing its HomeKit platform instead of Siri, but there are signs this isn’t working for Apple’s ecosystem.

As analyst Ben Bajarin points out, Apple usually has an indirect presence at CES, but this year it was Alexa and Google Assistant dominating the platform wars.

Apple delayed its HomePod speaker to “early 2018,” and we’re waiting to see if the company will ever create a Siri platform outside of its own devices.

While HomeKit has broad support for smart home devices, it’s clear that millions of people are using voice-activated smart speakers to control smart home devices, music playback, and access online information like weather forecasts.

It’s a segment that’s growing, and both Apple and Microsoft are far behind the competition.

Please like, share and tweet this article.

Pass it on: New Scientist

Google’s DeepMind AI Fakes Some Of The Most Realistic Human Voices Yet

Google’s DeepMind artificial intelligence has produced what could be some of the most realistic-sounding machine speech yet.

WaveNet, as the system is called, generates voices by sampling real human speech and directly modeling audio waveforms based on it, as well as its previously generated audio.

In Google’s tests, both English and Mandarin Chinese listeners found WaveNet more realistic than other types of text-to-speech programs, although it was less convincing than actual human speech.




If that weren’t enough, it can also play the piano rather well.

Text-to-speech programs are increasingly important for computing, as people begin to rely on bots and AI personal assistants like Apple’s Siri, Microsoft’s Cortana, Amazon’s Alexa, and the Google Assistant.

If you ask Siri or Cortana a question, though, they’ll reply with actual recordings of a human voice, rearranged and combined in small pieces.

This is called concatenative text to speech, and as one expert puts it, it’s a little like a ransom note.

The results are often fairly realistic, but as Google writes, producing a new audio persona or tone of voice requires having an actor record every possible sound in a database.

The alternative is parametric text to speech — building a completely computer-generated voice, using coded rules based on grammar or mouth sounds.

Parametric voices don’t need source material to produce words. But the results, at least in English, are often stilted and robotic.

Google’s system is still based on real voice input. But instead of chopping up recordings, it learns from them, then independently creates its own sounds in a variety of voices.
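Under the hood, Google’s paper describes WaveNet as predicting raw audio one sample at a time. To keep each prediction tractable, 16-bit samples are first compressed to 256 discrete levels using μ-law companding, turning each step into a 256-way classification. A sketch of that encoding:

```python
import numpy as np

def mu_law_encode(x: np.ndarray, mu: int = 255) -> np.ndarray:
    """Compress waveform samples in [-1, 1] to mu+1 discrete levels:
    f(x) = sign(x) * ln(1 + mu*|x|) / ln(1 + mu)."""
    companded = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return ((companded + 1) / 2 * mu + 0.5).astype(np.int64)

def mu_law_decode(q: np.ndarray, mu: int = 255) -> np.ndarray:
    """Invert the companding, mapping integer levels back to [-1, 1]."""
    companded = 2 * q.astype(np.float64) / mu - 1
    return np.sign(companded) * ((1 + mu) ** np.abs(companded) - 1) / mu
```

Because the companding is logarithmic, quiet sounds keep fine resolution while loud ones are coarsely quantized, matching how human hearing works.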

Granted, there’s already plenty of generative music, and it’s not nearly as complicated as making speech that humans will recognize as their own.

On a scale from 1 (not realistic) to 5 (very realistic), listeners in around 500 blind tests rated WaveNet at 4.21 in English and 4.08 in Mandarin.

While even human speech didn’t get a perfect 5, it was still higher, at 4.55 in English and 4.21 in Mandarin. On the other hand, WaveNet outperformed other methods by a wide margin.

Please like, share and tweet this article.

Pass it on: New Scientist

How To Get Internet To Isolated Puerto Rico? With Balloons.

More than one month after Hurricane Maria decimated Puerto Rico and the U.S. Virgin Islands, cell phone communication and connection to the internet remain sorely lacking.

Enter Project Loon, the internet-beaming balloons from X, the “moonshot factory” run by Google’s parent company, Alphabet, which have provided a huge boost to getting the affected U.S. territories back online.

The balloons launched from the Nevada desert over the weekend and traveled the 3,500 miles by sky to reach the stratosphere over Puerto Rico. Algorithms are keeping them in position where the need is greatest.
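X hasn’t published those algorithms, but the core idea is that a balloon can’t propel itself sideways; it can only rise or sink to catch winds blowing in different directions at different altitudes. Here is a toy sketch of that choice, with entirely made-up wind data:

```python
import math

# Hypothetical wind field: (altitude_km, wind vector (east_kmh, north_kmh)).
WIND_LAYERS = [
    (18.0, (40.0, 5.0)),
    (19.0, (-10.0, 20.0)),
    (20.0, (-30.0, -15.0)),
]

def best_layer(position, target, layers=WIND_LAYERS):
    """Choose the altitude whose wind vector best points from
    `position` toward `target` (maximum dot product with the unit
    vector toward the target)."""
    dx, dy = target[0] - position[0], target[1] - position[1]
    dist = math.hypot(dx, dy)
    ux, uy = dx / dist, dy / dist
    return max(layers, key=lambda layer: layer[1][0] * ux + layer[1][1] * uy)[0]
```

The real system reportedly layers machine-learned wind forecasts on top of this kind of decision, re-planning continuously as the winds shift.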




At least 66 percent of cellular sites were out of service in Puerto Rico and 55.4 percent were out of service on the U.S. Virgin Islands, according to a status report released on Monday by the FCC.

“This is the first time we have used our new machine learning powered algorithms to keep balloons clustered over Puerto Rico, so we’re still learning how best to do this,” a blog post from X said.

“As we get more familiar with the constantly shifting winds in this region, we hope to keep the balloons over areas where connectivity is needed for as long as possible.”

X received permission from the FCC earlier this month to deploy the balloons 12.5 miles over the ground in Puerto Rico. However, deploying them and bringing connectivity wasn’t exactly simple.

Earlier this year, the moonshot factory had success connecting people in Peru during a time of torrential rain and flooding.

In that case, X had an advantage in rapidly getting Peruvians connected because it had already been working with a local carrier on testing the technology.

But this time, X had to quickly work with partners to integrate Loon into their networks, ensuring the system would work once it was deployed. X is working with AT&T in Puerto Rico to deploy internet to the hardest hit parts of the island.

That means some people on the ground with LTE-enabled devices will get basic connectivity, enough to send texts and emails and get some internet access.

Loon is still a work in progress, but having it up and running in Puerto Rico could potentially allow X to work out any potential snags.

“Project Loon is still an experimental technology and we’re not quite sure how well it will work,” X freely acknowledged in its blog post.

“But we hope it helps get people the information and communication they need to get through this unimaginably difficult time.”

Please like, share and tweet this article.

Pass it on: Popular Science

Google’s Getting Serious About Building Its Own iPhone

Google unveiled its first custom-designed smartphone chip on Tuesday, the Pixel Visual Core, which is used in its new Pixel 2 and Pixel 2 XL smartphones.

The Pixel Visual Core enables smartphones to take better pictures using HDR+, a technology that can take clear pictures even if there’s a lot of brightness and darkness in the same shot.




One example might be taking a picture of a shadowy skyscraper against a bright blue sky.

With HDR+, you’ll be able to capture both the skyscraper and the blue sky, without either washing out because parts of the image are too bright or too dark.
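Google hasn’t published HDR+’s internals in full, but it has described the approach as merging a burst of deliberately underexposed frames. A toy sketch of that merge step (ignoring the frame alignment and motion handling a real pipeline needs):

```python
import numpy as np

def merge_burst(frames):
    """Average a burst of noisy, underexposed frames (values in [0, 1]).

    Averaging N frames cuts random noise by roughly sqrt(N), while the
    short exposures keep highlights from clipping; a simple gamma tone
    map then lifts the underexposed result.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    merged = stack.mean(axis=0)
    return np.clip(merged, 0.0, 1.0) ** (1 / 2.2)
```

The gist is that the shadows can be brightened after the fact because the averaged frames contain far less noise than any single short exposure.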

While the chip exists in current phones, it isn’t yet activated, but will be in a future software release.

The Pixel 2 and Pixel 2 XL aren’t the first smartphones to offer HDR support, but Google is trying to make its photos the best using the new processor.

Google said that the Pixel Visual Core will be accessible by camera applications created by other developers, not just the built-in camera app, and that it plans to activate access to the core through software updates “in the coming months.”

Please like, share and tweet this article.

Pass it on: Popular Science

Self-Driving Cars Let You Choose Who Survives In A Crash

One of the issues of self-driving vehicles is legal liability for death or injury in the event of an accident.

If the car maker programs the car so the driver has no choice, it is likely the company could be sued over the car’s actions.

One way around this is to shift liability to the car owner by allowing them to determine a set of values or options in the event of an accident.

People are likely to want to have the option to choose how their vehicle behaves, both in an emergency and in general, so it seems the issue of adjustable ethics will become real as robotically controlled vehicles become more common.

With self-driving vehicles already legal to drive on public roads in a growing number of US states, the trend is spreading around the world. The United Kingdom will allow these vehicles from January 2015.

Before there is widespread adoption, though, people will need to be comfortable with the idea of a computer being in full control of their vehicle.




Much progress towards this has been made already.

A growing number of cars, including mid-priced Fords, have an impressive range of accident-avoidance and driver-assist technologies like adaptive cruise control, automatic braking, lane-keeping and parking assist.

People who like driving for its own sake will probably not embrace the technology.

But there are plenty of people who already love the convenience, just as they might also opt for automatic transmission over manual.

After almost 500,000km of on-road trials in the US, Google’s test cars have not been in a single accident while under computer control.

Computers have faster reaction times and do not get tired, drunk or impatient. Nor are they given to road rage.

But as accident-avoidance and driver-assist technologies become more sophisticated, some ethical issues are raising their heads.

The question of how a self-driven vehicle should react when faced with an accident in which every option leads to some number of deaths was raised earlier this month.

This is an adaptation of the “trolley problem” that ethicists use to explore the dilemma of sacrificing an innocent person to save multiple innocent people; pragmatically choosing the lesser of two evils.
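No car maker has published such a policy; purely as an illustration, an “adjustable ethics” setting could look like a weighted cost over hypothetical harm estimates, with the owner-chosen weights doing the pragmatic lesser-of-two-evils arithmetic. All names and numbers below are invented:

```python
# Each option: (maneuver name, estimated probability of harm to
# occupants, estimated probability of harm to bystanders).
def choose_maneuver(options, occupant_weight=1.0, bystander_weight=1.0):
    """Return the maneuver with the lowest weighted expected harm."""
    def expected_harm(option):
        name, p_occupant_harm, p_bystander_harm = option
        return (occupant_weight * p_occupant_harm
                + bystander_weight * p_bystander_harm)
    return min(options, key=expected_harm)[0]

options = [
    ("brake hard",   0.2, 0.3),
    ("swerve left",  0.6, 0.05),
    ("swerve right", 0.1, 0.5),
]
print(choose_maneuver(options))                        # equal weights
print(choose_maneuver(options, bystander_weight=5.0))  # prioritize bystanders
```

The point of the sketch is that changing a single weight changes which maneuver is “right,” which is exactly why who gets to set those weights, the owner, the car maker, or a regulator, is the contested question.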

An astute reader will point out that, under normal conditions, the car’s collision-avoidance system should have applied the brakes before it became a life-and-death situation.

That is true most of the time, but with cars controlled by artificial intelligence (AI), we are dealing with unforeseen events for which no design currently exists.

Let’s say the car maker is successful in deflecting liability. In that case, the user becomes solely responsible whether or not they have a well-considered code of ethics that can deal with life-and-death situations.

Code of ethics or not, a recent survey found that 44% of respondents believe they should have the option to choose how the car will behave in an emergency.

About 33% thought that government law-makers should decide. Only 12% thought the car maker should decide the ethical course of action.

In the view of ethicist Patrick Lin, it falls to the car makers then to create a code of ethical conduct for robotic cars.

This may well be good enough. If it is not, then government regulations can be introduced, including laws that limit a car maker’s liability in the same way that legal protection for vaccine makers was introduced because vaccination is in the public interest.

In the end, are not the tools we use, including the computers that do things for us, just extensions of ourselves? If that is so, then we are ultimately responsible for the consequences of their use.

Please like, share and tweet this article.

Pass it on: New Scientist