
MIT Invented A Tool That Allows Driverless Cars To Navigate Rural Roads Without A Map

Google has spent the last 13 years mapping every corner and crevice of the world.

Car makers haven’t got nearly as long a lead time to perfect the maps that will keep driverless cars from sliding into ditches or hitting misplaced medians if they want to meet their optimistic deadlines.

This is especially true in rural areas, where mapping efforts tend to come last because demand is smaller than in cities.

It’s also a more complicated task, due to a lack of infrastructure (i.e. curbs, barriers, and signage) that computers would normally use as reference points.

That’s why a student at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) is developing new technology, called MapLite, that eliminates the need for maps in self-driving car technology altogether.




This could more easily enable a fleet-sharing model that connects carless rural residents and would facilitate intercity trips that run through rural areas.

In a paper posted online on May 7 by CSAIL and project partner Toyota, 30-year-old PhD candidate Teddy Ort—along with co-authors Liam Paull and Daniela Rus—detail how using LIDAR and GPS together can enable self-driving cars to navigate on rural roads without having a detailed map to guide them.

The team was able to drive down a number of unpaved roads in rural Massachusetts and reliably scan the road for curves and obstacles up to 100 feet ahead, according to the paper.

“Our method makes no assumptions about road markings and only minimal assumptions about road geometry,” wrote the authors in their paper.
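The full MapLite pipeline is considerably more involved, but its core observation, that a smooth road surface shows far less height variation in a LIDAR scan than the grass and brush beside it, can be sketched in a few lines. The function, threshold, and synthetic profile below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def estimate_road_extent(height_variance, threshold=0.02):
    """Return (start, end) indices of the widest contiguous run of
    low-variance points in a cross-road LIDAR height profile.

    A paved or packed road surface returns low height variance;
    vegetation at the edges returns high variance.
    """
    smooth = height_variance < threshold  # boolean mask of road-like points
    best_start, best_len = 0, 0
    run_start, run_len = 0, 0
    for i, is_smooth in enumerate(smooth):
        if is_smooth:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len > best_len:
                best_start, best_len = run_start, run_len
        else:
            run_len = 0
    return best_start, best_start + best_len

# Synthetic profile: rough shoulder, smooth road, rough shoulder.
profile = np.array([0.30, 0.25, 0.005, 0.008, 0.006, 0.004, 0.28, 0.31])
print(estimate_road_extent(profile))  # (2, 6): the road spans indices 2-5
```

In the real system, a per-scan estimate like this would be fused with coarse GPS waypoints so the car can steer between the detected edges without a prior map.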

Once the technology is perfected, proponents argue that autonomous cars could also help improve safety on rural roads by reducing the number of impaired and drowsy drivers, eliminating speeding, and detecting and reacting to obstacles even on pitch-black roads.

Ort’s algorithm isn’t commercializable yet; he hasn’t yet tested his algorithm in a wide variety of road conditions and elevations.

Still, if only from an economic perspective, it's clear that repeatedly capturing millions of miles of roads on camera to train cars to drive autonomously isn't going to be the winning mapping technology for AVs; it's simply not feasible for most organizations.

Whether it’s Ort’s work, or end-to-end machine learning, or some other technology that wins the navigation race for autonomous vehicles, it’s important to remember that maps are first and foremost a visual tool to aid sighted people in figuring out where to go.

Like humans, a car may not necessarily need to “see” to get to where it’s going—it just needs to sharpen its other senses.

Please like, share and tweet this article.

Pass it on: Popular Science

At Google I/O 2018, Expect All AI All The Time


For Google, its annual I/O developer conference isn’t just a place to show off the next major version of Android and get coders excited about building apps.

Though that stuff is a big part of the show, I/O is also a chance for Google to flex its AI muscle and emphasize its massive reach at a time when every major tech company is racing to best each other in artificial intelligence.

And with its emphasis on cloud-based software and apps, I/O is the most important event of the year for Google—at least as long as its hardware efforts remain such a small fraction of its overall business.




Android P Is For … Probably?

Just like every year, Android will be front and center at the 2018 edition of I/O. It's almost a guarantee that we'll see an updated version of Android P, which was first released as a developer preview in March.

So far, we know that a lot of the changes from Android O to P have been visual in nature: notifications have been redesigned, and the quick settings menu has gotten a refresh.

There’s also been a lot of chatter around “Material Design 2,” the next iteration of Google’s unifying design language.

Material Design was first unveiled at I/O four years ago, so it's quite possible we'll see the next version's formal debut.

Newly redesigned Chrome tabs have already been spotted as part of a possible Material Design refresh, along with references to a “touch optimized” Chrome.

Talkin’ About AI

But artificial intelligence, more than Android and Chrome OS, is likely to be the thread that weaves every platform announcement at I/O together.

Whether that’s in consumer-facing apps like Google Assistant and Google Photos, cloud-based machine learning engines like TensorFlow, or even keynote mentions of AI’s impact on jobs.

Speaking of Google Assistant, late last week Google shared some notable updates around the voice-powered digital helper, which now runs on more than 5,000 devices and even allows you to purchase Fandango tickets with your voice.

That’s all well and fun, but one of the most critical aspects of any virtual assistant (in addition to compatibility) is how easy it is to use.

It wouldn’t be entirely surprising to see Google taking steps to make Assistant that much more accessible, whether that’s through software changes, like “slices” of Assistant content that shows up outside of the app, or hardware changes that involve working with OEM partners to offer more quick-launch solutions.

Google’s day-one keynote kicks off today, Tuesday May 8, at 10 am Pacific time.


Larry Page’s Kitty Hawk Unveils Autonomous Flying Taxis

Autonomous flying taxis just took one big step toward leaping off the pages of science fiction and into the real world, thanks to Google co-founder Larry Page's Kitty Hawk.

The billionaire-backed firm has announced that it will begin the regulatory approval process required for launching its autonomous passenger-drone system in New Zealand, after conducting secret testing under the cover of another company called Zephyr Airworks.

The firm’s two-person craft, called Cora, is a 12-rotor plane-drone hybrid that can take off vertically like a drone, but then uses a propeller at the back to fly at up to 110 miles an hour for around 62 miles at a time.




The all-electric Cora flies autonomously up to 914 metres (3,000ft) above ground, has a wingspan of 11 metres, and has been eight years in the making.

Kitty Hawk is personally financed by Page and is being run by former Google autonomous car director Sebastian Thrun. The company is trying to beat Uber and others to launching an autonomous flying taxi service.

The company hopes to have official certification and a commercial service launched within three years, which would make it the first to do so.

But its achievement will also propel New Zealand to the front of the pack as the first country to devise a certification process.

The country’s aviation authority is well respected in the industry, and is seen as pioneering.

Kitty Hawk is already working on an app and technology to allow customers to hail flying taxis as they would an Uber, but whether Page, Thrun and their team will actually be able to deliver within three years remains to be seen.

Many companies have promised great leaps but failed to deliver meaningful progress towards a Jetsons-like future, from Uber’s Elevate to China’s Ehang.

Even if Kitty Hawk hits all its projected milestones and launches commercially, there’s then the matter of persuading people to actually use it.


Google Clips: A Smart Camera That Doesn’t Make The Grade

Picture this: you’re hanging out with your kids or pets and they spontaneously do something interesting or cute that you want to capture and preserve.

But by the time you’ve gotten your phone out and its camera opened, the moment has passed and you’ve missed your opportunity to capture it.

That’s the main problem that Google is trying to solve with its new Clips camera, a $249 device available starting today that uses artificial intelligence to automatically capture important moments in your life.

Google says it’s for all of the in-between moments you might miss when your phone or camera isn’t in your hand.




It is meant to capture your toddler’s silly dance or your cat getting lost in an Amazon box without requiring you to take the picture.

The other issue Google is trying to solve with Clips is letting you spend more time interacting with your kids directly, without having a phone or camera separating you, while still getting some photos.

That’s an appealing pitch to both parents and pet owners alike, and if the Clips camera system is able to accomplish its goal, it could be a must-have gadget for them.

But if it fails, then it’s just another gadget that promises to make life easier, but requires more work and maintenance than it’s worth.

The problem for Google Clips is it just doesn’t work that well.

Before we get into how well Clips actually works, I need to discuss what it is and what exactly it’s doing because it really is unlike any camera you’ve used before.

At its core, the Clips camera is a hands-free automatic point-and-shoot camera that’s sort of like a GoPro, but considerably smaller and flatter.

It has a cute, unassuming appearance that is instantly recognizable as a camera, or at least an icon of a camera app on your phone.

Google, aware of how a “camera that automatically takes pictures when it sees you” is likely to be perceived, is clearly trying to make the Clips appear friendly, with its white-and-teal color scheme and obvious camera-like styling.

But among the people I showed the camera to while explaining what it's supposed to do, “it's creepy” was a common reaction.

One thing that I've discovered is that people know right away it's a camera and react to it just like any other camera.

That might mean avoiding its view when they see it or, as in the case of my three-year-old, walking up to it and smiling or picking it up.

That has made it tough to capture candids, since, for the Clips to really work, it needs to be close to its subject.

Maybe over time, your family would learn to ignore it and those candid shots could happen, but in my couple weeks of testing, my family hasn’t acclimated to its presence.

The Clips’ camera sensor can capture 12-megapixel images at 15 frames per second, which it then saves to its 16GB of internal storage that’s good for about 1,400 seven-second clips.

The battery lasts roughly three hours between charges.

Included with the camera is a silicone case that makes it easy to prop up almost anywhere or, yes, clip it to things. It’s not designed to be a body camera or to be worn.

Instead, it’s meant to be placed in positions where it can capture you in the frame as well.

There are other accessories you can buy, like a case that lets you mount the Clips camera to a tripod for more positioning options, but otherwise, using the Clips camera is as simple as turning it on and putting it where you want it.

Once the camera has captured a bunch of clips, you use the app to browse through them on your phone, edit them down to shorter versions, grab still images, or just save the whole thing to your phone’s storage for sharing and editing later.

The Clips app is supposed to learn based on which clips you save and deem “important” and then prioritize capturing similar clips in the future.

You can also hit a toggle to view “suggested” clips for saving, which is basically what the app thinks you’ll like out of the clips it has captured.

Google’s definitely onto something here. The idea is an admirable first step toward a new kind of camera that doesn’t get between me and my kids. But first steps are tricky — ask any toddler!

Usually, after you take your first step, you fall down. To stand back up, Google Clips needs to justify its price, the hassle of setting it up, and the fiddling between it and my phone.

It needs to reassure me that by trusting it and putting my phone away, I won’t miss anything important, and I won’t be burdened by having to deal with a lot of banal captures.

Otherwise, it’s just another redundant gadget that I have to invest too much time and effort into managing to get too little in return.

That’s a lot to ask of a tiny little camera, and this first version doesn’t quite get there. To live up to it all, Clips needs to be both a better camera and a smarter one.


Megapixels Don’t Matter Anymore. Here’s Why More Isn’t Always Better.

For years, smartphone makers have been caught up in a megapixel spec race to prove that their camera is better than the next guy’s.

But we’ve finally come to a point where even the lower-end camera phones are packing more megapixels than they need, so it’s getting harder to differentiate camera hardware.

Without that megapixel crutch to fall back on, how are we supposed to know which smartphone has the best camera?

Well thankfully, there are several other important specs to look for in a camera, and it’s just a matter of learning which ones matter the most to you.




Why Megapixels Don’t Matter Anymore

The term “megapixel” means “one million pixels,” so a 12-megapixel camera captures images made up of 12,000,000 tiny dots.

A larger number of dots (pixels) in an image means that the image has more definition and clarity, which is also referred to as having a higher resolution.

This might lead you to believe that a camera with more megapixels will take better pictures than a camera with fewer megapixels, but that’s not always the case.

The trouble is, we’ve reached a point where all smartphone cameras have more than enough megapixels.

For instance, a 1080p HD TV has a resolution of 2.1 megapixels, and even the highest-end 4K displays top out at 8.3 megapixels.

Considering that nearly every smartphone camera has a double-digit megapixel rating these days, your photos will be in a higher resolution than most screens can even display.

Simply put, you won’t be able to see any difference in resolution between pictures taken by two different smartphone cameras, because most screens you’ll be viewing them on aren’t capable of displaying that many megapixels.

Really, anything greater than 8.3 megapixels is only helpful for cropping. In other words, if your phone takes 12-megapixel photos, you can crop away roughly a third of the pixels and the resolution will still be as high as a 4K TV's.
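A quick arithmetic check on those numbers (the 4000 × 3000 frame is an assumed, typical shape for a 12-megapixel sensor):

```python
# Megapixel counts for common display resolutions vs. a phone photo.
def megapixels(width, height):
    return width * height / 1_000_000

print(megapixels(1920, 1080))  # 2.0736 -> the "2.1 MP" of a 1080p HDTV
print(megapixels(3840, 2160))  # 8.2944 -> the "8.3 MP" of a 4K display
print(megapixels(4000, 3000))  # 12.0   -> a typical 12 MP smartphone photo
```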

Pixel Size Is the Real Difference Maker

The hot new number to judge your phone’s camera by is the pixel size. You’ll see this spec listed as a micron value, which is a number followed by the symbol “µm.”

A phone with a 1.4µm pixel size will almost always capture better pictures than one with a 1.0µm pixel size, thanks to physics.

If you zoomed in far enough on one of your photos, you could see the individual pixels, right? Well, each of those tiny little dots was captured by microscopic light sensors inside your smartphone’s camera.

These light sensors are referred to as “pixels” because, well, they each capture a pixel’s worth of light. So if you have a 12-megapixel camera, the actual camera sensor has twelve million of these light-sensing pixels.

Each of these pixels measures light particles called photons to determine the color and brightness of the corresponding pixel in your finished photo.

When a bright blue photon hits one of your camera’s light sensors, it tells your phone to make a dot with bright blue coloring.

Put twelve million of these dots together, each with its own brightness and color, and you'll end up with a picture.
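Because each pixel is a tiny light-collecting area, its light-gathering ability grows with the square of its size, which is why the spec-sheet jump from 1.0µm to 1.4µm is bigger than it looks:

```python
# Light collected per pixel scales with pixel area (pitch squared).
def relative_light(pitch_a_um, pitch_b_um):
    """How much more light a pixel of pitch a collects than one of pitch b."""
    return (pitch_a_um / pitch_b_um) ** 2

print(relative_light(1.4, 1.0))  # ~1.96: nearly twice the light per pixel
```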

A Little Aperture Goes a Long Way

The next key spec to look for is the camera’s aperture, which is represented as f divided by a number (f/2.0, for example).

Because of the “f divided by” setup, this is one of those rare specs where a smaller number is always better than a larger one.

To help you understand aperture, let’s go back to pixel size for a second.

If larger pixels mean your camera can collect more light particles to create more accurate photos, then imagine each pixel as a bucket, and photons as falling rain.

The bigger the opening of the bucket (pixel), the more rain (photons) you can collect, right?

Well, aperture is like a funnel for that bucket. The bottom of this imaginary funnel has the same diameter as the pixel bucket, but the top is wider—which means you can collect even more photons.

In this analogy, a wider aperture gives the photon bucket a wider opening, so it focuses more light onto your camera’s light-sensing pixels.
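The same square-law arithmetic applies here: because the f-number is the focal length divided by the aperture diameter, light throughput scales with the square of (1 / f-number). The f/1.8 versus f/2.2 pairing below is just an example:

```python
# A lower f-number means a wider opening and more light.
def light_ratio(f_fast, f_slow):
    """How much more light an f/f_fast lens passes than an f/f_slow lens."""
    return (f_slow / f_fast) ** 2

print(light_ratio(1.8, 2.2))  # ~1.49: f/1.8 gathers ~49% more light than f/2.2
```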

Image Stabilization: EIS vs. OIS

With most spec sheets, you’ll see a camera’s image stabilization technology listed as either EIS or OIS. These stand for Electronic Image Stabilization and Optical Image Stabilization, respectively.

OIS is easier to explain, so let's start with that one. Simply put, this technology means your camera sensor physically moves to compensate for any shaking while you're holding your phone.

If you’re walking while you’re recording a video, for instance, each of your steps would normally shake the camera—but OIS ensures that the camera sensor remains relatively steady even while the rest of your phone shakes around it.

EIS, by contrast, does its compensation in software: the camera crops each frame slightly, then shifts and stretches the crop to counteract detected motion. In general, though, it's always better to have a camera with OIS.

For one, the cropping and stretching can reduce quality and create a “Jello effect” in videos, but in addition to that, EIS has little to no effect on reducing blur in still photos.
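That cropping and shifting can be sketched in a few lines: the sensor captures a slightly oversized frame, and software slides the crop window to cancel the estimated shake. The function and margin scheme are an illustrative simplification, not any vendor's implementation:

```python
import numpy as np

def eis_stabilize(frame, dx, dy, margin):
    """Crop-and-shift electronic stabilization for one video frame.

    The output is smaller than the input by `margin` on every side;
    the estimated shake (dx, dy) slides the crop window to compensate.
    """
    h, w = frame.shape[:2]
    # Clamp the correction to the spare border the crop leaves us.
    dx = max(-margin, min(margin, dx))
    dy = max(-margin, min(margin, dy))
    return frame[margin + dy : h - margin + dy,
                 margin + dx : w - margin + dx]

frame = np.arange(100).reshape(10, 10)
print(eis_stabilize(frame, 1, 0, 2).shape)  # (6, 6)
```

Because every output frame is a crop of the input, EIS always trades away some resolution, which is one source of the quality loss mentioned above.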

Now that you’ve got a better understanding about camera specs, have you decided which smartphone you’re going to buy next?

If you’re still undecided, you can use our smartphone-buyer’s flowchart at the following link, and if you have any further questions, just fire away in the comment section below.


What Is Private Browsing And Why Should You Use it?

Since 2008, January 28th has been set aside for Data Privacy Day. The goal: “to create awareness about the importance of privacy and protecting personal information.”

It’s the perfect time to take a look at one privacy feature that’s right in front of you: your web browser’s private browsing mode. Just what is it that makes private browsing private? Let’s take a look at the major browsers and see.




Google Chrome

Google Chrome calls it Incognito Mode, and you can tell you’re using it by looking for the “secret agent” icon in the top left corner of the window.

Chrome also shows you a big, bold new tab page when you open an Incognito window. That’s it at the top of this post.

In Incognito Mode, Chrome won’t keep track of the pages you visit, the data you enter into forms, or any searches you submit.

It won’t remember what files you download, but those files will stay on your computer after you close the Incognito window. You’ll have to manually delete them if you want them gone. The same goes for bookmarks you create.

Internet Explorer and Edge

Internet Explorer and Edge feature InPrivate browsing. The same caveats apply: temporary internet files (like cookies, browsing history, and form data) are not saved.

Downloaded files and bookmarks stick around even after you close the InPrivate window.

Microsoft’s browsers also disable any third-party toolbars you might have installed when you start an InPrivate session.

Firefox

Mozilla welcomes you to Firefox’s Private Browsing mode with a nice, clear explanation of what it does and doesn’t do.

The list pretty much lines up with Chrome, IE, and Edge: browsing/search history and cookies are not saved, downloads and bookmarks are.

Mozilla also gives you an additional setting that can make Private Browsing a little more private: tracking protection. Turn it on and Firefox will attempt to prevent sites from gathering data about your browsing habits.

Safari

Safari’s private browsing mode also removes temporary files when you close the window. Browsing history, form data, and cookies are all wiped by default.

Opera

Opera is noteworthy because its private browsing mode offers one truly unique feature. You can turn on a VPN connection to add another layer of secrecy to your browsing activities.

It’s not a bulletproof VPN solution and it still doesn’t keep your activities totally private, but it does provide additional protection.

It may also technically be considered a proxy and not a true VPN, but that’s a discussion you can leave to the more technically-inclined folks.

Beyond the VPN, Opera’s private browsing mode works like Chrome’s.

How Private Is It?

The short answer is not very, regardless of which browser you use. On the computer, tablet, or phone you’re using, yes, your temporary browsing data is removed.

Elsewhere, though, it's still very possible to see what you've been doing. Routers, firewalls, and proxy servers could be keeping tabs on your browsing activities, and private browsing mode won't get in the way of that.

If you’re thinking private browsing will keep your activities hush-hush at the office, for example, you’re probably wrong.


How HTTPS Website Security Is Making the Internet Safer From Hackers

You may have noticed in your travels around the internet that your browser’s address bar occasionally turns green and displays a padlock—that’s HTTPS, or a secure version of the Hypertext Transfer Protocol, swinging into action.

This little green padlock is becoming vitally important as more and more of your online security is eroded.

Just because your ISP can now see what sites you browse doesn't mean it has to know all the content you're consuming.

Below is the rundown on HTTPS, so you can better understand this first and easiest line of defense against potential snoopers and hackers.

HTTP, or Hypertext Transfer Protocol, is the universally agreed-upon communication protocol that the web is built on.




Hypertext is the basic idea of having plain text with embedded links you can click on; the Transfer Protocol is a standard way of communicating it.

When you see HTTP in your browser you know you’re connecting to a standard, run-of-the-mill website, as opposed to a different kind of connection, like FTP (File Transfer Protocol), which is often used by file storage databases.

The protocol before a web address tells your browser what to expect and how to display the information it finds. So what about the extra S in HTTPS?

The S is simple. It means Secure.

It originally stood for Secure Sockets Layer (SSL) which is now part of a broader security protocol called Transport Layer Security (TLS).

TLS is part of the two layers that make up HTTPS, the other being traditional HTTP.

TLS works to verify that the website you’ve loaded up is actually the website you wanted to load up—that the Facebook page you see before you really is Facebook and not a site pretending to be Facebook.

On top of that, TLS encrypts all of the data you're transmitting (like apps such as Signal or WhatsApp do).

Anyone who happens across the traffic coming to or from your computer when it’s connected to an HTTPS site can’t make sense of it—they can’t read it or alter its contents.

So if someone wants to catch the username and password you just sent to Google, or wants to throw up a webpage that looks like Instagram but isn’t, or wants to jump in on your email conversations and change what’s being said, HTTPS helps to stop them.
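You can see both of those guarantees, identity verification plus encryption, reflected in the defaults of Python's standard ssl module, which underlies HTTPS connections in tools built on it:

```python
import ssl

# The default TLS context refuses unverified connections: it requires a
# valid certificate chain AND checks that the certificate matches the
# hostname you asked for, before any application data is exchanged.
ctx = ssl.create_default_context()
print(ctx.check_hostname)                    # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

Wrapping a socket with this context (for example via http.client.HTTPSConnection) will raise an error rather than talk to a server whose certificate doesn't check out.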

It's obvious why login details, credit card information, and the like are better encrypted rather than sent in plain text—it makes them much harder to steal.

In 2017, if you come across a shopping or banking site, or any webpage that asks you to log in, it should have HTTPS enabled; if not, take your business elsewhere.

If you're worried about whether your connection to the web really is secure inside a mobile app, check the details of the app listing and contact the developer directly.

So if HTTPS is so great, why not use it for everything? That’s definitely a plan.

There is now a big push to get HTTPS used as standard, but because it previously required extra processing power and bandwidth, it hasn’t always made sense for pages where you’re not entering or accessing any sensitive information.

The latest HTTPS iterations remove most of these drawbacks, so we should see it deployed more widely in the future—although converting old, large sites can take a lot of time.

If you want to stay as secure as possible, the HTTPS Everywhere extension for Chrome and Firefox makes sure you’re always connected to the HTTPS version of a site, where one has been made available, and fixes a few security bugs in the HTTPS approach at the same time.

It’s well worth installing and using, particularly on public Wi-Fi, where unwelcome eavesdroppers are more likely to be trying to listen in.

HTTPS isn’t 100 percent unbeatable—no security measure is—but it makes it much more difficult for hackers to spy on and manipulate sensitive data as it travels between your computer and the web at large, as well as adding an extra check to verify the identity of the sites you visit.

It’s a vital part of staying safe on the web.


Google’s First Mobile Chip Is An Image Processor Hidden In The Pixel 2

One thing that Google left unannounced during its Pixel 2 launch event on October 4th is being revealed today: it’s called the Pixel Visual Core, and it is Google’s first custom system-on-a-chip (SOC) for consumer products.

You can think of it as a very scaled-down and simplified, purpose-built version of Qualcomm’s Snapdragon, Samsung’s Exynos, or Apple’s A series chips. The purpose in this case?

Accelerating the HDR+ camera magic that makes Pixel photos so uniquely superior to everything else on the mobile market.

Google plans to use the Pixel Visual Core to make image processing on its smartphones much smoother and faster, but not only that: the Mountain View company also plans to use it to open up HDR+ to third-party camera apps.




The coolest aspect of the Pixel Visual Core might be that it's already in Google's devices. The Pixel 2 and Pixel 2 XL both have it built in, but lying dormant until activation at some point “over the coming months.”

It’s highly likely that Google didn’t have time to finish optimizing the implementation of its brand-new hardware, so instead of yanking it out of the new Pixels, it decided to ship the phones as they are and then flip the Visual Core activation switch when the software becomes ready.

In that way, it’s a rather delightful bonus for new Pixel buyers.

The Pixel 2 devices are already much faster at processing HDR shots than the original Pixel, and when the Pixel Visual Core is live, they’ll be faster and more efficient.

Looking at the layout of Google’s chip, which is dubbed an Image Processing Unit (IPU) for obvious reasons, we see something sort of resembling a regular 8-core SOC.

Technically, there’s a ninth core, in the shape of the power-efficient ARM Cortex-A53 CPU in the top left corner.

But the important thing is that each of those eight processors that Google designed has been tailored to handle HDR+ duties, resulting in HDR+ performance that is “5x faster and [uses] less than 1/10th the energy” of the current implementation, according to Google.

This is the sort of advantage a company can gain when it shifts to purpose-specific hardware rather than general-purpose processing.
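What those tailored cores are accelerating is, at heart, a burst merge: HDR+ captures several frames in quick succession and fuses them, which averages away sensor noise. Below is a toy version of the merge step, with synthetic data standing in for the real tile-based alignment:

```python
import numpy as np

rng = np.random.default_rng(0)

# A burst of 8 noisy captures of the same (flat, gray) scene.
truth = 100.0
burst = truth + rng.normal(0.0, 10.0, size=(8, 64, 64))

# Averaging the aligned burst cuts noise by roughly sqrt(8) ~ 2.8x.
merged = burst.mean(axis=0)

print(burst[0].std())  # ~10: single-frame noise
print(merged.std())    # ~3.5: merged noise
```

Doing this merge per tile, across millions of pixels, for every shot is exactly the kind of embarrassingly parallel work that purpose-built cores handle far more efficiently than a general-purpose CPU.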

Google says that it will enable Pixel Visual Core as a developer option in its preview of Android Oreo 8.1, before updating the Android Camera API to allow access to HDR+ for third-party camera devs.

Obviously, all of this tech is limited strictly to the Pixel 2 generation, ruling out current Pixel owners and other Android users.

As much as Google likes to talk about enriching the entire Android ecosystem, the company is evidently cognizant of how much of a unique selling point its Pixel camera system is, and it’s working hard to develop and expand the lead that it has.

As a final note, Google’s announcement today says that HDR+ is only the first application to run on the programmable Pixel Visual Core, and with time we should expect to see more imaging and machine learning enhancements being added to the Pixel 2.


Top 5 Ways To Find Better Answers Online (That Aren’t Google)

You can Google just about anything, but it's not always your best resource for finding the exact answer to what you want. Here's a look at our top five tools for finding better answers online.

5. Wolfram Alpha

You can't ask Wolfram Alpha just anything, but you can ask it for information you can't find anywhere else. It's full of information and calculations that no other search engine can provide.

For example, you can use Wolfram Alpha to calculate activity-specific calorie burn, analyze illness symptoms and generic drug options, and make sense of your confusing family relationships.

For more ideas, check out our full Wolfram Alpha coverage, or just play around with it yourself.


4. Wikipedia

You might be thinking, “duh.” And fair enough: it's pretty much impossible to keep Wikipedia off of any list about finding better answers online.

Wikipedia contains an enormous wealth of information and it ought to be your primary destination when you want quick information on a given topic.

While you can't ask it a specific question, if you know what you're looking for, you're bound to find it on Wikipedia. It doesn't have an article on everything, but if it did, there would be no need for this Top 5.

3. Ask Reddit

For the more casual and fun questions, you have Ask Reddit. If you’re not familiar with Reddit, it’s a social news site with a dedicated user base.

Those users make Ask Reddit a good tool to get answers, but most of the questions you find tend to fall on the light side of things.

You can learn how to cope with putting down your old cat, combat extreme paranoia, or find out how many people feel Christmas isn't worth it anymore. That makes Ask Reddit more interesting to read when you're bored than the best place to find a specific answer.

In the event you have a question that fits the topics floating around Ask Reddit, however, you’ll have plenty of people to join in and answer.

2. Duck Duck Go

Duck Duck Go is a clever search engine that provides tons of shortcuts to help you find what you’re looking for very quickly. The idea is to get you your information without the need to click around too much.

Need a color swatch for a particular HEX value? Just enter the HEX value in Duck Duck Go and you’ll get it. It can even help you quickly generate a strong, random password.
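Those same shortcuts are exposed programmatically through DuckDuckGo's Instant Answer API, a JSON endpoint at api.duckduckgo.com; building a query URL is all it takes (the query string here is just an example):

```python
from urllib.parse import urlencode

def instant_answer_url(query):
    """Build a DuckDuckGo Instant Answer API request URL."""
    return "https://api.duckduckgo.com/?" + urlencode(
        {"q": query, "format": "json", "no_html": 1}
    )

# Fetching this URL returns the instant-answer data as JSON.
print(instant_answer_url("hex color #ff6600"))
```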

Although search, in general, is pretty fast, Duck Duck Go has a tool set to help you get answers and information as quickly as possible.

1. Aardvark

Aardvark lets you ask just about any question and receive an answer in under a minute—for free. Aardvark aims to keep the process simple by keeping your questions short and sweet.

You ask a question that’s about the length of a tweet and you get an answer that isn’t much longer from helpers whose interests match that of the question.

In return, you’re encouraged to answer questions that fall into your area of expertise.





Aardvark is possible because of this information exchange and generally works very well, although it did fail to find a good soft-serve ice cream shop in Los Angeles.

I guess I’ll have to settle for Tasti D-Lite, whenever it finally shows up. But why is Aardvark number one? Because it effectively does the same thing as Twitter, but without the need for a base of followers.

It does a fantastic job at matching your question with relevant, helpful people and it does it fast.

Even though it couldn't do the impossible and find soft-serve ice cream in Los Angeles, it's probably the best question-and-answer service you could ask for.


What Is Alphabet’s Chronicle?

Google’s parent company Alphabet has launched a business that will specialize in leveraging machine learning in cyber security, called Chronicle.

Chief executive Stephen Gillett says that the company will be split in two. One side will provide a cyber security and analytics platform that will target enterprise customers and help them “better manage and understand their own security-related data.”

The other side of the business will be VirusTotal, which is a malware intelligence service Google picked up in 2012. This will continue to operate as it has been doing.




For some years now a slew of security vendors have touted machine learning as their key differentiator against rivals on the market. There is an aspect of snake oil to some of it – see our market analysis here.

But there are also companies like the darlings of the infosec market at the moment, Darktrace, that are using genuine machine learning for threat detection.

It’s no secret that Alphabet and Google are at the frontier of machine learning and artificial intelligence.

Writing on Medium, Chronicle CEO Stephen Gillett says that while the company will be an independent business under the Google umbrella, it will “have the benefit of being able to consult the world-class experts in machine learning and cloud computing” at Alphabet.

Where did Chronicle come from?

Chronicle emerged in 2016 from the labs of Alphabet's mysteriously named X, Google's incubation hub for “moonshot” projects; it also incorporates VirusTotal, which Google bought in 2012.

CEO Stephen Gillett began working at Google in 2015 and has a history of working at cyber security companies.

Other people in leadership roles at Chronicle include Mike Wiacek and Shapor Naghibzadeh, who together have more than 20 years of security experience at Google.

Bernardo Quintero of VirusTotal will continue to work with Chronicle.
