Month: February, 2018

The Myths Of Robots: They Are Strong, Smart And Evil

Some of today’s top techies and scientists are very publicly expressing their concerns over apocalyptic scenarios that they believe could arise as a result of machines with motives.

Among the fearful are intellectual heavyweights like Stephen Hawking, Elon Musk, and Bill Gates, who all believe that advances in the field of machine learning will soon yield self-aware A.I.s that seek to destroy us.

Or perhaps just apathetically dispose of us, much like scum getting obliterated by a windshield wiper.

In fact, Dr. Hawking told the BBC, “The development of full artificial intelligence could spell the end of the human race.”




Indeed, there is little doubt that future A.I. will be capable of doing significant damage.

For example, it is conceivable that robots could be programmed to function as tremendously dangerous autonomous weapons unlike any seen before.

Additionally, it is easy to imagine an unconstrained software application that spreads throughout the Internet, severely mucking up our most efficient and relied upon medium for global exchange.

But these scenarios are categorically different from ones in which machines decide to turn on us, defeat us, make us their slaves, or exterminate us.

In this regard, we are unquestionably safe. On a sadder note, we are just as unlikely to someday have robots that decide to befriend us or show us love without being specifically prompted by instructions to do so.

This is because such intentional behavior from an A.I. would undoubtedly require a mind, as intentionality can only arise when something possesses its own beliefs, desires, and motivations.

The type of A.I. that includes these features is known amongst the scientific community as “Strong Artificial Intelligence”. Strong A.I., by definition, should possess the full range of human cognitive abilities.

This includes self-awareness, sentience, and consciousness, as these are all features of human cognition.

On the other hand, “Weak Artificial Intelligence” refers to non-sentient A.I. The Weak A.I. Hypothesis states that our robots—which run on digital computer programs—can have no conscious states, no mind, no subjective awareness, and no agency.

Such A.I.s cannot experience the world qualitatively, and although they may exhibit seemingly intelligent behavior, they are forever limited by the lack of a mind.

A failure to recognize the importance of this strong/weak distinction could be contributing to Hawking and Musk’s existential worries, both of whom believe that we are already well on a path toward developing Strong A.I.

To them it is not a matter of “if”, but “when”.

But the fact of the matter is that all current A.I. is fundamentally Weak A.I., and this is reflected by today’s computers’ total absence of any intentional behavior whatsoever.

Although there are some very complex and relatively convincing robots out there that appear to be alive, upon closer examination they all reveal themselves to be as motiveless as the common pocket calculator.

This is because brains and computers work very differently. Both compute, but only one understands—and there are some very compelling reasons to believe that this is not going to change.
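To make that point concrete, here is a toy sketch in Python (my illustration, not from the article): a program that can seem chatty and even affectionate while possessing no beliefs, desires, or motivations at all. Everything it “says” is a canned string retrieved by simple pattern matching, much as a pocket calculator retrieves digits from keystrokes.

```python
# A toy sketch (an illustration, not from the article): a "chatbot" that can
# appear friendly while having no beliefs, desires, or motives of any kind.
# It only maps input patterns to canned strings.
CANNED_REPLIES = {
    "hello": "Hi there! Great to see you.",
    "how are you": "I'm doing wonderfully, thanks for asking!",
    "do you love me": "Of course I do!",
}

def reply(message: str) -> str:
    """Return a scripted response; nothing here understands the words."""
    for pattern, response in CANNED_REPLIES.items():
        if pattern in message.lower():
            return response
    return "Tell me more."

print(reply("Hello, robot"))       # looks social, but it's pure table lookup
print(reply("Do you love me?"))    # "affection" appears only because we scripted it
```

However convincing the output looks, nothing in that loop could ever want anything, which is the Weak A.I. point in miniature.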

Beyond this philosophical point, there appears to be a more technical obstacle standing in the way of Strong A.I. ever becoming a reality.

Please like, share and tweet this article.

Pass it on: Popular Science

How HTTPS Website Security Is Making the Internet Safer From Hackers

You may have noticed in your travels around the internet that your browser’s address bar occasionally turns green and displays a padlock—that’s HTTPS, or a secure version of the Hypertext Transfer Protocol, swinging into action.

This little green padlock is becoming vitally important as more and more of your online security is eroded.

Just because your ISP can now see which sites you browse doesn’t mean it has to know all the content you’re consuming.

Below is the rundown on HTTPS, so you can better understand this first and easiest line of defense against potential snoopers and hackers.

HTTP, or the Hypertext Transfer Protocol, is the universally agreed-upon coding structure that the web is built on.




Hypertext is the basic idea of having plain text with embedded links you can click on; the Transfer Protocol is a standard way of communicating it.

When you see HTTP in your browser you know you’re connecting to a standard, run-of-the-mill website, as opposed to a different kind of connection, like FTP (File Transfer Protocol), which is often used by file storage databases.

The protocol before a web address tells your browser what to expect and how to display the information it finds. So what about the extra S in HTTPS?

The S is simple. It means Secure.

The secure connection was originally provided by Secure Sockets Layer (SSL), which has since been superseded by a broader security protocol called Transport Layer Security (TLS).

TLS is one of the two layers that make up HTTPS, the other being traditional HTTP.

TLS works to verify that the website you’ve loaded up is actually the website you wanted to load up—that the Facebook page you see before you really is Facebook and not a site pretending to be Facebook.

On top of that, TLS encrypts all of the data you’re transmitting (just as apps such as Signal or WhatsApp do).

Anyone who happens across the traffic coming to or from your computer when it’s connected to an HTTPS site can’t make sense of it—they can’t read it or alter its contents.

So if someone wants to catch the username and password you just sent to Google, or wants to throw up a webpage that looks like Instagram but isn’t, or wants to jump in on your email conversations and change what’s being said, HTTPS helps to stop them.
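To make the verification step concrete, here is a minimal sketch using only Python’s standard ssl and socket modules (the hostname is just a placeholder); it opens a TLS connection the way a browser would and prints who the certificate says the site is and when it expires:

```python
# A minimal sketch, standard library only. The hostname is a placeholder;
# swap in any HTTPS site. The default context verifies the certificate chain
# and the hostname, which is the "is this really the site I asked for?" check.
import socket
import ssl

hostname = "www.example.com"  # placeholder host

context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls_sock:
        cert = tls_sock.getpeercert()
        print("TLS version: ", tls_sock.version())
        print("Issued to:   ", dict(item[0] for item in cert["subject"]))
        print("Issued by:   ", dict(item[0] for item in cert["issuer"]))
        print("Valid until: ", cert["notAfter"])
```

If the certificate doesn’t match the hostname or can’t be traced back to a trusted authority, the default context refuses the connection rather than handing over your data.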

It’s obvious why login details, credit card information, and the like are better encrypted than sent in plain text: it makes them much harder to steal.

In 2017, if you come across a shopping or banking site, or any webpage that asks you to log in, it should have HTTPS enabled; if not, take your business elsewhere.

If you’re worried about whether your connection to the web really is secure inside a mobile app, check the details of the app listing and contact the developer directly.

So if HTTPS is so great, why not use it for everything? That’s definitely a plan.

There is now a big push to get HTTPS used as standard, but because it previously required extra processing power and bandwidth, it hasn’t always made sense for pages where you’re not entering or accessing any sensitive information.

The latest HTTPS iterations remove most of these drawbacks, so we should see it deployed more widely in the future—although converting old, large sites can take a lot of time.

If you want to stay as secure as possible, the HTTPS Everywhere extension for Chrome and Firefox makes sure you’re always connected to the HTTPS version of a site, where one has been made available, and fixes a few security bugs in the HTTPS approach at the same time.

It’s well worth installing and using, particularly on public Wi-Fi, where unwelcome eavesdroppers are more likely to be trying to listen in.

HTTPS isn’t 100 percent unbeatable—no security measure is—but it makes it much more difficult for hackers to spy on and manipulate sensitive data as it travels between your computer and the web at large, as well as adding an extra check to verify the identity of the sites you visit.

It’s a vital part of staying safe on the web.

Please like, share and tweet this article.

Pass it on: Popular Science

How To Stop That Annoying Autoplaying Video On Your Browsers

 

Are you sick and tired of opening a new web page and being greeted by a loud, obnoxious advertisement? I sure am.

Pop-up and pop-under ads were bad enough, but now it seems like I can hardly go to a site without having a video start up with a blaring voice braying about a great diet, deal, or the like. We’re looking at you, Facebook.

Enough already!




We can live with ads. We can make a living from websites with ads. Even ad-blocking software, like the great open-source Adblock Plus, allows Acceptable Ads that don’t shove their way into our faces. But this new wave of yakety-yak ads is driving us crazy.

Fortunately, Chrome, its relatives, and Firefox enable you to stop the noise. Here’s how you do it.

Google Chrome

The Chrome extension MuteTab gives you control over audio in all your browser tabs. While Chrome includes the built-in ability to control the sound from your tabs, it can still be a pain to track down which tab is being noisy.

MuteTab makes it easy to see what tabs are talking and lets you mute all of them or just the tabs in the background. You can also set the extension to mute all tabs, background tabs, or incognito tabs by default.

I like this extension a lot.

Firefox

Firefox makes blocking autoplay audio and video easy.

1) Enter “about:config” into the URL bar.

2) If you get a warning message about “This might void your warranty!” continue on.

3) Now, type “autoplay” into the search box.

4) This will bring up a preference named “media.autoplay.enabled.” Double-click it so that the preference changes to False.
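If you’d rather script that change than click through about:config, here is a minimal sketch that assumes the third-party selenium package and geckodriver are installed; it simply launches Firefox with the same media.autoplay.enabled preference already switched off:

```python
# A minimal sketch, assuming the selenium package and geckodriver are installed.
# It starts Firefox with the same preference the steps above toggle by hand.
from selenium import webdriver

options = webdriver.FirefoxOptions()
options.set_preference("media.autoplay.enabled", False)  # block autoplaying media

driver = webdriver.Firefox(options=options)
driver.get("https://example.com")  # placeholder page; videos here won't auto-start
driver.quit()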

Internet Explorer

Internet Explorer (IE) used to make blocking unwanted audio and video easy. All you had to do was run Tools/Safety from the menubar and switch on ActiveX Filtering.

No fuss, no muss. Unfortunately, almost all autoplay displays are now using HTML5, and IE 11 doesn’t need ActiveX to run them.

That means, if you’re using IE 11 or Edge or Apple Safari, you’re out of luck.

There is one radical method that works. Go to a noisy site, then use the IE 11 menu bar to go into Tools/Internet Options/Security. Once there, move to the Restricted sites entry and then press the Sites button.

From here, add the website to the Restricted list. The next time you visit it, you’ll find that there’s no longer any autoplay content.

You’ll also be missing some other content, but for most sites, the main images and text will still display.

Ideal? No, but it does work.

Hopefully, web browser developers will get on the same page soon and make it easy to turn off all autoplay audio and video content. The advertisers may love it, but the rest of us hate it.

Please like, share and tweet this article.

Pass it on: Popular Science

Has The Age Of Quantum Computing Arrived?

Ever since Charles Babbage’s conceptual, unrealised Analytical Engine in the 1830s, computer science has been trying very hard to race ahead of its time.

Particularly over the last 75 years, there have been many astounding developments – the first electronic programmable computer, the first integrated circuit computer, the first microprocessor.

But the next anticipated step may be the most revolutionary of all.

Quantum computing is the technology that many scientists, entrepreneurs and big businesses expect to provide a, well, quantum leap into the future.

If you’ve never heard of it there’s a helpful video doing the social media rounds that’s got a couple of million hits on YouTube.

It features the Canadian prime minister, Justin Trudeau, detailing exactly what quantum computing means.




Trudeau was on a recent visit to the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, one of the world’s leading centres for the study of the field.

During a press conference there, a reporter asked him, half-jokingly, to explain quantum computing.

Quantum mechanics is a conceptually counterintuitive area of science that has baffled some of the finest minds – as Albert Einstein said “God does not play dice with the universe” – so it’s not something you expect to hear politicians holding forth on.

Throw it into the context of computing and let’s just say you could easily make Zac Goldsmith look like an expert on Bollywood.

But Trudeau rose to the challenge and gave what many science observers thought was a textbook example of how to explain a complex idea in a simple way.

The concept of quantum computing is relatively new, dating back to ideas put forward in the early 1980s by the late Richard Feynman, the brilliant American theoretical physicist and Nobel laureate.

He conceptualised the possible improvements in speed that might be achieved with a quantum computer. But theoretical physics, while a necessary first step, leaves the real brainwork to practical application.

With normal computers, or classical computers as they’re now called, there are only two options – on and off – for processing information.

A computer “bit”, the smallest unit into which all information is broken down, is either a “1” or a “0”.

And the computational power of a normal computer is dependent on the number of binary transistors – tiny power switches – that are contained within its microprocessor.

Back in 1971 the first Intel processor was made up of 2,300 transistors. Intel now produce microprocessors with more than 5bn transistors. However, they’re still limited by their simple binary options.

But as Trudeau explained, with quantum computers the bits, or “qubits” as they are known, afford far more options owing to the uncertainty of their physical state.

In the mysterious subatomic realm of quantum physics, particles can act like waves, so that they can be particle or wave or particle and wave.

This is what’s known in quantum mechanics as superposition. As a result of superposition a qubit can be a 0 or 1 or 0 and 1. That means it can perform two equations at the same time.

Two qubits can perform four equations. And three qubits can perform eight, and so on in an exponential expansion. That leads to some inconceivably large numbers, not to mention some mind-boggling working concepts.
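For a rough feel of that exponential growth, here is a small sketch using numpy (an illustration only, not a real quantum computer): it builds the equal superposition of three qubits and shows that the state spreads over 2**3 = 8 basis states at once.

```python
# A small numpy sketch: put n qubits in superposition and count the basis
# states the combined state covers simultaneously.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)      # the classical "0"
ket1 = np.array([0, 1], dtype=complex)      # the classical "1"
plus = (ket0 + ket1) / np.sqrt(2)           # one qubit: "0 and 1" in superposition

n = 3
state = plus
for _ in range(n - 1):
    state = np.kron(state, plus)            # add another qubit via the tensor product

print(len(state))                           # 8 amplitudes: 2**3 basis states at once
print(np.round(np.abs(state) ** 2, 3))      # each outcome measured with probability 1/8
```

Each extra qubit doubles that count, which is where the inconceivably large numbers come from.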

At the moment those concepts are closest to entering reality in an unfashionable suburb in the south-west corner of Trudeau’s homeland.

Please like, share and tweet this article.

Pass it on: New Scientist

How the U.S. Built The World’s Most Ridiculously Accurate Atomic Clock

Throw out that lame old atomic clock that’s only accurate to a few tens of quadrillionths of a second. The U.S. has introduced a new atomic clock that is three times more accurate than previous devices.

Atomic clocks are responsible for synchronizing time for much of our technology, including electric power grids, GPS, and the watch on your iPhone.

On Apr. 3, the National Institute of Standards and Technology (NIST) in Boulder, Colorado, officially launched their newest standard for measuring time using the NIST-F2 atomic clock, which has been under development for more than a decade.

“NIST-F2 is accurate to one second in 300 million years,” said Thomas O’Brian, who heads NIST’s time and frequency division, during a press conference April 3.
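As a quick sanity check on that figure (my arithmetic, not NIST’s), “one second in 300 million years” works out to a fractional uncertainty of roughly one part in ten quadrillion, which is why the clock’s performance is quoted out to the sixteenth decimal place:

```python
# A quick sanity check: "one second in 300 million years" as a fractional error.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
fractional_error = 1 / (300e6 * SECONDS_PER_YEAR)
print(f"{fractional_error:.1e}")  # roughly 1.1e-16, i.e. the sixteenth decimal place
```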




The clock was recently certified by the International Bureau of Weights and Measures as the world’s most accurate time standard.

The advancement is more than just a feather in the cap for metrology nerds. Precise timekeeping underpins much of our modern world.

GPS, for instance, needs accuracy of about a billionth of a second in order to keep you from getting lost. These satellites rely on high precision coming from atomic clocks at the U.S. Naval Observatory.
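To see why a billionth of a second matters, here is a back-of-the-envelope sketch (my numbers, not from the article): GPS measures distance by timing radio signals, so a clock error of dt seconds turns into a ranging error of roughly c times dt metres.

```python
# A back-of-the-envelope sketch: how clock error translates into position error.
C = 299_792_458  # speed of light in m/s

for dt in (1e-9, 1e-6, 1e-3):  # 1 nanosecond, 1 microsecond, 1 millisecond
    print(f"{dt:.0e} s of clock error -> about {C * dt:,.1f} m of position error")
```

A single nanosecond of error already corresponds to about 30 centimetres of position error, and it grows linearly from there.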

GPS, in turn, is used for synchronizing digital networks such as cell phones and the NTP servers that provide the backbone of the internet.

Your smartphone doesn’t display the time to the sixteenth decimal place, but it still relies on the frequency standards coming from NIST’s clocks, which make their measurements while living in a tightly controlled lab environment.

Real-world clocks must operate under strained conditions, such as temperature swings, significant vibration, or changing magnetic fields, that degrade and hamper their accuracy.

It’s important then that the ultimate reference standard has much better performance than the real world technologies.

What will we do once we reach the ability to break down time into super-tiny, hyper-accurate units? Nobody knows.

Please like, share and tweet this article.

Pass it on: New Scientist

Engineers Create New Architecture For Vaporizable Electronics

Engineers from Cornell and Honeywell Aerospace have demonstrated a new method for remotely vaporizing electronics into thin air, giving devices the ability to vanish – along with their valuable data – if they were to get into the wrong hands.

This unique ability to self-destruct is at the heart of an emerging technology known as transient electronics, in which key portions of a circuit, or the whole circuit itself, can discreetly disintegrate or dissolve.

And because no harmful byproducts are released upon vaporization, engineers envision biomedical and environmental applications along with data protection.

There are a number of existing techniques for triggering the vaporization, each with inherent drawbacks.




Some transient electronics use soluble conductors that dissolve when contacted by water, requiring the presence of moisture.

Others disintegrate when they reach a specific temperature, requiring a heating element and power source to be attached.

Cornell engineers have created a transient architecture that evades these drawbacks by using a silicon-dioxide microchip attached to a polycarbonate shell.

Hidden within the shell are microscopic cavities filled with rubidium and sodium bifluoride – chemicals that can thermally react and decompose the microchip.

Ved Gund, Ph.D. ’17, led the research as a graduate student in the Cornell SonicMEMS Lab, and said the thermal reaction can be triggered remotely by using radio waves to open graphene-on-nitride valves that keep the chemicals sealed in the cavities.

“The encapsulated rubidium then oxidizes vigorously, releasing heat to vaporize the polycarbonate shell and decompose the sodium bifluoride. The latter controllably releases hydrofluoric acid to etch away the electronics,” said Gund.

Amit Lal, professor of electrical and computer engineering, said the unique architecture offers several advantages over previously designed transient electronics, including the ability to scale the technology.

“The stackable architecture lets us make small, vaporizable, LEGO-like blocks to make arbitrarily large vanishing electronics,” said Lal.

Gund added that the technology could be integrated into wireless sensor nodes for use in environmental monitoring.

“For example, vaporizable sensors can be deployed with the internet of things platform for monitoring crops or collecting data on nutrients and moisture, and then made to vanish once they accomplish these tasks,” said Gund.

Lal, Gund and Honeywell Aerospace were recently issued a patent for the technology, and the SonicMEMS Lab is continuing to research new ways the architecture can be applied toward transient electronics as well as other uses.

“Our team has also demonstrated the use of the technology as a scalable micro-power momentum and electricity source, which can deliver high peak powers for robotic actuation,” said Lal.

Fabrication of the polycarbonate shell was completed by Christopher Ober, professor of materials science and engineering, with other components of the architecture provided by Honeywell Aerospace.

Portions of the research were funded under the Defense Advanced Research Projects Agency’s Vanishing Programmable Resources program.

Please like, share and tweet this article.

Pass it on: Popular Science

Google’s First Mobile Chip Is An Image Processor Hidden In The Pixel 2

One thing that Google left unannounced during its Pixel 2 launch event on October 4th is being revealed today: it’s called the Pixel Visual Core, and it is Google’s first custom system-on-a-chip (SOC) for consumer products.

You can think of it as a very scaled-down and simplified, purpose-built version of Qualcomm’s Snapdragon, Samsung’s Exynos, or Apple’s A series chips. The purpose in this case?

Accelerating the HDR+ camera magic that makes Pixel photos so uniquely superior to everything else on the mobile market.

Google plans to use the Pixel Visual Core to make image processing on its smartphones much smoother and faster, but not only that: the Mountain View company also plans to use it to open up HDR+ to third-party camera apps.




The coolest aspect of the Pixel Visual Core might be that it’s already in Google’s devices. The Pixel 2 and Pixel 2 XL both have it built in, but lying dormant until activation at some point “over the coming months.”

It’s highly likely that Google didn’t have time to finish optimizing the implementation of its brand-new hardware, so instead of yanking it out of the new Pixels, it decided to ship the phones as they are and then flip the Visual Core activation switch when the software becomes ready.

In that way, it’s a rather delightful bonus for new Pixel buyers.

The Pixel 2 devices are already much faster at processing HDR shots than the original Pixel, and when the Pixel Visual Core is live, they’ll be faster and more efficient.

Looking at the layout of Google’s chip, which is dubbed an Image Processing Unit (IPU) for obvious reasons, we see something sort of resembling a regular 8-core SOC.

Technically, there’s a ninth core, in the shape of the power-efficient ARM Cortex-A53 CPU in the top left corner.

But the important thing is that each of those eight processors that Google designed has been tailored to handle HDR+ duties, resulting in HDR+ performance that is “5x faster and [uses] less than 1/10th the energy” of the current implementation, according to Google.

This is the sort of advantage a company can gain when it shifts to purpose-specific hardware rather than general-purpose processing.

Google says that it will enable Pixel Visual Core as a developer option in its preview of Android Oreo 8.1, before updating the Android Camera API to allow access to HDR+ for third-party camera devs.

Obviously, all of this tech is limited strictly to the Pixel 2 generation, ruling out current Pixel owners and other Android users.

As much as Google likes to talk about enriching the entire Android ecosystem, the company is evidently cognizant of how much of a unique selling point its Pixel camera system is, and it’s working hard to develop and expand the lead that it has.

As a final note, Google’s announcement today says that HDR+ is only the first application to run on the programmable Pixel Visual Core, and with time we should expect to see more imaging and machine learning enhancements being added to the Pixel 2.

Please like, share and tweet this article.

Pass it on: Popular Science

Superbugs And Antibiotic Resistance

For the last century, medical professionals and microbiologists have waged a war against germs of every type and with the breakthrough of antibiotics, changed the world in which we live.

It also changed the world for our symbionts, the 4 to 6 pounds of bacteria, fungi and viruses that have hung on to our species through thick and thin for eons; to them we are a movable feast.

It was indeed a war that we appeared to be winning. We thought we were firmly living in the ‘Antibiotic Age’, one that was here to stay for all time.

However, while we were basking in its potency, unfortunately we were also rapidly and inexplicably sowing the seeds of its demise.

In a recent landmark report, US health policy makers warn that, with mounting evidence of superbugs overcoming our antibiotics, our situation is extremely serious.

The report gives a glimpse of the world to come, as even now there are a dozen different drug resistant microbial species that have totally overcome our existing antibiotics.

These resistant strains are now responsible for causing 2 million infections and 23,000 deaths each year in the US alone.




According to the WHO, the rapid emergence of multi-drug resistant (MDR) strains calls for a comprehensive and coordinated response to prevent a global catastrophe.

The WHO warns that, “...many infectious diseases are rapidly becoming untreatable and uncontrollable.”

CDC director Tom Frieden says that we must take urgent action to “change the way antibiotics are used” by cutting unneeded use in humans and animals and by taking basic steps to prevent infections in the first place.

The tools we have at our disposal, besides tracking resistant infections, are vaccines, safe food & patient infection control practices, paired with effective and enlightened hand hygiene.

Human populations weathered numerous plagues before antibiotics were discovered. It is edifying that geneticists have found that the human genome is littered with the remnants of our past battles with pathogens.

The difference is that today we know how to effectively apply all of the preventive measures that are at our disposal.

We should keep in mind that the advent of infectious disease adapted to humans is a relatively recent phenomenon.

The ‘Post-Antibiotic Age’, if it comes, represents the ongoing evolution between a microbe and its human host, with hand & surface hygiene reigning supreme as the most effective means of preventing infection.

These elements, along with water sanitation and hygienic treatment of human waste, have formed the basis for the hygiene revolution over the last hundred years.

Within this, the discovery and development of antibiotics is perhaps the short lived apex or crowning glory of the revolution.

To rise to the challenge, we need to recognize that our bodies are complex ecological systems and the maintenance of our barrier function is critical to preventing skin infection and keeping out invading pathogens.

This is no more than an extension and further development of the original hygiene revolution, where we see the true relations between living organisms and the many elements of the environment.

Skin health is critical to maintaining hand hygiene compliance.  Hand hygiene is certainly capable of rising to the challenge, but not if skin is damaged.

In the ‘Post-Antibiotic Age’, maintaining healthy skin will be essential to preventing a wide range of infections caused by strains we helped to create.

Healthy hands are safe hands, but hand hygiene does not have to go it alone if there is a “sea-change” with respect to how agri-food producers and healthcare professionals utilize antibiotics.

CDC Director Frieden stated that, “It’s not too late,” but that there is a list of urgent and life-threatening infections that must be addressed via a more effective collaboration; they include carbapenem-resistant Enterobacteriaceae (CRE), drug resistant gonorrhea and C. difficile.

The WHO has called for the agri-food industry to take the threat of MDRs seriously and curb overuse of antibiotics, particularly as agricultural use is estimated to be at least 1,000-fold greater than use in humans.

In hospitals we must embrace best antibiotic and hygiene practices to make a turn from what the Center for Global Development has called “a decade of neglect“.

We need to “Get Smart” and set targets for reducing antibiotic use in healthcare facilities.

Let’s all appreciate the good microbial flora and fauna that exist on and in us, as without these little creatures life as we know it would not exist.

We should also recognize that the more bad bugs encounter antibiotics, the more likely they are to adapt. As Health Canada puts it, “Do bugs need drugs?“.

While antibiotics have allowed us to temporarily gain the upper hand, nothing lasts forever;  but with a holistic view of hand hygiene there is no reason why we can’t continue to improve our control of infections.

But for this to happen, there can be no excuses or compromises for effective hand hygiene practices.

Please like, share and tweet this article.

Pass it on: New Scientist