Are you sick and tired of opening a new web page and being greeted by a loud, obnoxious advertisement? I sure am.
Pop-up and pop-under ads were bad enough, but now it seems like I can hardly go to a site without having a video start up with a blaring voice braying about a great diet, deal, or the like. We’re looking at you, Facebook.
We can live with ads. We can make a living from websites with ads. Even ad-blocking software, like the great open-source Adblock Plus, allows Acceptable Ads that don’t shove their way into my face. But, this new wave of yakety-yak ads is driving us crazy.
Fortunately, Chrome, its relatives, and Firefox enable you to stop the noise. Here’s how you do it.
The Chrome extension MuteTab gives you control over audio in all your browser tabs. While Chrome includes a built-in ability to control the sound from your tabs, it can still be a pain to track down which tab is being noisy.
MuteTab makes it easy to see what tabs are talking and lets you mute all of them or just the tabs in the background. You can also set the extension to mute all tabs, background tabs, or incognito tabs by default.
I like this extension a lot.
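If you’re curious how this kind of extension works under the hood, here’s a minimal sketch (illustrative only, not MuteTab’s actual code) built on Chrome’s standard tabs API. The chrome.tabs.query and chrome.tabs.update calls are real extension APIs; the sketch assumes a manifest.json that requests the “tabs” permission.

```ts
// Illustrative sketch: assumes manifest.json requests the "tabs" permission.
// Mute every tab that is currently playing sound.
chrome.tabs.query({ audible: true }, (tabs) => {
  for (const tab of tabs) {
    if (tab.id !== undefined) {
      chrome.tabs.update(tab.id, { muted: true });
    }
  }
});

// Variant: silence only noisy background tabs, leaving the active tab alone.
chrome.tabs.query({ audible: true, active: false }, (tabs) => {
  for (const tab of tabs) {
    if (tab.id !== undefined) {
      chrome.tabs.update(tab.id, { muted: true });
    }
  }
});
```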
Firefox makes blocking autoplay audio and video easy.
1) Enter “about:config” into the URL bar.
2) If you get the “This might void your warranty!” warning, accept it and continue on.
3) Now, type “autoplay” into the search box.
4) This will bring up a preference named “media.autoplay.enabled.” Double-click it so that the preference changes to False.
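If you’d rather not click through about:config every time, the same preference can be set once in a user.js file in your Firefox profile folder, which Firefox reads at startup. A minimal sketch, using exactly the preference name from step 4:

```
// user.js — placed in your Firefox profile folder, read at startup.
// Same effect as flipping media.autoplay.enabled to False in about:config.
user_pref("media.autoplay.enabled", false);
```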
Internet Explorer (IE) used to make blocking unwanted audio and video easy. All you had to do was select Tools/Safety from the menu bar and switch on ActiveX Filtering.
No fuss, no muss. Unfortunately, almost all autoplay displays are now using HTML5, and IE 11 doesn’t need ActiveX to run them.
That means if you’re using IE 11, Edge, or Apple Safari, you’re out of luck.
There is one radical method that works. Go to a noisy site, then use the IE 11 menu bar to go into Tools/Internet Options/Security. Once there, move to the Restricted sites entry and then press the Sites button.
From here, add the website to the Restricted list. The next time you visit it, you’ll find that there’s no longer any autoplay content.
You’ll also be missing some other content, but for most sites, the main images and text will still display.
Ideal? No, but it does work.
Hopefully, web browser developers will get on the same page soon and make it easy to turn off all autoplay audio and video content. The advertisers may love it, but the rest of us hate it.
Ever since Charles Babbage’s conceptual, unrealised Analytical Engine in the 1830s, computer science has been trying very hard to race ahead of its time.
Particularly over the last 75 years, there have been many astounding developments – the first electronic programmable computer, the first integrated circuit computer, the first microprocessor.
But the next anticipated step may be the most revolutionary of all.
Quantum computing is the technology that many scientists, entrepreneurs and big businesses expect to provide a, well, quantum leap into the future.
If you’ve never heard of it, there’s a helpful video doing the social media rounds that’s got a couple of million hits on YouTube.
It features the Canadian prime minister, Justin Trudeau, detailing exactly what quantum computing means.
Trudeau was on a recent visit to the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, one of the world’s leading centres for the study of the field.
During a press conference there, a reporter asked him, half-jokingly, to explain quantum computing.
Quantum mechanics is a conceptually counterintuitive area of science that has baffled some of the finest minds – as Albert Einstein said “God does not play dice with the universe” – so it’s not something you expect to hear politicians holding forth on.
Throw it into the context of computing and let’s just say you could easily make Zac Goldsmith look like an expert on Bollywood.
But Trudeau rose to the challenge and gave what many science observers thought was a textbook example of how to explain a complex idea in a simple way.
The concept of quantum computing is relatively new, dating back to ideas put forward in the early 1980s by the late Richard Feynman, the brilliant American theoretical physicist and Nobel laureate.
He conceptualised the possible improvements in speed that might be achieved with a quantum computer. But theoretical physics, while a necessary first step, leaves the real brainwork to practical application.
With normal computers, or classical computers as they’re now called, there are only two options – on and off – for processing information.
A computer “bit”, the smallest unit into which all information is broken down, is either a “1” or a “0”.
And the computational power of a normal computer is dependent on the number of binary transistors – tiny power switches – that are contained within its microprocessor.
Back in 1971 the first Intel processor was made up of 2,300 transistors. Intel now produce microprocessors with more than 5bn transistors. However, they’re still limited by their simple binary options.
But as Trudeau explained, with quantum computers the bits, or “qubits” as they are known, afford far more options owing to the uncertainty of their physical state.
In the mysterious subatomic realm of quantum physics, particles can act like waves, so that they can be particle or wave or particle and wave.
This is what’s known in quantum mechanics as superposition. As a result of superposition a qubit can be a 0 or 1 or 0 and 1. That means it can perform two equations at the same time.
Two qubits can perform four equations. And three qubits can perform eight, and so on in an exponential expansion. That leads to some inconceivably large numbers, not to mention some mind-boggling working concepts.
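To make that exponential expansion concrete, here’s a toy sketch (illustrative only; the basisStates function is made up for this example). An n-qubit register is described by 2^n amplitudes, so each added qubit doubles how many values the machine can hold in superposition at once.

```ts
// Illustrative: the state of an n-qubit register spans 2**n basis states.
function basisStates(nQubits: number): number {
  return 2 ** nQubits;
}

for (const n of [1, 2, 3, 10, 50]) {
  console.log(`${n} qubit(s) -> ${basisStates(n)} simultaneous values`);
}
// 1 -> 2, 2 -> 4, 3 -> 8, 10 -> 1,024, 50 -> roughly 1.13e15
```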
At the moment those concepts are closest to entering reality in an unfashionable suburb in the south-west corner of Trudeau’s homeland.
Throw out that lame old atomic clock that’s only accurate to a few tens of quadrillionths of a second. The U.S. has introduced a new atomic clock that is three times more accurate than previous devices.
Atomic clocks are responsible for synchronizing time for much of our technology, including electric power grids, GPS, and the watch on your iPhone.
On Apr. 3, the National Institute of Standards and Technology (NIST) in Boulder, Colorado, officially launched its newest standard for measuring time using the NIST-F2 atomic clock, which has been under development for more than a decade.
“NIST-F2 is accurate to one second in 300 million years,” said Thomas O’Brian, who heads NIST’s time and frequency division, during a press conference April 3.
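That figure is easy to sanity-check with back-of-the-envelope arithmetic (a sketch, not NIST’s formal uncertainty budget): one second gained or lost over 300 million years works out to a fractional error of roughly one part in 10^16, the “sixteenth decimal place” mentioned below.

```ts
// "One second in 300 million years" expressed as a fractional uncertainty.
const secondsPerYear = 365.25 * 24 * 3600;           // ≈ 3.156e7 seconds
const fractionalError = 1 / (300e6 * secondsPerYear);
console.log(fractionalError.toExponential(2));       // ≈ 1.06e-16
```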
The clock was recently certified by the International Bureau of Weights and Measures as the world’s most accurate time standard.
The advancement is more than just a feather in the cap for metrology nerds. Precise timekeeping underpins much of our modern world.
GPS, for instance, needs accuracy of about a billionth of a second in order to keep you from getting lost. These satellites rely on high precision coming from atomic clocks at the U.S. Naval Observatory.
GPS, in turn, is used for synchronizing digital networks such as cell phone systems and the NTP servers that form the internet’s timing backbone.
Your smartphone doesn’t display the time to the sixteenth decimal place, but it still relies on the frequency standards coming from NIST’s clocks, which make their measurements while living in a tightly controlled lab environment.
Real world clocks must operate under strained conditions such as temperature swings, significant vibration, or changing magnetic fields that degrade and hamper their accuracy.
It’s important then that the ultimate reference standard has much better performance than the real world technologies.
What will we do once we reach the ability to break down time into super-tiny, hyper-accurate units? Nobody knows.
Engineers from Cornell and Honeywell Aerospace have demonstrated a new method for remotely vaporizing electronics into thin air, giving devices the ability to vanish – along with their valuable data – if they were to get into the wrong hands.
This unique ability to self-destruct is at the heart of an emerging technology known as transient electronics, in which key portions of a circuit, or the whole circuit itself, can discreetly disintegrate or dissolve.
And because no harmful byproducts are released upon vaporization, engineers envision biomedical and environmental applications along with data protection.
There are a number of existing techniques for triggering the vaporization, each with inherent drawbacks.
Some transient electronics use soluble conductors that dissolve when contacted by water, requiring the presence of moisture.
Others disintegrate when they reach a specific temperature, requiring a heating element and power source to be attached.
Cornell engineers have created a transient architecture that evades these drawbacks by using a silicon-dioxide microchip attached to a polycarbonate shell.
Hidden within the shell are microscopic cavities filled with rubidium and sodium bifluoride – chemicals that can thermally react and decompose the microchip.
Ved Gund, Ph.D. ’17, led the research as a graduate student in the Cornell SonicMEMS Lab, and said the thermal reaction can be triggered remotely by using radio waves to open graphene-on-nitride valves that keep the chemicals sealed in the cavities.
“The encapsulated rubidium then oxidizes vigorously, releasing heat to vaporize the polycarbonate shell and decompose the sodium bifluoride. The latter controllably releases hydrofluoric acid to etch away the electronics,” said Gund.
Amit Lal, professor of electrical and computer engineering, said the unique architecture offers several advantages over previously designed transient electronics, including the ability to scale the technology.
“The stackable architecture lets us make small, vaporizable, LEGO-like blocks to make arbitrarily large vanishing electronics,” said Lal.
Gund added that the technology could be integrated into wireless sensor nodes for use in environmental monitoring.
“For example, vaporizable sensors can be deployed with the internet of things platform for monitoring crops or collecting data on nutrients and moisture, and then made to vanish once they accomplish these tasks,” said Gund.
Lal, Gund and Honeywell Aerospace were recently issued a patent for the technology, and the SonicMEMS Lab is continuing to research new ways the architecture can be applied toward transient electronics as well as other uses.
“Our team has also demonstrated the use of the technology as a scalable micro-power momentum and electricity source, which can deliver high peak powers for robotic actuation,” said Lal.
Fabrication of the polycarbonate shell was completed by Christopher Ober, professor of materials science and engineering, with other components of the architecture provided by Honeywell Aerospace.
Portions of the research were funded under the Defense Advanced Research Projects Agency’s Vanishing Programmable Resources program.
One thing that Google left unannounced during its Pixel 2 launch event on October 4th is being revealed today: it’s called the Pixel Visual Core, and it is Google’s first custom system-on-a-chip (SOC) for consumer products.
You can think of it as a very scaled-down and simplified, purpose-built version of Qualcomm’s Snapdragon, Samsung’s Exynos, or Apple’s A series chips. The purpose in this case?
Accelerating the HDR+ camera magic that makes Pixel photos so uniquely superior to everything else on the mobile market.
Google plans to use the Pixel Visual Core to make image processing on its smartphones much smoother and faster, but not only that: the Mountain View company also plans to use it to open up HDR+ to third-party camera apps.
The coolest aspect of the Pixel Visual Core might be that it’s already in Google’s devices. The Pixel 2 and Pixel 2 XL both have it built in, but lying dormant until activation at some point “over the coming months.”
It’s highly likely that Google didn’t have time to finish optimizing the implementation of its brand-new hardware, so instead of yanking it out of the new Pixels, it decided to ship the phones as they are and then flip the Visual Core activation switch when the software becomes ready.
In that way, it’s a rather delightful bonus for new Pixel buyers.
The Pixel 2 devices are already much faster at processing HDR shots than the original Pixel, and when the Pixel Visual Core is live, they’ll be faster and more efficient.
Looking at the layout of Google’s chip, which is dubbed an Image Processing Unit (IPU) for obvious reasons, we see something sort of resembling a regular 8-core SOC.
Technically, there’s a ninth core, in the shape of the power-efficient ARM Cortex-A53 CPU in the top left corner.
But the important thing is that each of those eight processors that Google designed has been tailored to handle HDR+ duties, resulting in HDR+ performance that is “5x faster and [uses] less than 1/10th the energy” of the current implementation, according to Google.
This is the sort of advantage a company can gain when it shifts to purpose-specific hardware rather than general-purpose processing.
Google says that it will enable the Pixel Visual Core as a developer option in its preview of Android 8.1 Oreo, before updating the Android Camera API to allow third-party camera devs access to HDR+.
Obviously, all of this tech is limited strictly to the Pixel 2 generation, ruling out owners of the original Pixel and other Android users.
As much as Google likes to talk about enriching the entire Android ecosystem, the company is evidently cognizant of how much of a unique selling point its Pixel camera system is, and it’s working hard to develop and expand the lead that it has.
As a final note, Google’s announcement today says that HDR+ is only the first application to run on the programmable Pixel Visual Core, and with time we should expect to see more imaging and machine learning enhancements being added to the Pixel 2.
For the last century, medical professionals and microbiologists have waged a war against germs of every type and, with the breakthrough of antibiotics, changed the world in which we live.
It also changed the world for our symbionts, the 4 to 6 pounds of bacteria, fungi and viruses that have hung on to our species through thick and thin for eons; to them, we are a movable feast.
It was indeed a war that we appeared to be winning. We thought we were firmly living in the ‘Antibiotic-Age’ and it was here to stay for all time.
However, while we were basking in its potency, we were also rapidly and unwittingly sowing the seeds of its demise.
In a recent landmark report, US health policymakers warn that, with mounting evidence of superbugs overcoming our antibiotics, our situation is extremely serious.
The report gives a glimpse of the world to come, as even now there are a dozen different drug resistant microbial species that have totally overcome our existing antibiotics.
These resistant strains are now responsible for causing 2 million infections and 23,000 deaths each year in the US alone.
According to the WHO, the rapid emergence of multi-drug resistant (MDR) strains calls for a comprehensive and coordinated response to prevent a global catastrophe.
The WHO warns that, “...many infectious diseases are rapidly becoming untreatable and uncontrollable.”
CDC director Tom Frieden says that we must take urgent action to “change the way antibiotics are used” by cutting unneeded use in humans and animals and take basic steps to prevent infections in the first place.
The tools we have at our disposal, besides tracking resistant infections, are vaccines, safe food & patient infection control practices, paired with effective and enlightened hand hygiene.
Human populations weathered numerous plagues before antibiotics were discovered. It is edifying that geneticists have found that the human genome is littered with the remnants of our past battles with pathogens.
The difference is that today we know how to effectively apply all of the preventive measures that are at our disposal.
We should keep in mind that the advent of infectious disease adapted to humans is a relatively recent phenomenon.
The ‘Post-Antibiotic Age’, if it comes, represents the ongoing evolution between a microbe and its human host, with hand & surface hygiene reigning supreme as the most effective means of preventing infection.
These elements, along with water sanitation and hygienic treatment of human waste, have formed the basis for the hygiene revolution over the last hundred years.
Within this, the discovery and development of antibiotics is perhaps the short-lived apex or crowning glory of the revolution.
To rise to the challenge, we need to recognize that our bodies are complex ecological systems and the maintenance of our barrier function is critical to preventing skin infection and keeping out invading pathogens.
This is no more than an extension and further development of the original hygiene revolution, where we see the true relations between living organisms and the many elements of the environment.
Skin health is critical to maintaining hand hygiene compliance. Hand hygiene is certainly capable of rising to the challenge, but not if skin is damaged.
In the ‘Post-Antibiotic Age’, maintaining healthy skin will be essential to preventing a wide range of infections caused by strains we helped to create.
Healthy hands are safe hands, but hand hygiene does not have to go it alone if there is a “sea-change” with respect to how agri-food producers and healthcare professionals utilize antibiotics.
CDC Director Frieden stated that, “It’s not too late,” but that there is a list of urgent and life-threatening infections that must be addressed via a more effective collaboration; they include carbapenem-resistant Enterobacteriaceae (CRE), drug resistant gonorrhea and C. difficile.
The WHO has called for the agri-food industry to take the threat of MDR strains seriously and curb overuse of antibiotics, particularly as the industry’s antibiotic use is estimated to be at least 1,000-fold greater than use in humans.
In hospitals we must embrace best antibiotic and hygiene practices to make a turn from what the Center for Global Development has called “a decade of neglect”.
We need to “Get Smart” and set targets for reducing antibiotic use in healthcare facilities.
Let’s all appreciate the good microbial flora and fauna that exist on and in us, as without these little creatures life as we know it would not exist.
We should also recognize that the more bad bugs encounter antibiotics, the more likely they are to adapt. As Health Canada puts it, “Do bugs need drugs?”
While antibiotics have allowed us to temporarily gain the upper hand, nothing lasts forever; but with a holistic view of hand hygiene there is no reason why we can’t continue to improve our control of infections.
But for this to happen, there can be no excuses or compromises for effective hand hygiene practices.
After 101 days of traveling by plane, train, automobile, Korean warship, zipline and even robot, the Olympic torch will finally reach the site of the Winter Games in PyeongChang, South Korea.
This Friday, a lucky honoree will use it to light the Olympic cauldron in a grand, symbolic start to the games.
While the blaze looks like any other, its origins are special: It was lit not with matches or a Zippo lighter, but with a parabolic mirror, echoing rituals from Ancient Greece.
For those who need to brush up on their algebra: a parabola is a particular type of arc, one whose every point lies the same distance from a fixed point (the focus) as from a fixed line (the directrix).
Mathematically, these symmetrical curves all take some form of the equation y = x². Revolve a parabola around its axis, and you have the shape of a parabolic mirror.
Unlike most curved surfaces, which scatter incoming light in many directions, a parabolic mirror reflects every ray arriving parallel to its axis to a single point: the focus.
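Here’s a quick numerical sanity check of that property (an illustrative sketch; the reflectedAxisCrossing function is invented for this example). For the parabola y = ax², reflecting a vertical ray off the curve always sends it through the point (0, 1/(4a)), wherever the ray happens to land.

```ts
// Trace a vertical ray that hits y = a*x^2 at x = x0 (x0 nonzero), and
// compute where the reflected ray crosses the parabola's axis (x = 0).
function reflectedAxisCrossing(a: number, x0: number): number {
  const y0 = a * x0 * x0;              // point of incidence on the curve
  const rx = -4 * a * x0;              // reflected direction (unnormalized),
  const ry = 1 - 4 * a * a * x0 * x0;  // from reflecting (0, -1) about the normal
  const t = -x0 / rx;                  // ray parameter at which x reaches 0
  return y0 + t * ry;                  // height where the ray crosses the axis
}

const a = 1; // the parabola y = x^2
for (const x0 of [0.1, 0.5, 1, 3]) {
  console.log(reflectedAxisCrossing(a, x0)); // prints 0.25 every time: the focus
}
```

However far from the axis the ray strikes, the crossing height is always 1/(4a), which is exactly the focal point a solar concentrator exploits.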
These reflective surfaces are used in a number of devices to concentrate not only reflected light, but also sound or radio waves.
Satellite dishes, some types of microphones, reflecting telescopes and even car headlights benefit from the reflective properties of parabolic dishes.
In the case of the Olympics, when the sun shines on a parabolic dish, known to the ancient Greeks as a Skaphia or crucible, the rays all bounce off its sides and collect at one blazing hot point.
Put a piece of paper—or a gas torch—in that focal point, and you get fire.
A lone parabolic dish does a decent job heating things up, achieving temperatures of at least hundreds of degrees.
“That’s really very easy to reach,” says Jeffrey Gordon, professor of physics at Ben-Gurion University of the Negev in Israel.
Some may even be able to reach temperatures in the thousands of degrees, says Jonathan Hare, a British physicist and science communicator.
Hare has witnessed parabolic mirrors vaporize carbon, something that only happens at temps over 2,000 degrees Celsius.
If conditions are absolutely ideal, light can be concentrated to match the temperature of its source, Gordon explains. In the case of the sun, whose surface sits at roughly 5,800 kelvin, that means the upper temperature limit when concentrating its rays is around 10,000 degrees Fahrenheit.
“No matter what you do, no matter how brilliant you are, you can never bring any object on Earth to a higher temperature [by concentrating sunlight],” says Gordon.
But, of course, conditions are never ideal. First, some of that heat is lost to the atmosphere.
Then, some is absorbed into your reflective surface, and still another fraction is scattered away due to imperfections in the mirror.
“The parabola is a good concentrator but not a perfect concentrator,” Gordon adds.
Gordon’s research is focused on pushing the limits of sun concentration to the max.
Using multiple concentrating mirrors, his lab has achieved temperatures of nearly 3,000 degrees Celsius, applying the heat for a range of feats, including a sun-powered surgical laser and a reactor for creating nanomaterials.
But now, at some truly blistering temps, he has a different problem.
“We start to destroy everything,” he says.
In the case of Olympic torch lighting, the issues are somewhat more mundane. For one, there’s the potential for clouds.
In the days leading up to the modern torch lighting ceremony at the ancient temple of Hera in Olympia, the organizers light a flame in a parabolic dish, just in case clouds obscure the sun on the day of the ceremony.
The preparedness proved useful at this year’s event, which took place on the drizzly morning of October 24, 2017.
People have practiced the concentration of the sun’s rays for thousands of years. The most famous example of solar concentration comes from 212 B.C., during the siege of the Greek city of Syracuse in Sicily.
The Greek mathematician and inventor Archimedes used the parabolic mirror, so the story goes, to deter a fleet of approaching ships, crafting a solar death ray using panels of what was likely polished bronze.
Though there’s reason to doubt the veracity of these somewhat fantastical claims—including a failed MythBusters’ attempt to replicate the feat—the ancient Greeks did have a handle on the magic of these special curves.
The first torches used in the games were modeled after ancient designs, writes Chapoutot. Built by the Krupp Company, Germany’s largest armament producer, each one only burned for 10 minutes.
The torches used today have come a long way.
In recent years, organizers have opted for high-tech features to keep the flame lit, no matter the weather.
This year’s torch, dreamed up by Korean designer Young Se Kim, has four separate walls to ensure the flame can withstand winds up to 78 mph.
It also has a tri-layered umbrella-like cover to prevent rain from extinguishing the blaze. It can even withstand temperatures down to -22 degrees Fahrenheit thanks to its internal circulation system.
If the flame goes out en route, support is always nearby with a backup flame, lit by the same parabolic mirror, to swiftly relight it. Though the flame has averted major disasters this year, its robot transporter almost tipped over.
Organizers rushed to right the bot, preserving the flame.
So during Friday’s opening ceremony, as the Olympic cauldron is lit, take a moment to appreciate the fire that roared to life under a glowing bath of concentrated rays of sunlight.
As Greek archaeologist Alexander Philadelphus described during the planning of the first torch relay, the warm glow wasn’t lit by modern mechanics, but rather came directly from Apollo, “the god of light himself.”
An Argentine mom filming her 12-year-old son fooling around with an umbrella ended up capturing his brush with death as a lightning bolt struck just feet away.
The video shows the unidentified pre-teen standing under a roof drainpipe, with water pouring out onto the umbrella.
Seconds later, he walks out into a garden in the city of Posadas, in the northeastern Argentine province of Misiones.
Then out of nowhere, a powerful bolt of lightning strikes down just steps in front of the boy — causing a nearby fence to erupt in flames.
The boy’s frightened mom, Carolina Kotur, shrieked and quickly dropped her phone.
“It was morning, I was with my daughter in the room calming her, because she is scared of lightning,” Kotur told local media.
“Then the lady who works in my house told me that my son was walking in the rain and I started filming because I was making a joke, and right next to him the lightning struck. Thank God nothing happened to him.”
Others in the region were not so fortunate during the fierce storm, Central European News reported.
Brothers Sinforiano Venialgo Vazquez, 43, and Simon Venialgo Vazquez, 41, were killed when lightning struck near their home in the Paraguayan town of San Pedro del Parana — 68 miles from where the young boy was nearly hit by the bolt.
The cause of death in both cases was electrocution, though no further details were available, according to the report.
Lightning strikes also killed animals in the Santa Rosa area, on the Argentine side of the Parana River, the outlet reported.