
How To Prevent Your Computer From Overheating (And Why It’s Important)

Keeping your computer running within safe temperatures is important, especially as the temperature rises outside. Here’s how to make sure your computer’s not overheating—and how to fix it if it is.

The cooling system of your computer is one of the most important features of the device.

Without the cooling system, the electrical components of your computer couldn’t function for long; overheating would damage the integral parts that make your computer work.

The heat has to be dissipated in order to keep everything working within safe operating temperatures.

Why an Overheated Computer Is Dangerous

Simply put, if your computer becomes too hot, the hardware inside can be damaged outright or have its lifespan cut short, leading to irreparable damage and potential data loss.

Besides losing your data, heat wears away at your computer’s internal organs (the motherboard, CPU, and more), significantly shortening its lifespan.




Besides that most obvious reason to keep your computer cool, there’s another: a hot computer runs slower than a cool one.

So to prevent your computer from slowing down, make sure that it is running at a moderate or low temperature.

What Temperature Should My Computer Be Running At?

Because of the different types of computer makes and models out there, the safe temperature range your computer should run at varies.

The safe operating range depends on things like processor type, manufacturer, and other factors, so no single answer applies to all CPUs. As a rough rule of thumb, though, most modern CPUs idle somewhere around 30–50 °C and shouldn’t spend much time above 90 °C under load; check your manufacturer’s specifications for the exact limits.

How to Check the Temperature of Your PC

Sticking your hand over your computer’s ventilation system or case isn’t an accurate way to judge how hot your computer is running.

So how do you determine how hot your system’s running? You’ve got a few options.

To check the computer’s temperature without additional software, you can check your system BIOS. Restart your computer, and on the boot screen, you should have an option to press a key (often Delete) to enter the BIOS.

Once you enter Setup, navigate the BIOS menu using the on-screen instructions. You should be able to find a menu that deals with the computer’s hardware monitors and CPU.


There should be a field that lists your CPU temperature. Rather not restart your computer to check the temp?

We don’t blame you. Plenty of system monitoring tools can give you a temperature read-out, like the free Windows program HWMonitor, which displays the temperatures of the CPU, each of its cores, the video card, and the hard drives, along with the minimum and maximum values recorded for each.

Unfortunately, you’ll need to make sure that your hardware is supported because the program can only read certain sensors.

We’ve featured several system monitoring options in the past that can also handle these duties, like the cross-platform GKrellM (Windows/Mac/Linux) and the system-tray-friendly apps Real Temp, Core Temp, and SpeedFan.

SpeedFan has the added bonus of being able to show how fast each fan is spinning, complete with RPM readings.
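If you’d rather script the check yourself, here’s a minimal Python sketch using the psutil library. Treat it as a starting point rather than a universal solution: psutil only exposes temperature sensors on some platforms (mainly Linux), so on Windows you’ll still want a tool like HWMonitor.

```python
# Requires: pip install psutil
# Note: sensors_temperatures() is only implemented on some platforms
# (mainly Linux); elsewhere it returns an empty dict.
import psutil

temps = psutil.sensors_temperatures()
if not temps:
    print("No temperature sensors exposed on this platform.")
else:
    for chip, readings in temps.items():
        for r in readings:
            label = r.label or chip
            # r.high and r.critical may be None if the driver doesn't report them
            print(f"{label}: {r.current:.1f} °C (high={r.high}, critical={r.critical})")
```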


Could You Kill A Robot Begging For Its Life?

When Sarah Connor growls out “You’re terminated” and crushes the relentless machine in The Terminator, we all cheer. But when the Terminator gives that thumbs up at the end of Judgment Day, after bonding with John Connor and saving his life, we shed a tear.

Even when they’re not scary or badass or sympathetic movie characters, we for some reason care about robots.

But it’s hard to say how exactly human-bot relationships compare to relationships between humans, which is why robotics professor Christoph Bartneck designed an experiment to test how we interact with “people” made from metal and plastic and circuitry.

Turns out, watching a robot beg for its life is seriously disturbing.

Bartneck’s study built on the moral quandary of HAL from 2001: A Space Odyssey. “By pulling out HAL’s memory modules the computer slowly loses its intelligence and consciousness. Its speech becomes slow and its arguments approach the style of children.




“At the end, HAL sings Harry Dacre’s “Daisy Bell” song with decreasing tempo, signifying its imminent death,” Bartneck writes.

“The question if Bowman committed murder, although in self-defense, or simply conducted a necessary maintenance act, is an essential topic in the field of robot ethics…Various factors might influence the decision on switching off a robot. The perception of life largely depends on the observation of intelligent behavior.”

The study seeks to grapple with the questions sci-fi has been asking for decades: are robots alive? What constitutes life? But it’s not really trying to answer those questions.

Instead, the study builds on previous research that was originally conducted with computers, which was, itself, based on a simple concept in human relationships: reciprocity.

The study he refers to, carried out by Stanford professor Clifford Nass in 1996, found that participants would help a computer that had been feeding them useful answers far more than a computer programmed to be all but worthless.


More interestingly, they’d actually help a computer they hadn’t interacted with more than the worthless computer. It’s the scientific version of a truth we know all too well: People get pissed at machines.

But what happens when that machine has human characteristics?

The iCat robots in Bartneck’s study, like Nass’, were split into helpful and unhelpful groups, and were further divided into agreeable and non-agreeable categories.

One would be helpful and warm, the other curt when talking. They also had faces, with 13 servos moving their eyes and mouths.

At the end of the experiment, participants were asked to shut down the robots, wiping their memories; as the participants turned a dial to shut off the robots, the robots’ speech slowed down and they asked to not be turned off.

Every participant turned the robot off, but the study confirmed the theory of reciprocity: “The robots’ [perceived] intelligence had a strong effect on the users’ hesitation to switch it off, in particular if the robot acted agreeable.

“Participants hesitated almost three times as long to switch off an intelligent and agreeable robot (34.5 seconds) compared to an unintelligent and non agreeable robot (11.8 seconds).

“This does confirm the Media Equation’s prediction that the social rule of Manus Manum Lavet (One hand washes the other) does not only apply to computers, but also to robots.

“However, our results do not only confirm this rule, but also it suggests that intelligent robots are perceived to be more alive.

“The Manus Manum Lavet rule can only apply if the switching off is perceived as having negative consequences for the robot. Switching off a robot can only be considered a negative event if the robot is to some degree alive.”

Watching the robot’s face shut down and hearing its voice slow and slur is disturbing, but we could easily imagine scientists designing a study with a more complex bot being downright horrifying. Don’t do it, Japan.


You Probably Don’t Need A New Computer — Here’s Why

Computer shopping is fun for a select group of people and a big hassle for everybody else. There are plenty of terms you need to know, plus PC components you have to think about and computer-buying mistakes you need to avoid.

You’ll have to determine exactly what you want out of a new machine, thinking through both the hardware and the software you need.

It’s also important to figure out when you should buy a new computer, both in terms of seasonal sales and product upgrade cycles.




But you also need to know when you really need a new computer and when you’re just itching to upgrade a machine that may work just fine for another year or two.

Read on to check out some of the reasons why you may not need a new computer as soon as you think.

You may be surprised to find that you actually can wait to spend all that money (and save the hassle of computer shopping for another day).

1. Your old computer is working fine

We all want the latest gadgets. They’re fun to read about and even more fun to get our hands on. But if you don’t need a ton of power to perform most of your computing tasks, chances are good your old computer is still working fine.

Checking email, editing documents, and browsing aren’t typically tasks that demand a lot of power from your computer.

Even if your computer is slower than it was when you first got it, chances are good that it isn’t sluggish enough to really slow you down.

2. You haven’t been maintaining your old computer

If your primary reason for shopping for a new computer is that your old one is too slow, then you may want to make sure that the slowness isn’t fixable.

Have you been running antivirus scans? And have you been uninstalling unneeded software? How about clearing out unneeded files to free up hard drive space?

Have you made sure that only the programs you need are starting up when you turn on the computer? And have you been keeping the operating system and all the apps you use up to date?

If you’ve been neglecting your computer, you should perform some much-needed maintenance before deciding it’s time for a new computer.

3. You can speed up your old computer

There are many reasons that your old computer may be running slow. But as it turns out, there are also some easy ways to speed up a slow computer.

You can make sure that your operating system and other software are updated. Or, you can clear out the clutter that accumulates over time.  You can also free up some hard drive space, and even check for spyware.
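If you want a quick, concrete starting point for the “free up hard drive space” step, here’s a small Python sketch using only the standard library. It reports how full the system drive is and lists the ten largest files under a folder of your choosing; the Downloads folder below is just an example target.

```python
import os
import shutil
from pathlib import Path

# How full is the system drive?
total, used, free = shutil.disk_usage("/")  # use "C:\\" on Windows
print(f"Disk: {free / 1e9:.1f} GB free of {total / 1e9:.1f} GB")

# Find the ten largest files under a directory (e.g. your Downloads folder).
target = Path.home() / "Downloads"  # example path; point this anywhere
sizes = []
for root, _dirs, files in os.walk(target):
    for name in files:
        p = Path(root) / name
        try:
            sizes.append((p.stat().st_size, p))
        except OSError:
            pass  # skip files we can't stat (permissions, broken links)

for size, path in sorted(sizes, reverse=True)[:10]:
    print(f"{size / 1e6:8.1f} MB  {path}")
```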

The point is that before you throw in the towel and give up on your old computer, it’s probably worth it to make sure that you can’t speed it up with an hour or two of maintenance.

You can even completely reinstall the operating system and start fresh with the computer you already have.

There’s one more practical reason to put off buying a new computer: avoiding the annoyance of setting up a new one.

Unless you’re truly a computer nerd (and if you are, you probably aren’t looking for reasons to avoid buying a new computer), setup is annoying.

It can get time-consuming to do correctly and often involves uninstalling a lot of bloatware.

If you aren’t going to see much in the way of performance improvements or new functionality with a new computer, you may want to wait until the hassle of setting up a new machine is really worth it.


Are Quantum Computers On The Verge Of A Breakthrough?


For years now, quantum computers have been just out of reach, but some exciting new developments over the last year indicate that the age of quantum computing is a lot closer than we think.


Has The Age Of Quantum Computing Arrived?

Ever since Charles Babbage’s conceptual, unrealised Analytical Engine in the 1830s, computer science has been trying very hard to race ahead of its time.

Particularly over the last 75 years, there have been many astounding developments – the first electronic programmable computer, the first integrated circuit computer, the first microprocessor.

But the next anticipated step may be the most revolutionary of all.

Quantum computing is the technology that many scientists, entrepreneurs and big businesses expect to provide a, well, quantum leap into the future.

If you’ve never heard of it there’s a helpful video doing the social media rounds that’s got a couple of million hits on YouTube.

It features the Canadian prime minister, Justin Trudeau, detailing exactly what quantum computing means.




Trudeau was on a recent visit to the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, one of the world’s leading centres for the study of the field.

During a press conference there, a reporter asked him, half-jokingly, to explain quantum computing.

Quantum mechanics is a conceptually counterintuitive area of science that has baffled some of the finest minds – as Albert Einstein said “God does not play dice with the universe” – so it’s not something you expect to hear politicians holding forth on.

Throw it into the context of computing and let’s just say you could easily make Zac Goldsmith look like an expert on Bollywood.

But Trudeau rose to the challenge and gave what many science observers thought was a textbook example of how to explain a complex idea in a simple way.

The concept of quantum computing is relatively new, dating back to ideas put forward in the early 1980s by the late Richard Feynman, the brilliant American theoretical physicist and Nobel laureate.

He conceptualised the possible improvements in speed that might be achieved with a quantum computer. But theoretical physics, while a necessary first step, leaves the real brainwork to practical application.

With normal computers, or classical computers as they’re now called, there are only two options – on and off – for processing information.

A computer “bit”, the smallest unit into which all information is broken down, is either a “1” or a “0”.

And the computational power of a normal computer is dependent on the number of binary transistors – tiny power switches – that are contained within its microprocessor.

Back in 1971 the first Intel processor was made up of 2,300 transistors. Intel now produce microprocessors with more than 5bn transistors. However, they’re still limited by their simple binary options.

But as Trudeau explained, with quantum computers the bits, or “qubits” as they are known, afford far more options owing to the uncertainty of their physical state.

In the mysterious subatomic realm of quantum physics, particles can act like waves, so that they can be particle or wave or particle and wave.

This is what’s known in quantum mechanics as superposition. As a result of superposition a qubit can be a 0 or 1 or 0 and 1. That means it can perform two equations at the same time.

Two qubits can perform four equations. And three qubits can perform eight, and so on in an exponential expansion. That leads to some inconceivably large numbers, not to mention some mind-boggling working concepts.
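To put numbers on that exponential expansion, here’s a quick Python sketch (using numpy, purely as an illustration): simulating n qubits on a classical machine takes a state vector of 2^n complex amplitudes, which is exactly why the numbers blow up so fast.

```python
import numpy as np

# A classical simulation of n qubits needs 2**n complex amplitudes.
for n in (1, 2, 3, 10, 30):
    amplitudes = 2 ** n
    # complex128 = 16 bytes per amplitude
    print(f"{n:2d} qubits -> {amplitudes:>13,} amplitudes "
          f"(~{amplitudes * 16 / 1e9:.3g} GB to store)")

# Example: an equal superposition over all 2**3 = 8 basis states.
n = 3
state = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)
print("3-qubit uniform superposition:", state.round(3))
print("probabilities sum to", float(np.sum(np.abs(state) ** 2)))
```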

At the moment those concepts are closest to entering reality in an unfashionable suburb in the south-west corner of Trudeau’s homeland.


Google’s First Mobile Chip Is An Image Processor Hidden In The Pixel 2

One thing that Google left unannounced during its Pixel 2 launch event on October 4th is being revealed today: it’s called the Pixel Visual Core, and it is Google’s first custom system-on-a-chip (SOC) for consumer products.

You can think of it as a very scaled-down and simplified, purpose-built version of Qualcomm’s Snapdragon, Samsung’s Exynos, or Apple’s A series chips. The purpose in this case?

Accelerating the HDR+ camera magic that makes Pixel photos so uniquely superior to everything else on the mobile market.
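Google has described HDR+ as capturing a burst of deliberately underexposed frames, then aligning and merging them to suppress noise before brightening the result. Here’s a toy Python sketch of just the merge-and-brighten idea, assuming the frames are already aligned; the real pipeline is vastly more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a burst: the same underexposed scene captured 8 times,
# each frame corrupted by independent sensor noise. (Real HDR+ must also
# align the frames; we assume they're pre-aligned here.)
scene = rng.uniform(0.0, 0.2, size=(4, 4))          # dark "ground truth"
burst = [scene + rng.normal(0, 0.02, scene.shape) for _ in range(8)]

# Merging the burst averages away noise (it shrinks roughly as 1/sqrt(N)).
merged = np.mean(burst, axis=0)

# Then tone-map the clean-but-dark result up to a usable brightness.
tone_mapped = np.clip(merged * 4.0, 0.0, 1.0)

print("per-frame noise :", float(np.abs(burst[0] - scene).mean()))
print("merged noise    :", float(np.abs(merged - scene).mean()))
```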

Google plans to use the Pixel Visual Core to make image processing on its smartphones much smoother and faster, but not only that: the Mountain View company also plans to use it to open up HDR+ to third-party camera apps.




The coolest aspect of the Pixel Visual Core might be that it’s already in Google’s devices. The Pixel 2 and Pixel 2 XL both have it built in, but lying dormant until activation at some point “over the coming months.”

It’s highly likely that Google didn’t have time to finish optimizing the implementation of its brand-new hardware, so instead of yanking it out of the new Pixels, it decided to ship the phones as they are and then flip the Visual Core activation switch when the software becomes ready.

In that way, it’s a rather delightful bonus for new Pixel buyers.

The Pixel 2 devices are already much faster at processing HDR shots than the original Pixel, and when the Pixel Visual Core is live, they’ll be faster and more efficient.

Looking at the layout of Google’s chip, which is dubbed an Image Processing Unit (IPU) for obvious reasons, we see something sort of resembling a regular 8-core SOC.

Technically, there’s a ninth core, in the shape of the power-efficient ARM Cortex-A53 CPU in the top left corner.

But the important thing is that each of those eight processors that Google designed has been tailored to handle HDR+ duties, resulting in HDR+ performance that is “5x faster and [uses] less than 1/10th the energy” of the current implementation, according to Google.

This is the sort of advantage a company can gain when it shifts to purpose-specific hardware rather than general-purpose processing.

Google says that it will enable Pixel Visual Core as a developer option in its preview of Android Oreo 8.1, before updating the Android Camera API to allow access to HDR+ for third-party camera devs.

Obviously, all of this tech is limited strictly to the Pixel 2 generation, ruling out current Pixel owners and other Android users.

As much as Google likes to talk about enriching the entire Android ecosystem, the company is evidently cognizant of how much of a unique selling point its Pixel camera system is, and it’s working hard to develop and expand the lead that it has.

As a final note, Google’s announcement today says that HDR+ is only the first application to run on the programmable Pixel Visual Core, and with time we should expect to see more imaging and machine learning enhancements being added to the Pixel 2.


You Could Be Mining This Cryptocurrency Without Knowing It

Zcash is a new virtual currency that claims to be more anonymous than bitcoin, and has garnered interest from academics, investors, and criminals.

Perhaps thanks to the latter group, hackers are allegedly installing malware on unsuspecting users’ computers that forces them to mine Zcash for the hackers’ own profit.

The malware is distributed via links for things like pirated software, according to a blog post published on Monday by Kaspersky Lab security researcher Aleks Gostev.

Once installed, it forces a person’s computer to mine Zcash—basically solving math problems for a reward in the currency—and funnels the funds back to the attacker.
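To be clear about what “solving math problems” means here: Zcash actually uses a memory-hard puzzle called Equihash, but the general mine-until-lucky loop is easiest to see in a bitcoin-style toy sketch like the one below. It’s illustrative only, not Zcash’s real algorithm.

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int = 20) -> int:
    """Toy proof-of-work: find a nonce whose SHA-256 hash starts with
    `difficulty_bits` zero bits. (Zcash really uses Equihash, a memory-hard
    puzzle, but the mine-until-lucky loop is the same idea.)"""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# About a million attempts on average at 20 bits -- a few seconds in pure Python.
nonce = mine(b"example block header")
print("found nonce:", nonce)
```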




According to Gostev, around 1,000 possibly infected computers have been identified. This many zombie computers mining Zcash could generate as much as $75,000 a year in income, Gostev wrote.

“Downloading mining software to a PC doesn’t necessarily have severe consequences for a user’s data,” Gostev wrote me in an email.

“However, it does have the effect of increasing the energy consumption level of their machine, which results in more expensive electricity bills.”

“Another consequence is a heavy load on the PC’s RAM, because mining software consumes up to 90% of available memory,” he continued, “which leads to a significant performance slowdown.”

According to Zooko Wilcox, founder and CEO of Zcash, the most users can do at this point is protect themselves.

“Unfortunately, we have no way to prevent this kind of thing, since Zcash is an open source network, like Bitcoin, that nobody (including us) controls,” Wilcox wrote me in an email.

“Our recommendation to security companies that detect this kind of activity, like Kaspersky, is that their software should alert users when potentially malicious software is detected, and give the user the option of shutting it down or, if it was deliberately installed by the user, allowing it to run.”

This sort of thing isn’t unique in the world of virtual currencies. Bitcoin, for its part, has seen a number of botnet mining pools over the past several years.

Even some bitcoin alternatives, like Dogecoin, have been fertile grounds for similar attacks.

Botnet mining on these currencies has mostly died out because they were designed so that mining difficulty increases over time and the rewards continually diminish.

In this situation, even an army of regular PCs can’t compete with the specialized hardware employed by big-business miners, known as ASICs.

Wilcox contended in an email that it’s incorrect to describe non-consensual Zcash mining as a “botnet,” writing, “A botnet is where you have a controller that can deploy software automatically to a large number of compromised machines.”

The potential difference for Zcash, however, is that the currency is touted by its creators as being resistant to the use of ASICs, making mining with plebeian hardware a profitable approach over the long-term.

Zcash could theoretically be mined on a smartphone.

This may make Zcash mining less resource-intensive and thus more decentralized, but, somewhat ironically, it may also have the unintended side effect of making botnet mining with malware a consistently attractive option, despite diminishing returns.

However, according to Marco Krohn, chief financial officer at cryptocurrency mining firm Genesis Mining, the current state of botnet mining on Zcash as described by Kaspersky’s Gostev isn’t of much concern.

Only if a botnet manages to infect 250,000 computers, exceeding 10 percent of the whole network’s mining power, Krohn said, would miners see any effects.

But while bigger electricity bills aren’t a problem for professional miners, the average person might not appreciate the financial strain.

According to Gostev, users should check that their security software is configured to block legitimate software from being used for malicious purposes, a protection which might be disabled by default.


Inside The Weird World Of Quantum Computers

In a world where we rely increasingly on computers to share our information and store our most precious data, the idea of living without them might baffle most people.

But if we continue to follow the trend that has been in place since computers were introduced, by 2040 we will not have the capability to power all of the machines around the globe, according to a recent report by the Semiconductor Industry Association.




What is quantum computing?

Quantum computing takes advantage of the strange ability of subatomic particles to exist in more than one state at any time.

Due to the way the tiniest of particles behave, operations can be done much more quickly and use less energy than classical computers.

In classical computing, a bit is a single piece of information that can exist in two states – 1 or 0. Quantum computing uses quantum bits, or ‘qubits’ instead.

These are quantum systems with two states. However, unlike a usual bit, they can store much more information than just 1 or 0, because they can exist in any superposition of these values.

A qubit can be thought of like an imaginary sphere. Whereas a classical bit can be in two states – at either of the two poles of the sphere – a qubit can be any point on the sphere.

This means a computer using these bits can store a huge amount more information using less energy than a classical computer.
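If you’d like to see that sphere picture in code, here’s a minimal Python sketch of a single qubit: two complex amplitudes whose squared magnitudes must sum to one, with measurement collapsing the superposition to a plain 0 or 1.

```python
import numpy as np

rng = np.random.default_rng(42)

# A qubit state is two complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1 -- a point on the sphere described above.
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)   # equal superposition
probs = [abs(alpha) ** 2, abs(beta) ** 2]
assert np.isclose(sum(probs), 1.0)

# Measurement collapses the superposition: 0 or 1, weighted by the amplitudes.
outcomes = rng.choice([0, 1], p=probs, size=1000)
print("measured 0:", int(np.count_nonzero(outcomes == 0)), "times out of 1000")
print("measured 1:", int(np.count_nonzero(outcomes == 1)), "times out of 1000")
```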

Last year, a team of Google and NASA scientists found a D-Wave quantum computer was 100 million times faster than a conventional computer, albeit on a narrowly defined optimization task.

But moving quantum computing to an industrial scale is difficult.


Google’s Getting Serious About Building Its Own iPhone

Google unveiled its first custom-designed smartphone chip on Tuesday, the Pixel Visual Core, which is used in its new Pixel 2 and Pixel 2 XL smartphones.

The Pixel Visual Core enables smartphones to take better pictures using HDR+, a technology that can take clear pictures even if there’s a lot of brightness and darkness in the same shot.




One example might be taking a picture of a shadowy skyscraper against a bright blue sky.

With HDR+, you’ll be able to capture both the skyscraper and the blue sky, without bits of either washing out because of parts of the image being too bright or too dark.

While the chip exists in current phones, it isn’t yet activated, but will be in a future software release.

The Pixel 2 and Pixel 2 XL aren’t the first smartphones to offer HDR support, but Google is trying to use the new processor to make its photos the best on the market.

Google said that the Pixel Visual Core will be accessible by camera applications created by other developers, not just the built-in camera app, and that it plans to activate access to the core through software updates “in the coming months.”


Moore’s Law Is Ending – Here Are 7 Technologies That Could Bring It Back To Life

Gordon E. Moore was one of the co-founders of Intel and first proposed what came to be known as Moore’s Law, which predicted that the number of transistors on a chip (and, with it, computing power) would double every two years.

For nearly 50 years, the industry kept pace with this prediction, but in recent years there’s been a slowdown. The two main reasons are heat and the quantum tunneling effects that appear at atomic scales.
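That prediction is easy to sanity-check. Here’s a tiny Python sketch projecting transistor counts forward from Intel’s first processor (the 4004, with 2,300 transistors in 1971); the projection lands within the right order of magnitude of today’s multi-billion-transistor chips.

```python
# Project Moore's law: transistor counts doubling every two years,
# starting from Intel's first processor (the 4004: 2,300 transistors, 1971).
start_year, start_count = 1971, 2300

for year in (1981, 1991, 2001, 2011, 2021):
    doublings = (year - start_year) / 2
    projected = start_count * 2 ** doublings
    print(f"{year}: ~{projected:,.0f} transistors")
```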

Some of the technologies that have been theorized to break through this barrier include:

Graphene processors. Graphene carries electricity far better than the silicon in traditional processors, but it is currently very expensive to produce.

Three Dimensional Chips. Some manufacturers are experimenting with 3-D chips that combine processing and memory in one place to improve speed.

Molecular transistors. Transistors that use a single molecule to transfer electricity.

Photon transistors. These take electrons out of the process entirely and replace them with laser beams.

Quantum computers. These long-hyped machines could perform multiple calculations at once by using the superposition of quantum particles to process information.

Protein computers. These use folding proteins to make calculations.

And finally, DNA computers. DNA is an extraordinarily dense data storage medium, allowing scientists to store 700 terabytes of information in only one gram. But it can also be used to build logic gates and is being tested in a processing capacity.
