Tag: cameras

This Isn’t The End Of Printed Photos, It’s The Golden Age

As a society, we now produce more photographs than ever before, and the total number is becoming difficult to fathom. This year, it is estimated that billions of humans armed with smartphones will take some 1.2 trillion pictures.

Many of them will be shared on social media, but many more will simply be forgotten. A few good selfies will flash before your eyes as you swipe left or right on them, late some Friday night.

But hardly any will make the transition into the physical world, bits becoming blots of ink that coalesce into an image on a piece of paper, canvas, wood, or metal — a print.

The reasons for this are rational, and there’s no point fighting progress, but nor should we ignore the value of a print. We may no longer print every photo by default, but this can actually be a good thing for printing.

It is now about quality rather than quantity, and the pictures we choose to print deserve the best treatment.

Honestly, there has never been a better time to print than now, thanks to technological advances in both digital cameras and inkjet printers.

If you haven’t yet tried your hand at photo printing, you owe it to yourself to do so, even if you’re just a casual photographer.




Print isn’t dead — it’s better than ever

It’s a common refrain in the digital age, and not just in reference to photography. Print is dead, or at least dying, right? In truth, a certain type of print has certainly declined, but this isn’t a tragedy.

Prints used to be the only way we had to view our photos. We’d drop our film off at the drugstore and pick it up 24 hours later not because it was a better system, but because it was all we had.

We tend to romanticize the print, but when printing was the norm, many photos were still lost and forgotten (and some were found again).

Most were destined for photo albums or shoeboxes that would sit around and collect dust until moving day. If fewer were forgotten, it was because fewer were made.

Far fewer, in fact — in 2000, Kodak announced 80 billion pictures had been taken that year.

Sure, that sounds like a lot (it was a new milestone at the time), but for those who think of such large numbers as vague clouds of zeros, consider that 80 billion is still 1.12 trillion shy of 2017’s 1.2 trillion photos.

For the mathematically disinclined, let’s put it another way: Subtracting the total number of photos made in the year 2000 from those made in 2017 would have no effect on the number of shirtless mirror selfies posted by lonely men on Tinder.

With so many photos being taken, it’s no wonder so relatively few are being printed. Every print costs money, after all, so of course people aren’t going to print 1.2 trillion photos.

What’s more, the point of printing (often the point of taking a photo in the first place) was to share your memory with someone else.

Now that we don’t need prints to do that, it makes sense that people are choosing not to spend money on them, especially when electronically sharing images also happens to be much more convenient.

But people still love prints. Even the “low end” of printing is alive and well as instant photography has seen a huge resurgence in recent years.

Polaroid Originals has built an entire brand around it, and Fujifilm Instax cameras and film packs made up six of the top ten best-selling photography products on Amazon last holiday season.

Please like, share and tweet this article.

Pass it on: Popular Science

Nikon Confirms New Full-Frame FX Mirrorless Cameras And Lens Mount

It’s official: Nikon will soon launch a full-frame mirrorless camera system with a brand-new lens mount.

In a press release, it announced that it’s developing a “next-generation full-frame (Nikon FX-format) mirrorless camera and Nikkor lenses, featuring a new mount,” adding that “professional creators around the world have contributed to the development.”

As expected, it’s also working on an adapter that will let you use existing full-frame Nikon F-Mount DSLR lenses with the cameras.

Nikon hinted that the new mount would allow it to make the lenses and cameras slimmer and smaller.

“The new mirrorless camera and Nikkor lenses that are in development will enable a new dimension in optical performance with the adoption of a new mount,” says the press release.




Nikon is just confirming what we already strongly suspected, considering that yesterday, its European division unveiled a teaser video with shadowy glimpses of the camera.

It also set up a website called “In Pursuit of Light,” which had the apparent launch date of the camera (August 23rd) hidden in the HTML code.

However, Nikon has yet to confirm specs, date, and price, or even show an official image of the camera. More details will reportedly come on a dedicated website at a later date.

For the rest of the story, we’re relying on sites like Nikon Rumors, which have been pretty accurate up to this point.

Nikon will supposedly release two cameras, a $4,000 48-megapixel model, and a $2,500, 25-megapixel “budget” version.

Those compare roughly to Sony’s 42.4-megapixel A7R III and the 24-megapixel A7 III, though both Nikon models would be more costly and have higher resolution.

Nikon and Canon are under extreme pressure to catch Sony in the mirrorless category. Both companies are way, way late to the game, so Nikon will have to at least match Sony’s current models to have any kind of a chance.

The $2,000 A7 III, for one, is a stellar performer, and there are 63 native FE lenses for it, while Nikon is starting from scratch with its own system.

The adapter will help, but could degrade optical and mechanical performance compared to native lenses.

Please like, share and tweet this article.

Pass it on: Popular Science

Huawei Says Three Cameras Are Better Than One With P20 Pro Smartphone

Huawei’s latest flagship smartphone is the P20 Pro, which has not one, not two, but three cameras on the back.

The new P20, and the larger, more feature-packed P20 Pro, launched at an event in Paris that signalled the Chinese company, the world’s third-largest smartphone manufacturer, is looking to match rivals Apple and Samsung and elevate its premium efforts.

The P20 has a 5.8in FHD+ LCD while the larger P20 Pro has a 6.1in FHD+ OLED screen, both with a notch at the top similar to Apple’s iPhone X containing a 24-megapixel selfie camera.

They both have a fingerprint scanner on the front but no headphone socket on the bottom.

The P20 and P20 Pro are also available in pink gold or a blue twilight gradient colour finish that resembles pearlescent paint found on some cars – a first, Huawei says, for a glass-backed smartphone.




The P20 has an improved version of Huawei’s Leica dual camera system, which pairs a traditional 12-megapixel colour camera with a 20-megapixel monochrome one, as used on the recent Mate 10 Pro.

But the P20 Pro also has a third 8-megapixel telephoto camera below the first two, producing up to a 5x hybrid zoom, which Huawei says enables the phone to “see brighter, further, faster and with richer colour”.

“When I first heard that Huawei’s new flagship device was going to have three rear-facing cameras, I was sceptical,” said Ben Wood, chief of research at CCS Insight.

“But it feels like the company has added meaningful features rather than gimmicks, including the five-times telephoto zoom, excellent low-light and long-exposure performance, and the crisp black-and-white pictures the dedicated monochrome lens offers.”

Huawei has also improved its built-in AI system for the camera, which recognises objects and scenes, pre-selecting the best of 19 modes for the subject.

Huawei’s AI will also help people straighten photos and zoom in or out to assist with composing group shots.

The company is also pushing its new AI-powered stabilisation for both photos and videos, which Huawei says solves the problem of wobbly hands in long-exposure night shots.

Please like, share and tweet this article.

Pass it on: Popular Science

Google Clips: A Smart Camera That Doesn’t Make The Grade

Picture this: you’re hanging out with your kids or pets and they spontaneously do something interesting or cute that you want to capture and preserve.

But by the time you’ve gotten your phone out and its camera opened, the moment has passed and you’ve missed your opportunity to capture it.

That’s the main problem that Google is trying to solve with its new Clips camera, a $249 device available starting today that uses artificial intelligence to automatically capture important moments in your life.

Google says it’s for all of the in-between moments you might miss when your phone or camera isn’t in your hand.




It is meant to capture your toddler’s silly dance or your cat getting lost in an Amazon box without requiring you to take the picture.

The other issue Google is trying to solve with Clips is letting you spend more time interacting with your kids directly, without having a phone or camera separating you, while still getting some photos.

That’s an appealing pitch to both parents and pet owners alike, and if the Clips camera system is able to accomplish its goal, it could be a must-have gadget for them.

But if it fails, then it’s just another gadget that promises to make life easier, but requires more work and maintenance than it’s worth.

The problem for Google Clips is it just doesn’t work that well.

Before we get into how well Clips actually works, I need to discuss what it is and what exactly it’s doing because it really is unlike any camera you’ve used before.

At its core, the Clips camera is a hands-free automatic point-and-shoot camera that’s sort of like a GoPro, but considerably smaller and flatter.

It has a cute, unassuming appearance that is instantly recognizable as a camera, or at least an icon of a camera app on your phone.

Google, aware of how a “camera that automatically takes pictures when it sees you” is likely to be perceived, is clearly trying to make the Clips appear friendly, with its white-and-teal color scheme and obvious camera-like styling.

But among those I showed the camera to while explaining what it’s supposed to do, “it’s creepy” was a common reaction.

One thing that I’ve discovered is that people know right away it’s a camera and react to it just like any other camera.

That might mean avoiding its view when they see it, or, like in the case of my three-year-old, walking up to it and smiling or picking it up.

That has made it tough to capture candids, since, for the Clips to really work, it needs to be close to its subject.

Maybe over time, your family would learn to ignore it and those candid shots could happen, but in my couple of weeks of testing, my family hasn’t acclimated to its presence.

The Clips’ camera sensor can capture 12-megapixel images at 15 frames per second, which it then saves to its 16GB of internal storage that’s good for about 1,400 seven-second clips.
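Those storage figures imply a rough per-clip budget. Here’s a back-of-the-envelope sketch (assuming decimal gigabytes; Google hasn’t published the actual encoding bitrate):

```python
# Rough math on Google Clips' stated capacity: 16 GB of storage
# holding about 1,400 seven-second clips.
total_bytes = 16 * 10**9   # 16 GB, assuming decimal (SI) gigabytes
clip_count = 1400
clip_seconds = 7

bytes_per_clip = total_bytes / clip_count         # ~11.4 MB per clip
bytes_per_second = bytes_per_clip / clip_seconds  # ~1.6 MB/s data rate

print(f"{bytes_per_clip / 1e6:.1f} MB per clip, "
      f"{bytes_per_second / 1e6:.1f} MB/s while recording")
```

That works out to roughly 11 MB per clip, a plausible budget for short bursts of 12-megapixel frames.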

The battery lasts roughly three hours between charges.

Included with the camera is a silicone case that makes it easy to prop up almost anywhere or, yes, clip it to things. It’s not designed to be a body camera or to be worn.

Instead, it’s meant to be placed in positions where it can capture you in the frame as well.

There are other accessories you can buy, like a case that lets you mount the Clips camera to a tripod for more positioning options, but otherwise, using the Clips camera is as simple as turning it on and putting it where you want it.

Once the camera has captured a bunch of clips, you use the app to browse through them on your phone, edit them down to shorter versions, grab still images, or just save the whole thing to your phone’s storage for sharing and editing later.

The Clips app is supposed to learn based on which clips you save and deem “important” and then prioritize capturing similar clips in the future.

You can also hit a toggle to view “suggested” clips for saving, which is basically what the app thinks you’ll like out of the clips it has captured.

Google’s definitely onto something here. The idea is an admirable first step toward a new kind of camera that doesn’t get between me and my kids. But first steps are tricky — ask any toddler!

Usually, after you take your first step, you fall down. To stand back up, Google Clips needs to justify its price, the hassle of setting it up, and the fiddling between it and my phone.

It needs to reassure me that by trusting it and putting my phone away, I won’t miss anything important, and I won’t be burdened by having to deal with a lot of banal captures.

Otherwise, it’s just another redundant gadget that I have to invest too much time and effort into managing to get too little in return.

That’s a lot to ask of a tiny little camera, and this first version doesn’t quite get there. To live up to it all, Clips needs to be both a better camera and a smarter one.

Please like, share and tweet this article.

Pass it on: Popular Science

Megapixels Don’t Matter Anymore. Here’s Why More Isn’t Always Better.

For years, smartphone makers have been caught up in a megapixel spec race to prove that their camera is better than the next guy’s.

But we’ve finally come to a point where even the lower-end camera phones are packing more megapixels than they need, so it’s getting harder to differentiate camera hardware.

Without that megapixel crutch to fall back on, how are we supposed to know which smartphone has the best camera?

Well thankfully, there are several other important specs to look for in a camera, and it’s just a matter of learning which ones matter the most to you.




Why Megapixels Don’t Matter Anymore

The term “megapixel” actually means “one million pixels,” so a 12-megapixel camera captures images that are made up of 12,000,000 tiny little dots.

A larger number of dots (pixels) in an image means that the image has more definition and clarity, which is also referred to as having a higher resolution.

This might lead you to believe that a camera with more megapixels will take better pictures than a camera with fewer megapixels, but that’s not always the case.

The trouble is, we’ve reached a point where all smartphone cameras have more than enough megapixels.

For instance, a 1080p HD TV has a resolution of 2.1 megapixels, and even the highest-end 4K displays top out at 8.3 megapixels.

Considering that nearly every smartphone camera has a double-digit megapixel rating these days, your photos will be in a higher resolution than most screens can even display.

Simply put, you won’t be able to see any difference in resolution between pictures taken by two different smartphone cameras, because most screens you’ll be viewing them on aren’t capable of displaying that many megapixels.

Really, anything greater than 8.3 megapixels is only helpful for cropping. In other words, if your phone takes 12-megapixel photos, you can crop away roughly a third of the frame and the resolution will still be just as high as a 4K TV’s.
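The arithmetic behind those display figures is easy to check yourself, using the standard 1920×1080 (1080p) and 3840×2160 (4K UHD) resolutions:

```python
def megapixels(width, height):
    """Convert a pixel resolution to megapixels (millions of pixels)."""
    return width * height / 1_000_000

hd = megapixels(1920, 1080)    # 1080p HD TV -> ~2.1 MP
uhd = megapixels(3840, 2160)   # 4K UHD display -> ~8.3 MP

# How much of a 12 MP photo can you crop away and still fill a 4K screen?
crop_headroom = 1 - uhd / 12   # fraction of pixels you can discard

print(f"1080p: {hd:.1f} MP, 4K: {uhd:.1f} MP, "
      f"crop headroom on a 12 MP photo: {crop_headroom:.0%}")
```

The result is about 31% headroom, which is why a 12-megapixel photo can be cropped by roughly a third and still fill a 4K display pixel-for-pixel.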

Pixel Size Is the Real Difference Maker

The hot new number to judge your phone’s camera by is the pixel size. You’ll see this spec listed as a micron value, which is a number followed by the symbol “µm.”

A phone with a 1.4µm pixel size will almost always capture better pictures than one with a 1.0µm pixel size, thanks to physics.

If you zoomed in far enough on one of your photos, you could see the individual pixels, right? Well, each of those tiny little dots was captured by microscopic light sensors inside your smartphone’s camera.

These light sensors are referred to as “pixels” because, well, they each capture a pixel’s worth of light. So if you have a 12-megapixel camera, the actual camera sensor has twelve million of these light-sensing pixels.

Each of these pixels measures light particles called photons to determine the color and brightness of the corresponding pixel in your finished photo.

When a bright blue photon hits one of your camera’s light sensors, it tells your phone to make a dot with bright blue coloring.

Put twelve million of these dots together in their various brightnesses and colors, and you’ll end up with a picture.
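The “thanks to physics” part is just geometry: a pixel’s light-gathering ability scales with its area, which grows with the square of its pitch. A quick sketch comparing the two pixel sizes mentioned above:

```python
def relative_light(pixel_a_um, pixel_b_um):
    """Ratio of light gathered by two square pixels of the given
    pitch in microns; area (and so light) scales with pitch squared."""
    return (pixel_a_um / pixel_b_um) ** 2

gain = relative_light(1.4, 1.0)
print(f"A 1.4 um pixel gathers about {gain:.2f}x the light of a 1.0 um pixel")
```

So the seemingly small jump from 1.0µm to 1.4µm nearly doubles the light each pixel collects per exposure.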

A Little Aperture Goes a Long Way

The next key spec to look for is the camera’s aperture, which is represented as f divided by a number (f/2.0, for example).

Because of the “f divided by” setup, this is one of those rare specs where a smaller number is always better than a larger one.

To help you understand aperture, let’s go back to pixel size for a second.

If larger pixels mean your camera can collect more light particles to create more accurate photos, then imagine each pixel as a bucket, and photons as falling rain.

The bigger the opening of the bucket (pixel), the more rain (photons) you can collect, right?

Well, aperture is like a funnel for that bucket. The bottom of this imaginary funnel has the same diameter as the pixel bucket, but the top is wider—which means you can collect even more photons.

In this analogy, a wider aperture gives the photon bucket a wider opening, so it focuses more light onto your camera’s light-sensing pixels.

Image Stabilization: EIS vs. OIS

With most spec sheets, you’ll see a camera’s image stabilization technology listed as either EIS or OIS. These stand for Electronic Image Stabilization and Optical Image Stabilization, respectively.

OIS is easier to explain, so let’s start with that one. Simply put, this technology physically moves the camera sensor to compensate for any shaking while you’re holding your phone.

If you’re walking while you’re recording a video, for instance, each of your steps would normally shake the camera—but OIS ensures that the camera sensor remains relatively steady even while the rest of your phone shakes around it.

EIS, by contrast, does its stabilizing in software, cropping into each frame and then shifting and stretching the cropped area to cancel out movement. In general, though, it’s always better to have a camera with OIS.

For one, the cropping and stretching can reduce quality and create a “Jello effect” in videos, but in addition to that, EIS has little to no effect on reducing blur in still photos.

Now that you’ve got a better understanding of camera specs, have you decided which smartphone you’re going to buy next?

If you’re still undecided, you can use our smartphone-buyer’s flowchart at the following link, and if you have any further questions, just fire away in the comment section below.

Please like, share and tweet this article.

Pass it on: Popular Science

Google’s First Mobile Chip Is An Image Processor Hidden In The Pixel 2

One thing that Google left unannounced during its Pixel 2 launch event on October 4th is being revealed today: it’s called the Pixel Visual Core, and it is Google’s first custom system on a chip (SoC) for consumer products.

You can think of it as a very scaled-down and simplified, purpose-built version of Qualcomm’s Snapdragon, Samsung’s Exynos, or Apple’s A series chips. The purpose in this case?

Accelerating the HDR+ camera magic that makes Pixel photos so uniquely superior to everything else on the mobile market.

Google plans to use the Pixel Visual Core to make image processing on its smartphones much smoother and faster, but not only that: the Mountain View company also plans to use it to open up HDR+ to third-party camera apps.




The coolest aspect of the Pixel Visual Core might be that it’s already in Google’s devices. The Pixel 2 and Pixel 2 XL both have it built in, but lying dormant until activation at some point “over the coming months.”

It’s highly likely that Google didn’t have time to finish optimizing the implementation of its brand-new hardware, so instead of yanking it out of the new Pixels, it decided to ship the phones as they are and then flip the Visual Core activation switch when the software becomes ready.

In that way, it’s a rather delightful bonus for new Pixel buyers.

The Pixel 2 devices are already much faster at processing HDR shots than the original Pixel, and when the Pixel Visual Core is live, they’ll be faster and more efficient.

Looking at the layout of Google’s chip, which is dubbed an Image Processing Unit (IPU) for obvious reasons, we see something sort of resembling a regular eight-core SoC.

Technically, there’s a ninth core, in the shape of the power-efficient ARM Cortex-A53 CPU in the top left corner.

But the important thing is that each of those eight processors that Google designed has been tailored to handle HDR+ duties, resulting in HDR+ performance that is “5x faster and [uses] less than 1/10th the energy” of the current implementation, according to Google.

This is the sort of advantage a company can gain when it shifts to purpose-specific hardware rather than general-purpose processing.

Google says that it will enable Pixel Visual Core as a developer option in its preview of Android 8.1 Oreo, before updating the Android Camera API to allow access to HDR+ for third-party camera devs.

Obviously, all of this tech is limited strictly to the Pixel 2 generation, ruling out current Pixel owners and other Android users.

As much as Google likes to talk about enriching the entire Android ecosystem, the company is evidently cognizant of how much of a unique selling point its Pixel camera system is, and it’s working hard to develop and expand the lead that it has.

As a final note, Google’s announcement today says that HDR+ is only the first application to run on the programmable Pixel Visual Core, and with time we should expect to see more imaging and machine learning enhancements being added to the Pixel 2.

Please like, share and tweet this article.

Pass it on: Popular Science

Google’s Getting Serious About Building Its Own iPhone

Google unveiled its first custom-designed smartphone chip on Tuesday, the Pixel Visual Core, which is used in its new Pixel 2 and Pixel 2 XL smartphones.

The Pixel Visual Core enables smartphones to take better pictures using HDR+, a technology that can take clear pictures even if there’s a lot of brightness and darkness in the same shot.




One example might be taking a picture of a shadowy skyscraper against a bright blue sky.

With HDR+, you’ll be able to capture both the skyscraper and the blue sky, without bits of either washing out because of parts of the image being too bright or too dark.
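Google doesn’t detail the Visual Core’s pipeline here, but the publicly described idea behind HDR+ is merging a burst of frames rather than relying on a single exposure. The following is a toy NumPy sketch of that principle only, not Google’s actual algorithm, which also aligns frames and merges per-tile:

```python
import numpy as np

def merge_burst(frames):
    """Naive HDR+-style burst merge: average several aligned, equally
    exposed frames to cut noise (noise falls roughly with sqrt(N)),
    then lift the shadows with a simple gamma-style tone curve."""
    stack = np.stack([f.astype(np.float32) for f in frames])
    merged = stack.mean(axis=0)               # noise reduction via averaging
    merged = np.sqrt(merged / 255.0) * 255.0  # brighten dark regions
    return np.clip(merged, 0.0, 255.0).astype(np.uint8)

# Example: three identical dark frames come out brighter after merging.
frames = [np.full((4, 4), 64, dtype=np.uint8) for _ in range(3)]
out = merge_burst(frames)
```

Averaging is what lets a phone capture a dark skyscraper without blowing out the sky: each individual frame can be short enough to preserve highlights, while the merge recovers clean detail in the shadows.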

While the chip exists in current phones, it isn’t yet activated, but will be in a future software release.

The Pixel 2 and Pixel 2 XL aren’t the first smartphones to offer HDR support, but Google is trying to make its photos the best using the new processor.

Google said that the Pixel Visual Core will be accessible by camera applications created by other developers, not just the built-in camera app, and that it plans to activate access to the core through software updates “in the coming months.”

Please like, share and tweet this article.

Pass it on: Popular Science