The ABCs of camera phone technology

How good is the camera that comes with your new smartphone? We explain the technology and what you should look for.

Camera phones have come a long way since the first sub-megapixel models of the late 1990s and early 2000s. Aside from being more broadly available -- there's barely a phone sold today that doesn't have a camera -- they've also become much less of a novelty and much more useful as cameras.

Thanks to general advances in camera technology, which benefited both standalone cameras and phones, a good camera phone today takes pictures that are on a par with a previous generation of point-and-shoots. Camera phones now can have double-digit megapixel counts, genuine optical zoom and the ability to shoot true HD video. It's gotten to the point where, according to Flickr's metadata harvesting, the #1 camera overall among Flickr's users is the one on the iPhone 4.

In this article, I look at the key technologies that have allowed camera phones to become good enough to be the default way many users take pictures. But I'll also examine what it is that still sets apart a full-blown camera -- whether it's a pocket-size point-and-shoot or a professional-level DSLR -- from its smaller phone-based brethren.

Image sensors: CMOS vs. CCD

The core technology of any digital camera is the same, regardless of how it's packaged: a lens, an image sensor and image-processing hardware. A camera phone has to cram all of this into a space that's usually about the size of a dime.
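The three-stage pipeline described above can be sketched conceptually in code. This is purely illustrative (none of these function names correspond to any real camera API); it only shows how each stage feeds the next:

```python
# Conceptual sketch of a digital camera's pipeline -- not a real API.
# Every camera, phone-based or standalone, runs these three stages.

def lens(scene):
    # Stage 1: focus light from the scene onto the sensor plane.
    return {"focused": scene}

def sensor(optical_image):
    # Stage 2: convert the focused light into raw pixel values.
    return {"raw_pixels": optical_image["focused"]}

def image_processor(raw):
    # Stage 3: demosaic, denoise and compress into a finished photo.
    return {"jpeg_of": raw["raw_pixels"]}

photo = image_processor(sensor(lens("sunset")))
print(photo)  # {'jpeg_of': 'sunset'}
```

A camera phone must fit hardware for all three stages into that dime-sized space, which is what drives the sensor and lens trade-offs discussed below.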

Phone manufacturers can opt for one of two major image sensor technologies: charge-coupled device (CCD) sensors and complementary metal-oxide semiconductor (CMOS) sensors.

CCD image sensors, the more mature and established of the two technologies, pipe a signal from each pixel in the sensor to a single (analog) output, which is then processed in separate circuitry. This way, more of the silicon can be used for image capture, as opposed to image processing. The overall image quality is higher, but at the cost of greater power drain. As a result, there have been very few phones that have used CCD; one exception was the Sharp Aquos Shot 933SH, which was never released in the U.S.

With CMOS -- the same fabrication technology long used for mainstream logic chips, more recently adapted to imaging -- each pixel sensor performs its own light-to-signal conversion and then passes the resulting digital information to other signal-processing circuitry on the same die. Because CMOS packs more functionality into a single chip, it's easier to integrate into other systems -- such as phones -- and requires less energy.

CMOS's big drawback, however, is the "rolling shutter" problem. Because the image sensor acquires its image by scanning line-by-line -- instead of all at once -- anything in extremely fast motion (for instance, a helicopter propeller) will be distorted in bizarre ways. These limitations show up most profoundly when shooting video, but they can mar still images as well. Software can compensate for these problems to some degree, but can't eliminate them entirely.
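The line-by-line readout is easy to demonstrate with a toy simulation. In this hypothetical sketch (the grid size and speed are made up for illustration), a vertical bar moves sideways while each sensor row is read out slightly later than the one above it, so the captured bar comes out as a diagonal:

```python
# Toy rolling-shutter simulation: each row of the "sensor" is read out
# one time-step later than the row above it, so a vertical bar moving
# horizontally is recorded as a diagonal streak.

ROWS, COLS = 8, 16
SPEED = 1  # pixels the bar moves per row-readout interval

def capture_rolling_shutter(start_col):
    frame = []
    for row in range(ROWS):
        # By the time this row is read, the bar has drifted right.
        bar_col = start_col + SPEED * row
        frame.append("".join("#" if c == bar_col else "." for c in range(COLS)))
    return frame

for line in capture_rolling_shutter(start_col=3):
    print(line)
```

Running this prints a diagonal line of `#` characters: the bar was vertical in the scene, but appears skewed in the captured frame. A global shutter (as in a CCD) would read every row at the same instant and record the bar upright.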

That said, CMOS is being continuously improved in ways that make it more useful in phones. Consider "back-side illumination," which increases CMOS light sensitivity by placing the sensor-to-sensor wiring in the CMOS behind the sensors rather than in front of them. The iPhone 4 camera uses this technology, and sensors made by Toshiba and Sony now use it as well.

Lenses: Why megapixels alone aren't enough

Anyone who's followed the evolution of conventional digital cameras will have noticed that the conversation is dominated by talk of megapixels. Granted, the more pixels in the sensor, the bigger the native resolution of the image. But the quality of the image fed to the sensor depends on another, far more fundamental camera technology: the lens.
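The megapixel arithmetic itself is straightforward, which is part of why the number is easy to market. A quick illustration (the 4000x3000 resolution here is just a typical example of a 12-megapixel sensor, not a figure from any specific phone):

```python
import math

def megapixels(width, height):
    # A "megapixel" is simply one million pixels: width x height / 1,000,000.
    return width * height / 1_000_000

print(megapixels(4000, 3000))  # 12.0 -- a typical 12 MP sensor

# Doubling the megapixel count only grows each linear dimension by ~sqrt(2),
# so the perceived gain in detail is much smaller than the number suggests.
print(round(math.sqrt(2), 2))  # 1.41
```

That diminishing return is one reason megapixels alone say little about image quality: the lens determines how sharp an image actually reaches all those pixels.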

Camera phone lenses are constrained by the size of the phone, so phone makers have compensated in two ways: creating better sensors (as described above) and creating more advanced lens technologies.

Most of us are familiar with the conventional ground-glass lens, where glass is first cast in a rough blank and then machine-tooled into a more precise shape. These lenses still yield the best quality, despite the cost and the manufacturing effort. A second method, injecting polymer into a metal mold, allows for rapid production but at lower quality. The fixed-focus lenses on low-grade camera phones typically use polymer lenses.

Wafer scale lens
A sample wafer scale lens. The entire package includes a lens and a sensor in a space of only a few millimeters. Image courtesy Alps Electric Co. Ltd.

A third approach, the wafer-scale lens, uses some of the same silicon-wafer manufacturing techniques used for microprocessors. The results are still not quite of the same grade as full-blown glass, but they allow the sensor to be packaged into a much smaller space. They can be found in a few phones such as India's OliveSmart V-S300 and the older Nokia 2330.

Camera phone software

Most camera phones ship with a default picture-snapping app, which traditionally does little more than take very basic photos. As a result, third-party camera apps that let users improve, edit and/or share their images have been developed to fill the void. Most recently famous, thanks to its acquisition by Facebook, is Instagram, a photo-snapping app with some funky filtering and useful sharing options (for both iOS and Android users).

Any photo app is going to be limited by two things: the phone's own hardware, and the way the phone's operating system makes that hardware available to applications. It may be possible to get around some of those limitations by rooting or jailbreaking the phone, but that's not always practical.

Consequently, photo apps can improve picture quality by only so much, so most third-party camera apps instead supply photo-management, image-adjustment and picture-taking features -- in other words, ways to make the process more convenient.
