A large range of focal lengths to zoom through can dramatically increase your framing options, as well as simply make it possible to get recognisable people shots when you're trapped in the crowd, high in the bleacher seats, or at the back of the auditorium. See how these stabilised 12X zoomers stack up.
Updated: July 6, 2007
Let's get physical
Like high resolution, the term megazoom is a moving target: 6X, 8X, 10X, now 12X. Unlike the current crop of so-called high-resolution cameras, however, large zoom ranges confer clear, tangible benefits. Though many pro photographers eschew the long focal-length lens and zoom with their feet, we don't all have the luxury or comfort level for in-your-face photography. A large range of focal lengths dramatically increases our framing options, as well as simply makes it possible to get recognisable people shots when we're trapped in the crowd, high in the bleacher seats, or at the back of the auditorium. For example, a frame-filling shot of Junior on stage from 15 metres away requires a much longer focal length than you'd need if you end up in the front row, only 1.5 metres back.
There are some drawbacks, however, to these ultraconvenient lenses. For one, no single lens can deliver equal sharpness across a broad range of focal lengths; it's more likely to have a sweet spot somewhere in the middle of its range. That's why digital SLRs, with their interchangeable lenses, are so appealing, and why you have to pay close attention to the maximum aperture available on any megazoom lens. The Leica lenses on Panasonic's models support an aperture of f/2.8 throughout the zoom range; that's not very impressive at 36mm, but when you're shooting at 432mm, it gives you more available-light latitude than Canon's f/3.5 or Sony's f/3.7 -- about two-thirds of a stop or more.
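As a rough back-of-the-envelope check of that claim, the exposure difference in stops between two maximum apertures follows directly from the f-numbers. A minimal sketch in Python (the function name is ours, not anything from the manufacturers):

```python
import math

def stop_difference(f_wide: float, f_narrow: float) -> float:
    """Exposure difference in stops between two f-numbers.

    Light gathered scales with the square of the aperture diameter,
    i.e. with 1 / f-number squared, and one stop doubles the light,
    so the difference is 2 * log2(f_narrow / f_wide).
    """
    return 2 * math.log2(f_narrow / f_wide)

# Panasonic's constant f/2.8 versus the competition at full telephoto:
print(round(stop_difference(2.8, 3.5), 2))  # Canon's f/3.5: about 0.64 stops
print(round(stop_difference(2.8, 3.7), 2))  # Sony's f/3.7: about 0.8 stops
```

By this maths the Canon gap really is about two-thirds of a stop, and the Sony gap closer to four-fifths.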
Second, physics dictates that as you increase the focal length, something's gotta give: in a compact lens, the usable maximum aperture effectively shrinks, less light reaches the sensor, and every bit of camera shake is magnified. To compensate, the shutter speed has to increase, camera motion must decrease, or sharpness gets sacrificed. In practice, you can't shrink the aperture too much without falling over the other side of the curve, where the softening effects of diffraction start to overwhelm the benefits of the smaller opening -- or you simply run out of light. That's where artifacts such as softness and fringing creep in. Increasing the light sensitivity by upping the ISO setting buys back exposure the same way a wider aperture or slower shutter would, but increases visual noise. A tripod decreases the motion, but that's not always a practical solution, and nobody wants a blurry photo. So traditionally, photographers have compensated by increasing shutter speed: as a rule of thumb, you should never shoot handheld at a shutter speed slower than the reciprocal of the 35mm-equivalent focal length (in other words, 1/f). But that's not always feasible either, especially when you hit extreme telephoto territory, such as 400mm.
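The reciprocal rule is easy to put into numbers. A quick sketch (the helper is hypothetical, but the arithmetic is the standard 1/f rule of thumb):

```python
from fractions import Fraction

def min_handheld_shutter(focal_length_mm: int) -> Fraction:
    """Slowest 'safe' handheld shutter speed, in seconds, per the
    1/f rule of thumb (focal length in 35mm-equivalent millimetres)."""
    return Fraction(1, focal_length_mm)

# At these cameras' 432mm-equivalent full telephoto:
print(min_handheld_shutter(432))   # 1/432 second
# Versus the 36mm wide end:
print(min_handheld_shutter(36))    # 1/36 second
```

In dim light, 1/432 second at full telephoto is exactly the kind of shutter speed you often can't reach -- which is the problem stabilisation sets out to solve.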
Those are the reasons why everyone emphasises the importance of some form of image stabilisation for telephoto cameras. At its best, a good stabilisation system can buy you a couple of stops: a 500mm lens that the reciprocal rule says requires 1/500 second, for example, could be handheld at a shutter speed as slow as 1/125 second. Not all stabilisation systems are created equal, however, and it's no longer a matter of assuming that optical image stabilisation (OIS) systems are automatically the best. In fact, the term optical stabilisation itself has become a bit of a misnomer.
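That trade works out as follows -- a sketch under the article's assumptions, namely the 1/f baseline and a stabiliser that genuinely delivers the claimed number of stops:

```python
def stabilised_min_shutter(focal_length_mm: float, stops: float) -> float:
    """Slowest usable handheld shutter speed, in seconds, once an
    image stabiliser buys back `stops` stops: each stop doubles the
    exposure time allowed over the 1/f rule-of-thumb baseline."""
    return (1.0 / focal_length_mm) * (2.0 ** stops)

# 500mm with a two-stop stabiliser: 1/500 s stretches to 1/125 s
print(stabilised_min_shutter(500, 2))
```

Each additional stop of stabilisation doubles the usable exposure time, which is why even one honest stop makes a visible difference at 400mm and beyond.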
In a true OIS, such as Canon's Image Stabilizer, Panasonic's Mega OIS, and Kodak's unnamed version, an inertial sensor detects motion and the camera shifts a special lens element to steer light rays gone astray back toward the middle of the optical system. In contrast, the Anti-Shake technology from the now-defunct Konica Minolta compensates by moving the image sensor -- in effect, changing the focal point on the image plane rather than in the optical path. (Pentax uses the same technique, though not in any megazoom camera.) Sony's Super SteadyShot operates along similar lines, but instead of physically moving the image sensor, Sony's system simply focuses on a different area of the sensor. In practice, as long as the majority of the light travels through the centre of the lens element closest to the focal plane -- and therefore hits the sensor perpendicularly and at its most coherent -- the method of compensation shouldn't matter.
In contrast, electronic image stabilisation is always an inferior solution, because it occurs after the image has been captured. Any pixel- or algorithm-based correction degrades the image quality. That said, some methods are better than others, and the degradation isn't necessarily noticeable all the time. And, of course, it allows for lower-cost solutions.