Some people are having trouble figuring out why cameras are coming out with specific sensor sizes. In particular, we have clusters at 1" (Sony RX100, Nikon 1), m4/3 (Canon G1X, Olympus and Panasonic mirrorless), APS (Nikon DX DSLRs, Samsung NX, Sony NEX and DSLRs) and 35mm film size (Nikon FX DSLRs, Canon 5DIII/1Dx).
As I pointed out in my article on picking a size, these sensor sizes are all about a stop apart (and the crop ratios look like the inverse of an f/stop progression because of that: 2.7x, 2x, 1.5x, 1x isn't all that different from f/2.8, f/2, f/1.4, and f/1, is it?). There are some minor deviations from the pattern, some due to aspect ratio differences, but it's a good normalization of the sensor size differences.
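To make the stop-apart relationship concrete, here's a quick sanity check in Python. The crop factors and the 36x24mm full-frame reference are the approximate round numbers used above; the point is simply that halving the linear dimension by about √2 halves the light-gathering area, i.e. one stop:

```python
import math

# Approximate crop factors for the common sensor clusters.
crop_factors = {"1-inch": 2.7, "m4/3": 2.0, "APS (DX)": 1.5, "full frame": 1.0}

full_frame_area = 36.0 * 24.0  # mm^2

for name, crop in crop_factors.items():
    area = full_frame_area / crop**2           # area scales with 1/crop^2
    stops = math.log2(full_frame_area / area)  # each halving of area = 1 stop
    print(f"{name:>10}: ~{area:4.0f} mm^2, {stops:.1f} stops below full frame")
```

Run it and you get roughly 2.9, 2.0, 1.2, and 0 stops below full frame: not perfectly even (aspect ratios intrude), but close enough to an approximately-one-stop ladder.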
Some have suggested that players are cooperating behind the scenes, but that's not exactly the case. They're simply all coming to the same conclusions. Much like we have mostly 4-cylinder, 6-cylinder, 8-cylinder, and 12-cylinder engines in autos, we tend to get certain sizes in sensors: the in-between sizes don't give enough cost/performance differential to make them competitive. This is true at both the sensor and camera level. A one stop difference can make a tangible difference in cost (sensor price is non-linear with area) and other factors (a one stop difference can produce a smaller kit lens in the mid-range, so size/weight changes significantly). Less than a stop difference and you don't get nearly the same benefit differential.
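The non-linearity of sensor price with area is easy to see with a toy die-cost model. The defect density, wafer cost, and wafer size below are made-up illustrative numbers, and the exponential (Poisson) yield curve is a textbook simplification, not any fab's actual pricing:

```python
import math

def cost_per_good_sensor(area_mm2, defect_density=0.001, wafer_cost=5000.0,
                         wafer_area=math.pi * 150**2):
    """Toy die-cost model (illustrative numbers only): dies per wafer
    shrink linearly with area, but yield falls exponentially with area
    (Poisson defect model), so the cost per *good* sensor grows faster
    than the area itself."""
    dies_per_wafer = wafer_area / area_mm2
    yield_fraction = math.exp(-defect_density * area_mm2)
    return wafer_cost / (dies_per_wafer * yield_fraction)

aps = cost_per_good_sensor(370)   # roughly APS-C area in mm^2
fx = cost_per_good_sensor(864)    # roughly full-frame area
print(f"FX costs {fx / aps:.1f}x an APS sensor in this model")
```

With these (invented) parameters the full-frame sensor costs nearly 4x the APS one even though it has only about 2.3x the area; crank the defect density up and the gap widens further. That's the shape of the economics, whatever the real numbers are.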
Thus, if you create a "tweener" you don't quite get the same differentials you get from the full stop sizes. It's harder to market, too, because you can't make a clear claim from a competitive size ("we're a wee bit better than X" ;~). Canon did make one in-between sensor size, the APS-H (1.3x), which lived between APS and 35mm frame, but that was produced for a specific reason: at the time that was the largest sensor you could create on their steppers with a single pass for each layer (multiple passes require much more alignment work and have lower yields and thus substantially higher costs).
The other deep secret is that these aren't new formats. APS (DX) is almost the same as…APS. The old Advanced Photo System (APS) specification for film that was supposed to take over from 35mm in the consumer market was supported by the usual suspects (Canon, Fujifilm, Kodak, Minolta, Nikon, and others). Funny thing is, it defined three image formats (APS-C at 16.7x25.1mm, APS-H at 16.7x30.2mm, and APS-P at 9.5x30.2mm). The 1" format is close to the old 16mm film size (7.49x10.26mm, and later 7.41x12.52 and 6.15x11.66mm), and many of those lenses had bigger than necessary image circles to help reduce corner problems. As you might guess, lens designs for all these close-to sizes exist. So guess what happens when you're testing sensors early in the development process? You don't have lenses yet, so you use existing ones. Of course you'd want sensor sizes that are about the same size as lenses you already have sitting around on your workbench.
We currently have the following sensor landscape in serious cameras:
- Canon—m4/3, APS, full—2x, 1.6x, 1x
- Leica—APS, full—1.5x, 1x
- Nikon—CX (1"), DX (APS), FX (full)—2.7x, 1.5x, 1x
- Pentax—sub 1", APS—~4x, 1.5x
- Sony—1", APS, full—2.7x, 1.5x, 1x
So the big three players are spreading their sensor bets (Canon, Nikon, and Sony each have three sensor sizes in serious cameras), while the rest have mostly picked a single size, typically m4/3 or APS. The "spread" approach is generally safer, as it means you can shift your focus (pardon the pun) quickly to the size that is resonating with consumers. The fixed approach is a long-term bet, as to introduce another size in your lineup means that your established base worries about obsolescence of their existing gear.
Personally, I don't think any of these sizes is going away soon, though both ends are slightly vulnerable long-term. The smallest sensors (1") are vulnerable to the emergence of "large sensor" cell phone cameras (witness the Nokia 808 with its larger 41mp sensor that's almost 1"*). The largest sensors (FX) are vulnerable to sensor technologies pushing the mid-size sensors clearly into the "good enough for everything I do" range for 99% of the market, leaving FX as the entrance to digital MF.
Now take what I just wrote and compare to the bets already on the table:
- 1"—2 bets
- m4/3—3 bets
- APS—7 bets
- FX—4 bets
Funny thing is, I see a lot of people writing on Web sites and in Internet forums that APS (DX) is "going away." I don't think so. The economics of sensor manufacturing will always make a sensor with over twice the area fundamentally more expensive. So while FX will come down in price, it's unlikely to come down fast enough to beat the technology improvements that are driving the mid-range sensors. The Sony 16mp APS sensor is darned good. It very well may already be in that "good enough for everything I do" range, and if it isn't, the Sony (and Nikon variant) 24mp APS (DX) sensor is a small cut above that. Samsung isn't very far behind that, and Fujifilm just launched a 16mp APS sensor that's marvelous in its low light performance (though slightly hampered in color smear artifacts by its current demosaic routines).
It's not just the cost of the sensor that comes down with APS; size, weight, and the cost of the lens come down, too (all else equal). To me, APS seems like a very good place to be if you want to hold off the upward march of camera phones. And judging from the bets that have been made, many of the camera makers agree. (m4/3 is closer to DX than it is to 1", so one could argue that both the m4/3 and APS sensors are the core of the long-term market. m4/3 gives up a bit on the sensor side to gain on the lens side.)
* The Nokia 808 sensor size is one of those exceptions to the stop-apart rule. It's an odd size that's about halfway between the sizes where you'd expect a sensor to land. I suspect that this is a backwards calculation. In other words, someone determined the maximum tolerable thickness that the sensor/lens combo could be, and that led to a particular, new sensor size. The Nokia approach is one you'll see a lot more of in the future: throw down as many pixels as possible and do the cropping (focal length) computationally. Downsize (41mp to 5mp) to reduce noise. To some degree, it echoes some of the design thoughts we had with the original Quickcam: just get the data over to a processor and let the processor deal with making the best of it. This is an approach that has developed into what I call computational photography: let the processor correct all lens/sensor issues and create what the user actually wants. If it's a smaller size image to put onto Facebook or the Web, you can crunch those 41mp in lots of interesting ways to make for a great final result, for instance. To some degree, we have a form of computational photography in the imaging ASICs in cameras (DIGIC, EXPEED, BIONZ, etc.), but the problem with the ASIC approach is that it is tuned for speed, so basically once a camera is designed, the computational aspects are frozen in time. There are cameras you can buy today whose ASICs were frozen four or five years ago.
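The downsize-to-reduce-noise part is just averaging. Here's a small NumPy sketch of the idea (a synthetic flat scene with made-up noise numbers, and a 3x3 binning standing in for the Nokia's roughly 41mp-to-5mp reduction; this isn't Nokia's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a high-resolution sensor readout: a flat gray scene
# plus per-pixel noise. The 0.05 noise level is purely illustrative.
scene = 0.5
hi_res = scene + rng.normal(0.0, 0.05, size=(1200, 1200))

# "Oversample and downsize": average non-overlapping 3x3 pixel blocks,
# analogous in spirit to the 41mp -> 5mp reduction the Nokia 808 does.
lo_res = hi_res.reshape(400, 3, 400, 3).mean(axis=(1, 3))

print(f"hi-res noise: {hi_res.std():.4f}")
print(f"lo-res noise: {lo_res.std():.4f}")
```

Averaging nine independent pixels cuts the random noise by about a factor of three (√9), which is why throwing pixels away computationally can still be a win.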
The thing I see missing in the computational aspect is what I call the "slow chew." Collect data (probably more than one sample for an individual shot, including a regular exposure, a shorter exposure, and a "no-light" exposure as well). Now feed that to the CPU in background cycles and just let it chew on the data in interesting ways. If you ask for an image review right away, you get the normal exposure as usual. If you ask for an image review 30 minutes from now, you get something that the camera decided was better. Of course, this would require a better processor, more memory, and more battery power to do right (plus, of course, some pretty advanced thinking on the algorithm side). In some ways my concept of slow chew mimics what we pixel peepers do in raw processing, but done right, it could go much further.
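To illustrate the slow-chew idea, here's a deliberately simple sketch of what the background pass might do with those three frames: subtract the "no-light" frame to remove fixed-pattern noise, and recover blown highlights from the shorter exposure. Everything here is an assumption of mine for illustration (the 2-stop bracket, the clipping threshold, the synthetic scene); no camera documents such a pipeline:

```python
import numpy as np

def slow_chew(normal, short, dark, highlight_thresh=0.98, stops=2):
    """Illustrative sketch of a background "slow chew" pass:
    dark-frame subtraction plus highlight recovery from a shorter
    bracketed exposure. The 2-stop bracket and the clipping
    threshold are assumptions, not any real camera's behavior."""
    blown = normal >= highlight_thresh           # pixels clipped in the normal shot
    base = normal - dark                         # remove fixed-pattern noise
    recovered = (short - dark) * (2.0 ** stops)  # rescale the under-exposure
    return np.where(blown, recovered, base)

# Synthetic test scene: a linear ramp whose top end clips in the normal shot.
scene = np.linspace(0.0, 2.0, 5)           # "true" light, in sensor units
dark = np.full_like(scene, 0.02)           # fixed-pattern "no-light" signal
normal = np.clip(scene + dark, 0.0, 1.0)   # clips above 1.0
short = np.clip(scene / 4.0 + dark, 0.0, 1.0)

print(slow_chew(normal, short, dark))  # recovers the full ramp, clipping and all
```

The point isn't this particular algorithm; it's that a processor with spare cycles and the raw bracketed data can keep improving the result long after the shutter clicked.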