Moore’s Law, an off-the-cuff observation by Intel co-founder Gordon Moore back in the 1960s, has come to carry the same credibility as a law of physics. Where laws of physics rest on the conclusions of hundreds of scientific observations and experiments, Moore’s Law is a single empirical observation, yet it has proved remarkably accurate in predicting the growth of processing power and storage capacity.
Moore’s Law states that the number of transistors on an integrated circuit will roughly double every two years. It’s a rule of thumb generally applied to processing and storage capacity, but it affects any device that depends on a microprocessor, and edge devices like the IP video cameras used in today’s surveillance installations are no exception. In fact, these devices have historically outpaced Moore’s Law.
One obvious, and necessary, improvement would be in resolution. Since the vast majority (about 99 per cent) of video is not monitored in real time, forensic applications are important, and they demand a high level of resolution for identification, clothing detection, or even reading the denomination of a bill being handed to a customer. But can the capacity for resolution become self-defeating? If IP video resolution continued to follow Moore’s Law, today’s 1.3-megapixel 720p high-definition camera could become a 40-megapixel camera by 2020, with a resolution of some 6,320 lines. The newest ultra-high-definition displays boast about 4,000 lines. Is a 40-megapixel camera even usable?
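The arithmetic behind that projection is simple compounding. Here is a minimal sketch, assuming the 1.3-megapixel baseline above, a strict doubling every two years, and a ten-year horizon (the horizon is an assumption, and it places the starting point around 2010):

```python
# Project sensor resolution under a Moore's Law-style doubling.
# Assumptions: 1.3 MP baseline, doubling every two years, ten years out.

def project_megapixels(base_mp: float, years: int, doubling_period: int = 2) -> float:
    """Compound the resolution through successive doublings."""
    return base_mp * 2 ** (years / doubling_period)

if __name__ == "__main__":
    mp = project_megapixels(1.3, years=10)
    print(f"Projected resolution: {mp:.1f} MP")  # ~41.6 MP, i.e. roughly 40 MP
```

Five doublings in ten years is a 32-fold increase, which is how 1.3 megapixels lands in the neighbourhood of 40.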
The answer: not right now. The huge increase in data transmission, data storage and data analysis volumes associated with a fleet of such cameras would be impractical to manage with today’s technology. That said, technology’s always on the march, and Moore’s Law will very likely take care of those issues.
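A rough back-of-envelope estimate shows the scale of the problem. The frame rate, bit depth, compression ratio and fleet size below are illustrative assumptions, not figures from any vendor’s spec sheet:

```python
# Back-of-envelope bandwidth for a fleet of 40 MP surveillance cameras.
# Every parameter below is an illustrative assumption.

MEGAPIXELS = 40e6     # pixels per frame
BITS_PER_PIXEL = 12   # raw sensor bit depth (assumed)
FPS = 15              # surveillance frame rate (assumed)
COMPRESSION = 100     # assumed H.264-class compression ratio
CAMERAS = 100         # size of the fleet (assumed)

raw_bps = MEGAPIXELS * BITS_PER_PIXEL * FPS
compressed_bps = raw_bps / COMPRESSION
fleet_gbps = compressed_bps * CAMERAS / 1e9
print(f"Per camera: {compressed_bps / 1e6:.0f} Mbit/s; fleet: {fleet_gbps:.1f} Gbit/s")
```

Even with generous compression, a modest fleet lands in the multi-gigabit range before a single byte is stored or analyzed.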
But other essential camera components are not digital and, unfortunately, don’t benefit from Moore’s Law. Lens quality doesn’t double every two years, for example, and has not kept pace with image sensor and in-camera chip advances. Nor has the ability to physically isolate a camera from the vibration that blurs an image. Fuzzy pixels are fuzzy pixels, no matter how many of them you have. And lastly, the more pixels a camera has, the more light it generally needs to “see.” Low-light technology is improving, but high-megapixel cameras, even the ones inside your smartphone, don’t perform well in the dark.
In a forensic context, a 40MP camera gives you far more data to work with than a 1.3MP camera, even when its pixels are starved for light in a dim scene. But in a world where real-time, non-forensic camera data is driving business value, what can Moore’s Law do for you?
Those processing-power improvements can be put to work on problems like digital image stabilization, for a start. Unused horsepower can also improve low-light image quality. And there’s a move afoot to drive intelligence out to the network edge, where data analysis can take place in a “fog” environment. (It’s like a cloud, but closer to the ground. Get it?)
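As an illustration of the first point, here is a minimal digital image stabilization sketch built on phase correlation between consecutive frames, one common approach; the input file name is a placeholder, and a production stabilizer would also handle rotation and motion smoothing:

```python
# Minimal digital image stabilization sketch: estimate frame-to-frame
# translation with phase correlation, then shift each frame back.
# "footage.mp4" is a placeholder file name.
import cv2
import numpy as np

cap = cv2.VideoCapture("footage.mp4")
ok, prev = cap.read()
prev_gray = np.float32(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = np.float32(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    (dx, dy), _ = cv2.phaseCorrelate(prev_gray, gray)  # estimated drift
    # Shift the frame back by the estimated drift to cancel the jitter.
    M = np.float32([[1, 0, -dx], [0, 1, -dy]])
    stabilized = cv2.warpAffine(frame, M, (frame.shape[1], frame.shape[0]))
    prev_gray = gray
    # ...write or display `stabilized` here...

cap.release()
```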
Furthermore, and maybe most exciting, cameras of the future will deliver predigested data for real-time applications that aren’t all security-related; they’ll be driving business decisions that affect customer traffic, pricing, promotions and more. As software development in the surveillance world continues to improve, we might soon be saying, “Have a surveillance goal? There’s an app for that.”
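What that “predigested” output might look like is sketched below; the event schema and field names are invented for illustration, not drawn from any real product:

```python
# Hypothetical edge-analytics output: instead of streaming raw video,
# a camera-side app publishes compact, decision-ready event data.
# The schema and field names here are invented for illustration.
import json
import time

def make_event(zone: str, people: int, dwell_seconds: float) -> str:
    """Package one retail-analytics observation as a small JSON payload."""
    return json.dumps({
        "timestamp": time.time(),
        "zone": zone,
        "people_count": people,
        "avg_dwell_s": dwell_seconds,
    })

print(make_event("checkout", people=4, dwell_seconds=37.5))
# A few hundred bytes per event, versus megabits per second of raw video.
```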
In those contexts, more pixels aren’t necessarily better. What matters is an image that’s usable in real time, with the spare horsepower put to work driving new surveillance innovations.