
Today, we’re happy to announce the launch of Raspberry Pi Camera Module 3. Four different variants of Camera Module 3, in fact, starting at the familiar price of $25. We’ve produced Camera Modules with both visible-light and infrared-sensitive options, and with either a standard or wide field of view (FoV). And in place of the fixed-focus optics of its predecessors, Camera Module 3 provides powered autofocus — which many of you have requested — allowing you to take crisp images of objects from around 5cm out to infinity. There’s a video demo further down the page.

Once again, we’ve partnered with Sony, this time using their back-illuminated IMX708 sensor: this offers higher resolution (twelve megapixels); a larger and more sensitive pixel design; and support for high-dynamic-range imaging.

The Camera Module was our very first official Raspberry Pi accessory. Launched in 2013, it was an instant success. We followed it later that year with the NoIR infrared-sensitive variant. Camera Module 2, built around Sony’s eight-megapixel IMX219 sensor, was released in 2016, and has served us faithfully ever since, selling two million units on its way to becoming our longest-lived flagship product.

Higher resolution, bigger pixels
Compared to IMX219’s 3280×2464 (8.1 megapixel) array of 1.12μm pixels, IMX708 provides a 4608×2592 (11.9 megapixel) array of 1.40μm pixels. The higher horizontal resolution implies an ability to image finer details; the 16:9 aspect ratio allows us to capture HD video which makes use of the entire sensor area; and the larger pixels, and more modern pixel architecture, translate into higher sensitivity, and better low-light performance.
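As a quick sanity check on those figures (sensor names and numbers as quoted above; this is a back-of-the-envelope sketch, not datasheet-exact), the megapixel counts and active-array dimensions follow directly from the pixel grids and pitches:

```python
# Compare the IMX219 (Camera Module 2) and IMX708 (Camera Module 3) pixel arrays.
sensors = {
    "IMX219": {"width_px": 3280, "height_px": 2464, "pitch_um": 1.12},
    "IMX708": {"width_px": 4608, "height_px": 2592, "pitch_um": 1.40},
}

for name, s in sensors.items():
    megapixels = s["width_px"] * s["height_px"] / 1e6
    # Active-array dimensions in millimetres: pixel count times pixel pitch.
    width_mm = s["width_px"] * s["pitch_um"] / 1000
    height_mm = s["height_px"] * s["pitch_um"] / 1000
    print(f"{name}: {megapixels:.1f} MP, {width_mm:.2f} x {height_mm:.2f} mm")
```

The larger pitch and larger pixel count together mean the IMX708 array is physically bigger in both dimensions, not just denser.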

Field-of-view options
The standard-FoV variants of Camera Module 3 provide a 66° (horizontal) FoV, which is a close match to its predecessor’s 62° FoV; for these variants, IMX708’s higher linear resolution translates directly into higher angular resolution, and a more detailed picture. Here are some comparison photographs so you can see the difference between the standard and wide fields of view.

The wide-FoV variants offer a 102° (horizontal) FoV; they spread IMX708’s higher linear resolution over a larger angle, yielding a slightly lower angular resolution than Camera Module 2, but enabling interesting new applications including digital panning.
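That angular-resolution comparison is easy to verify with a back-of-the-envelope calculation (treating pixels as evenly spread across the horizontal FoV, which ignores lens distortion):

```python
# Approximate horizontal angular resolution: pixels per degree of FoV.
cameras = {
    "Camera Module 2 (62 deg)": (3280, 62),
    "Camera Module 3 standard (66 deg)": (4608, 66),
    "Camera Module 3 wide (102 deg)": (4608, 102),
}

for name, (h_pixels, fov_deg) in cameras.items():
    print(f"{name}: {h_pixels / fov_deg:.1f} px/degree")
```

The standard variant comes out well ahead of Camera Module 2 (roughly 70 versus 53 pixels per degree), while the wide variant lands slightly below it, at about 45.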

Autofocus
Every camera module we’ve made to date has had fixed-focus optics: a physically static set of lenses, optimised to focus at infinity, but capable of yielding a reasonably sharp image of objects as close as a metre away.

Camera Module 3 introduces powered autofocus support for the first time. The lens assembly is mounted on a voice-coil actuator, allowing us to move it backwards and forwards relative to the sensor until a selected area (by default the middle) of the scene is optimally focused. Here’s Brian with a quick demo.

To select the appropriate lens position, we use the Phase Detection Autofocus (PDAF) capabilities of the IMX708 sensor, falling back to our own Contrast Detection Autofocus (CDAF) algorithm if a high-confidence PDAF result is not available. A nice bonus of PDAF is that it allows us to run the autofocus algorithm continuously during video recording, maintaining optimal focus as the camera, and objects in the scene, move.
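The CDAF fallback can be illustrated with a toy contrast-detection loop. This is only a sketch of the general technique, not the actual algorithm in our camera stack: sweep the lens through a range of positions, score each captured frame with a sharpness metric (here, mean squared image gradient), and settle on the best-scoring position.

```python
import numpy as np

def sharpness(frame):
    """Contrast metric: mean squared image gradient.
    Well-focused frames have stronger local gradients."""
    gy, gx = np.gradient(frame.astype(float))
    return (gx**2 + gy**2).mean()

def contrast_detect_af(capture_at, lens_positions):
    """Sweep lens positions, score each frame, return the sharpest position.
    `capture_at` stands in for driving the voice coil and grabbing a frame."""
    scores = {pos: sharpness(capture_at(pos)) for pos in lens_positions}
    return max(scores, key=scores.get)

# Simulated camera: a noise test pattern, blurred more the further the
# lens sits from a hypothetical in-focus position of 5.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))

def fake_capture(pos, focus_pos=5):
    blur = abs(pos - focus_pos) + 1
    kernel = np.ones(blur) / blur
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, scene)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)

best = contrast_detect_af(fake_capture, range(0, 11))
print("best lens position:", best)
```

A full sweep like this is why pure CDAF is slower than PDAF: every candidate position costs a frame, whereas phase detection estimates the correct lens position directly.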

High dynamic range
For me, this is the most exciting feature of Camera Module 3.

An image sensor captures an image by counting the number of photons that hit each pixel during some exposure time. The choice of exposure time is important: for a dark scene we want a long exposure to capture poorly illuminated details with the best possible signal-to-noise ratio, while for a bright scene we want a short exposure to avoid saturating the sensor and blowing out the image.

For a scene with both bright and dark regions (one with high dynamic range), there isn’t necessarily a single good choice of exposure time: you’re faced with an invidious choice between blowing out the bright regions or underexposing the dark ones. High-dynamic-range (HDR) sensors like IMX708 tackle this problem by taking multiple simultaneous exposures with different exposure times. We can then select the exposure which best captures the detail in each region of the image, and apply a tone mapping process to compress the dynamic range of the result for display or storage.
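A minimal sketch of that select-and-tone-map idea (illustrative only; real HDR pipelines fuse exposures far more carefully, and the merge logic here is an assumption for demonstration): prefer the long exposure wherever it hasn't saturated, scale the short exposure up to the same reference where it has, then compress the merged result with a simple global tone curve.

```python
import numpy as np

def fuse_hdr(short, long_, ratio, saturation=0.95):
    """Merge two simultaneous linear exposures of the same scene, both in [0, 1].
    `long_` was exposed `ratio` times longer than `short`. Where `long_`
    saturates, fall back to the short exposure scaled to the same reference."""
    return np.where(long_ < saturation, long_, short * ratio)

def tone_map(radiance):
    """Reinhard-style global operator: compresses any radiance into [0, 1)."""
    return radiance / (1.0 + radiance)

# Synthetic scene radiance spanning deep shadow to 8x the long exposure's
# saturation point.
scene = np.array([0.01, 0.2, 0.9, 4.0, 8.0])
long_exp = np.clip(scene, 0, 1.0)         # long exposure: clean shadows, clipped highlights
short_exp = np.clip(scene / 8.0, 0, 1.0)  # 8x shorter: highlights preserved

merged = fuse_hdr(short_exp, long_exp, ratio=8.0)
print(tone_map(merged))
```

In this toy example the merge recovers the full scene radiance exactly, and the tone curve then squeezes it back into displayable range without clipping either end.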

The resulting images have a quarter of the resolution of a non-HDR shot, but I think the results speak for themselves.

And one more thing – High Quality Camera with M12 mount
Launched in 2020, the Raspberry Pi High Quality Camera supports interchangeable lenses conforming to the C- and CS-mount standards. Some of our industrial customers have been adding adapters to allow them to use M12-mount lenses: fisheye and other specialist lenses are more readily available in this format.

To better support these customers, today we’re also launching a variant of the High Quality Camera with a native M12 mount at $50. This eliminates the need for an adapter, and supports a much broader selection of lenses due to the increased back focus flexibility.

Camera Module 3 is compatible with all Raspberry Pi computers with CSI connectors — that is, all except Raspberry Pi 400 and the original 2015 version of Zero, which lacks a camera connector. Board dimensions and mounting-hole positions are identical to those of Camera Module 2. Due to changes in the size and position of the sensor module, it is not mechanically compatible with the camera lid of the Raspberry Pi Zero Case.

The new Camera Module 3 is only supported by the modern libcamera software environment and by the libcamera-based Picamera2 beta under Raspberry Pi OS Bullseye, and not by the legacy closed-source camera stack – you’ll need to make sure you have the latest version of the software before you dig in, as only the latest release has autofocus support. But don’t worry, the hardware and software documentation has been updated to get you going, including the documentation around our Picamera2 library.

The Camera Module 3 hardware, and the updated High Quality Camera, were designed by Simon Martin. Naush Patuck, Nick Hollinghurst, David Plowman, Serge Schneider and Dave Stevenson wrote the software. Alasdair Allan, David Plowman, Andrew Scheller and Liz Upton worked on documentation. Austin Su led on sourcing. Jack Willis designed the packaging, and Brian O’Halloran (not included with camera) took the shiny photos.

We’d like to acknowledge the assistance of Phil Holden and John Conroy at Sony, Chongqing TS-Precision Technology, and Shenzhen O-HN Optoelectronic.
