Changing How Photographs are Taken
In recent years, a number of manufacturers have produced cameras capable of creating higher-resolution images through something called sensor-shift technology. This technology has been made possible by the advent of in-body image stabilization (IBIS). Camera designers have used IBIS as a way to achieve significant increases in image resolution or to improve the color information of the images being captured.
There are a number of names for this technology, including High-Resolution Mode, Pixel Shifting Resolution System, Pixel Shift Multi Shooting Mode, or the more generic pixel-shift and sensor-shift, but in the end the concepts behind them are all the same: multiple images of the same view are taken in such a way that they can be stacked and blended to create a single, usually large, high-resolution image.
This new technology has strengths and weaknesses, and understanding how it works can help you make better images yourself if your camera is capable of it.
NOTE: Because websites use lower-resolution images, the images in this article have been downsized and modified to simulate the differences between high-resolution images and the standard output from the cameras. Viewed in full, the images look similar; it is only when you look closely at the details that you start to see the differences.
Many Approaches to Sensor-Shift Images
Sensor-shift image capture has moved from expensive specialty cameras to become an increasingly available feature on newer, resolution-oriented cameras. Today, in addition to Hasselblad’s monster H6D-400c (400-megapixel images), there are offerings from Olympus, Pentax, Sony, and Panasonic.
These versions generally use the same conceptual approach but at much more accessible prices.
Who Uses Sensor-Shift?
Regardless of the manufacturer, the basic action of sensor-shift image capture remains the same: take multiple images, moving the camera’s sensor slightly for each one to capture more image data, and then merge the images together.
By moving the sensor around, the color data of the image improves, allowing more detail to be resolved by overcoming the inherent limitations of color-specific photosites. Ignoring the Hasselblad, the systems that use this technology include cameras such as the Olympus OM-D E-M1 Mark II (Micro Four Thirds), Pentax K-1 Mark II DSLR, Sony a7R III, and Panasonic Lumix DC-G9 (Micro Four Thirds), although there are others from the same manufacturers.
Three of these lines are mirrorless cameras, with the Pentax being a full-frame DSLR. It is interesting to note that Panasonic/Olympus take one approach and Pentax/Sony take a different approach to the same concepts.
The Olympus/Panasonic systems use an approach that produces very large high-resolution images, whereas the Pentax and Sony systems use the sensor shift to improve the color information of same-sized images. Both the Pentax and Sony systems also allow you to separate out the individual sensor-shifted images, whereas Olympus and Panasonic blend the stacked images into a single photograph.
How Does Sensor-Shift Technology Work?
To understand how sensor-shift technology works, you also need to understand how a sensor works at a very small scale. In the good old days of film photography, cameras used light-sensitive film to record images. Digital cameras use a very different approach to record light.
Digital cameras use light-sensitive photodiodes to record the light striking the sensor. In most digital cameras, each photodiode has a specific color filter (red, green, or blue), forming a photosite. These photosites are arranged so that their signals can be blended to recover the color of the light falling on the sensor.
The red, green, and blue photosites on a sensor are generally arranged in a specific pattern known as a Bayer array (a.k.a. Bayer matrix or Bayer filter). There are also other configurations, such as the Fuji X-Trans sensor (used on several of their camera models) or the Foveon sensor used by Sigma.
With a Bayer arrangement, there are twice as many green photosites as red or blue because human vision is most attuned to resolving detail in green. This arrangement generally works well, but if you think about it, a color pixel in an image is created by blending these photosites together.
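To make that layout concrete, here is a small sketch in Python/NumPy of how an RGGB Bayer sensor can be modeled. The function name and the exact RGGB layout are illustrative, not any manufacturer’s actual design:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate an RGGB Bayer sensor: each photosite keeps one color only.

    rgb: (H, W, 3) float array with H and W even.
    Returns the (H, W) raw mosaic plus an (H, W) map of which
    channel each photosite recorded (0=R, 1=G, 2=B).
    """
    h, w, _ = rgb.shape
    channel = np.ones((h, w), dtype=int)   # green everywhere by default...
    channel[0::2, 0::2] = 0                # ...red on even rows/even columns
    channel[1::2, 1::2] = 2                # ...blue on odd rows/odd columns
    rows, cols = np.indices((h, w))
    mosaic = rgb[rows, cols, channel]      # one color recorded per site
    return mosaic, channel                 # note: 2 green sites per 2x2 block
```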
The sensor does not know how much red there is at a green sensor location or a blue sensor location, so interpolation is required. This can create some artifacts in photographs if you look very closely, and it tends to mean that RAW images have an ever-so-slightly soft focus; all RAW images need some sharpening in post-processing.
In regular single-shot capture, each photosite only records the light of one color at that one spot, so the data it records is technically incomplete. It is like a bucket that only collects light of a particular color. A cluster of these light buckets in the Bayer pattern is used to create a single pixel in the digital image, but within that cluster there are two green buckets, one blue, and one red.
To meld the image together and assign a single color to each pixel, the signals from the cluster of photodiodes are resolved together. The collected data is interpolated via a demosaicing algorithm, either in-camera (JPEG) or on a computer (from a RAW image), a process that assigns values for all three colors to each photosite based upon the collective values registered by neighboring photosites.
The resulting colors are then output as a grid of pixels, and a digital photograph is created. This is partly why RAW images have a slightly softer focus and need to be sharpened in the post-production workflow.
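As a rough illustration of that interpolation step (real demosaicing algorithms are considerably more sophisticated than this bilinear sketch), each missing color can be filled in with a weighted average of the nearest photosites that did record it:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(mosaic, channel):
    """Naive bilinear demosaic: estimate the two missing colors at every
    photosite from the neighboring sites that recorded them."""
    kernel = np.array([[1., 2., 1.],
                       [2., 4., 2.],
                       [1., 2., 1.]])
    rgb = np.zeros(mosaic.shape + (3,))
    for c in range(3):
        mask = (channel == c).astype(float)
        # Normalized convolution: sum of nearby samples of this color,
        # divided by the (weighted) number of samples that landed there.
        rgb[..., c] = (convolve(mosaic * mask, kernel, mode='mirror')
                       / convolve(mask, kernel, mode='mirror'))
    return rgb
```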
IBIS means that the sensor now moves ever so slightly to compensate for subtle movements of the camera, keeping the image stable. Some manufacturers claim that their systems can stabilize the sensor and/or lens combination to the equivalent of 6.5 stops.
This stabilization is accomplished by micro-adjustments of the sensor’s position. For sensor-shift images, those same micro-adjustments are used to expose each photosite location to light of every color. In essence, the sensor is moved around not to compensate for external perturbations but so that each portion of the image contains full color information.
Photosites Rather Than Pixels
You may have noticed the term photosites rather than pixels. Cameras are often rated by their megapixels as a measure of their resolving power, but this is confusing because cameras do not actually have pixels, only photosites.
Pixels exist only in the image produced when the data from the sensor is processed. Even the term “pixel shift,” which is sometimes used, is misleading: pixels don’t move; it is the sensor, with its photosites, that moves.
In single-image capture, each photosite records data for red, green, or blue light. This data is interpolated by a computer so that each pixel in the resulting digital photograph has a value for all three colors.
Sensor-shift cameras attempt to reduce the reliance on interpolation by capturing color data for red, green, and blue for each resulting pixel by physically moving the camera’s sensor. Consider a 2×2 pixel square taken from a digital photograph.
Conventional digital capture using a Bayer array will record data from four photosites: two green, one blue, and one red. Technically, that means blue and red data is missing at the green photosites, green and red data at the blue photosite, and blue and green data at the red photosite. To fix this problem, the missing color values for each site are estimated during the interpolation process.
But what if you didn’t have to guess? What if you could have the actual color (red, blue and green) for each photosite? This is the concept behind sensor-shift technology.
Consider a 2×2-pixel square on a digital photograph created using pixel-shift technology. The first exposure begins as normal, with data recorded from the four photosites. Now, however, the camera shifts the sensor and takes the same picture again with a different photosite over each location.
This process is repeated until every spot on the sensor has been sampled in every color. Along the way, light data from four photosites (two green, one red, one blue) has been acquired for each pixel, resulting in better color values for each location and less need for interpolation (educated guessing).
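Extending the toy model from earlier (again just a conceptual sketch, not any vendor’s actual pipeline), the four one-photosite shifts can be modeled by rolling the Bayer channel map; after four shots, every location has been sampled once in red, once in blue, and twice in green:

```python
import numpy as np

def pixel_shift_capture(rgb, channel):
    """Toy four-shot pixel-shift capture.

    Shifting the sensor one photosite between shots is modeled by
    rolling the 2x2-periodic Bayer channel map; across the four shots
    every location records R once, B once, and G twice.

    rgb:     (H, W, 3) scene with H and W even.
    channel: (H, W) base Bayer channel map from bayer_mosaic().
    Returns an (H, W, 3) image that needs no color interpolation.
    """
    h, w, _ = rgb.shape
    rows, cols = np.indices((h, w))
    total = np.zeros((h, w, 3))
    count = np.zeros((h, w, 3))
    for dy, dx in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        ch = np.roll(channel, shift=(dy, dx), axis=(0, 1))
        total[rows, cols, ch] += rgb[rows, cols, ch]  # this shot's samples
        count[rows, cols, ch] += 1
    return total / count   # count is [1, 2, 1] everywhere: greens averaged
```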
The Sony and Pentax Approach
Sony’s Pixel Shift Multi Shooting Mode and Pentax’s Pixel Shifting Resolution System operate in this manner. It is important to note that using these modes does not increase the total number of pixels in your final image. The dimensions of your resulting files remain the same, but color accuracy and detail are improved.
Sony and Pentax take four images, each shifted by one full photosite, to create a single image. The result is simply better color information for every pixel in the image.
The Olympus and Panasonic Approach
The High-Resolution Mode of Panasonic and Olympus cameras, which both use Micro Four Thirds sensors, takes a slightly more nuanced approach, combining eight exposures taken ½ pixel apart from one another. Unlike the Sony and Pentax modes, this significantly increases the number of pixels in the resulting image.
From a 20-megapixel sensor, you get a 50-80-megapixel RAW image. Only a single image is produced, with no ability to access the individual frames of the sequence.
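The arithmetic behind those numbers is straightforward. A back-of-the-envelope sketch (the sensor dimensions here are illustrative of a typical 20-megapixel Micro Four Thirds chip):

```python
w, h = 5184, 3888                 # illustrative ~20 MP photosite grid
print(w * h / 1e6)                # 20.16 -> the single-shot resolution
# Half-photosite shifts sample the in-between positions as well,
# roughly doubling the sampling density in each dimension:
print((2 * w) * (2 * h) / 1e6)    # 80.6 -> the ~80 MP high-res RAW grid
```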
What are the Advantages of Using Sensor-Shift?
Using sensor-shift technology has several advantages. Taking multiple images and knowing the full color information for each photosite location accomplishes three main things: it decreases noise, reduces moiré, and increases the overall resolution of the images.
Noise, Moiré, and Improved Resolution
By taking multiple images with subtle changes in sensor position, the resolution of the image goes up, but so does the color information in the image. This allows you to drill much further into the image and find smoother colors, less noise, and better detail.
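The noise benefit comes largely from plain averaging: uncorrelated noise shrinks roughly with the square root of the number of exposures. A quick simulation of a single photosite (the values are arbitrary, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
signal, noise = 0.5, 0.05        # arbitrary true value and noise level
shots = signal + noise * rng.standard_normal((4, 100_000))

print(shots[0].std())            # one exposure: noise ~0.050
print(shots.mean(axis=0).std())  # average of four: ~0.025, i.e. 0.05 / sqrt(4)
```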
Moiré is the appearance of noise or artifact patterns in images containing tight, regular patterns. Newer sensors tend to have fewer issues with moiré than in the past, but it will still appear in some images.
Moiré tends to arise when the camera has trouble resolving a tight pattern because the pattern interferes with the layout of the sensor’s photosites. The red, green, and blue photosites struggle with edges in these tight patterns because not all the color information for a single location is recorded.
With sensor-shift, all the color information for each location is recorded, so moiré tends to disappear.
So Why Not Use This for Every Image?
Well, the main reason is that you have to take multiple images of a single scene, which means the technique doesn’t work well for moving subjects. The process requires, at a minimum, four times the exposure time of single-image capture. That translates into four opportunities for part of your composition and/or your camera to move during capture, degrading image quality.
Such constraints limit the technology’s application to still life and (static) landscape photography. Any movement in the scene being captured will create a blurry or pixelated area. This is a problem for landscape photography when wind is moving plants or clouds, or where running water is present.
It also means you usually need to be very stable and use a tripod, although manufacturers are clearly working toward versions that allow handheld shooting (Pentax already offers this feature).
Quirks of Some of the Systems
Because sensor-shift technology has been implemented in different ways, the problems vary a bit depending on the system used. The main quirk is that you generally need a tripod, so no run-and-gun shooting.
The Sony system has the added limitation that you cannot see the composite image until you process the four separate images together, which means you cannot review your resolved image on the camera. In addition, due to the high pixel count of the a7R III, any subtle movement of the tripod is particularly noticeable in the resulting image. To merge the images together, you also need to use Sony’s proprietary software.
Pentax has some interesting features. The software application that comes with the camera can address movement in the scene with an algorithm that removes motion artifacts. This works better than general-purpose image-editing software such as Adobe’s.
The Olympus system has been around a while, and in its most recent iteration on the Olympus OM-D E-M1 Mark II, any detected movement causes the affected pixels to be replaced with the corresponding parts of one of the single regular-resolution frames. This creates uneven resolution but makes the image look better for things like wind. It also has limitations, particularly if there is a lot of movement; often such images look a little pixelated.
The greatest challenge facing sensor-shift image capture is moving subjects. Additionally, trying to pair a strobe with a camera using pixel-shift image capture can be complicated by the speed of image capture, flash recycle limitations, and general compatibility problems. Manufacturers are aware of these problems and are working to resolve them.
Overall the Technology is Only Going to Get Better
More and more systems are using algorithms to produce these higher-resolution images. As the technology matures, the implementations will produce better and better results, potentially able to deal with movement and handheld conditions.
The advantage to manufacturers is that better-quality images can be produced without the need for really expensive high-pixel-density sensors. The advantage to the user is images with less noise and better color information for better final results.
Happy hunting for that perfect high-resolution image!