In part 1 we made a little detour to build up some understanding of telescopes. During that research I also came across expensive camera modules dedicated to astrophotography. Here is an example of such a camera:

The product above is the Bresser Explore Scientific Deep Sky Astro Camera 1.7MP. The camera has USB2/3 interfaces, a 12V power supply and a 1.7MP (1600×1100 pixels) image sensor, and comes at a price tag of nearly € 1500. Nothing extraordinary, you think, aside from the extremely steep price you pay for barely a handful of pixels! I mean 1.7MP… not even my first digital camera from 20 years ago had such a low pixel count in its image sensor! Am I missing something here? Indeed: the camera uses a special image sensor made for very low-light conditions. It combines high sensitivity with low noise, and also has an active Thermo-Electric Cooling (TEC) element to further improve image quality. So what you get is maybe not a whole lot of pixels, but the quality of each pixel should be superb under low-light conditions.
So what’s so special about this image sensor that it is supposed to offer such superior low-light image quality? Let’s dive into the details!
Sony Exmor IMX432 CMOS

The camera uses the Sony Exmor IMX432 CMOS sensor. Let’s have a look at some of its specs:
- FPS: 98.6
- Size: 1.1 inch
- Pixel size: 9.0µm x 9.0µm
- Resolution: 1.78M (1608 x 1104)
- Shutter: global
- Signal: monochrome (IMX432LLJ) or RGB (IMX432LQJ)
- Illumination: front-illuminated
- Technology: CMOS
Resolution

I don’t think resolution needs much explanation. The image sensor is essentially a grid of individual pixels that together make up the entire image. The grid consists of horizontal rows and vertical columns, so the sensor resolution is expressed not only as a total pixel count but also as the number of pixel columns and rows (W x H). The resolution tells you something about the sharpness of a picture (assuming you focused your lens correctly) and about the image aspect ratio (e.g. 16:9 versus 4:3).
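As a quick sanity check, here’s a tiny Python snippet that derives the megapixel count and (reduced) aspect ratio from the IMX432’s 1608 x 1104 resolution:

```python
from math import gcd

# Resolution of the IMX432 as listed above
width, height = 1608, 1104

megapixels = width * height / 1e6      # total pixel count in megapixels
d = gcd(width, height)                 # reduce W:H to its simplest form
print(f"{megapixels:.2f} MP, aspect ratio {width // d}:{height // d}")
# -> 1.78 MP, aspect ratio 67:46 (close to 3:2)
```
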
Sensor technology: CMOS vs CCD
An image sensor made with either CMOS or CCD technology uses the photoelectric effect to convert light photons into electric signals. The places where this happens can be thought of as buckets that collect the light; they’re generally called pixels. The way these pixels work is however different for the two technologies.

CCD stands for Charge-Coupled Device. The sensor is an analog device that gets charged by light. The exposure starts and stops for all pixels at once, which we technically call a global shutter. The photoelectric charge of each pixel is moved into a serial shift register (SSR) that sits in a layer below the CCD layer, and is then amplified and AD (analog-to-digital) converted into electrical signals at the output. The charge is only converted once, which is highly beneficial for keeping noise low. Until recent years CCD sensors performed really well in low-light conditions and in the near-infrared range (NIR).
CMOS stands for Complementary Metal Oxide Semiconductor. The major difference here is that each pixel has its own amplifier, and sometimes also its own ADC. This makes CMOS sensors more susceptible to noise. On the other hand it allows multiple pixels to be read out simultaneously, so CMOS is typically faster than CCD tech. Other motivators for using CMOS are that these sensors are typically less power hungry and mostly come at a lower cost.
Where in the past CCD was mostly preferred for good image quality in low-light conditions, in recent years CMOS improvements have largely compensated for its drawbacks and the market has mostly moved to CMOS. CCDs are a dying breed, and it should be no surprise that once you start looking around for the right image sensor for your application you’ll mainly (if not only) find CMOS sensors.
Speed (fps)
The speed of a sensor (aka frame rate) might not be something you’ve found to be a limiting factor for most of your everyday pictures. CMOS is generally known to be faster, since CCDs must transfer their charges through the horizontal shift register. A high FPS tells you that the sensor is able to achieve really short exposure times. Whether the picture is really useful is a different question. Frame rate absolutely matters when the objects you’re trying to capture move fast across the canvas. A good example is sports, where you as a photographer want to capture the moment an F1 car passes by. With low frame rates you could end up half-missing the object or missing it entirely, while a high FPS lets you take multiple shots of the car while it passes by and therefore gives you more opportunities to get that perfect picture.
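To put some rough numbers on that intuition (the car speed below is an assumed ballpark figure, not a measurement), here’s how far a fast-moving subject travels between two consecutive frames:

```python
# How far does an F1 car travel between two consecutive frames?
# The 300 km/h speed is an assumed ballpark figure.
speed_ms = 300 / 3.6                 # ~83.3 m/s

for fps in (30, 98.6, 240):          # 98.6 fps is the IMX432's listed maximum
    print(f"{fps:6.1f} fps -> {speed_ms / fps:.2f} m between frames")
# 30.0 fps -> 2.78 m, 98.6 fps -> 0.85 m, 240.0 fps -> 0.35 m
```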

Some manufacturers have been creating image sensors specifically for Machine Learning and Computer Vision applications, where a high FPS can definitely make the difference in snapping a good picture for your AI application. Sometimes these sensors have larger pixel wells (more on that later) for low-light conditions and a global shutter to take away any distortion effects. And that may come in handy for our astro shots!
Global vs Rolling shutter
The shutter of a camera is the thing that sits at the front and controls how much light goes in and for how long we keep collecting it.

The larger the opening diameter (aperture), the more light will fall into the camera obscura and the shorter you’ll need to keep the shutter open to collect the amount of light needed to build your picture. The less time you need to expose the sensor, the less distorted the picture will be when there are moving parts to be captured. It’s entirely similar to what we learned about our telescopes. So while you want to keep the exposure time as short as possible, you also need to make sure you collect enough light so that the picture doesn’t come out too dark.
The shutter has been mechanical for decades, but nowadays there is also the electronic shutter, which basically ‘enables’ the image sensor to collect light, or ‘disables’ it when not needed. This can be done in different sequences, each with their own pros and cons. The first sequence is the rolling shutter. In this shutter mode the exposure starts at certain lines in the sensor array and builds up until the entire sensor is capturing. Similarly, when the shutter ‘closes’ the sensor is blocked from being illuminated line by line until the entire array is ‘off’ again. It’s as if a readout ‘wave’ is sweeping across the image sensor. The benefit here is that the sensor can already start the exposure of the second frame while it’s still busy reading out the first. You get a 100% duty cycle and therefore a higher maximum frame rate. The downside is that each line is captured during its own time window: the first lines start collecting light earlier than the last ones, so when large objects move fast across the image surface they will look distorted.
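A minimal, purely illustrative simulation of that effect: a vertical bar moves to the right while the ‘sensor’ samples one row at a time, and the bar comes out slanted:

```python
# Toy rolling-shutter simulation: each row is sampled at a later moment,
# during which the vertical bar has moved further to the right.
ROWS, COLS = 8, 24
BAR_X, BAR_SPEED = 2, 1.5            # start column, columns moved per row readout

for row in range(ROWS):
    x = int(BAR_X + BAR_SPEED * row)             # bar position when this row is read
    print("".join("#" if c in (x, x + 1) else "." for c in range(COLS)))
# The straight vertical bar is rendered as a diagonal: rolling-shutter skew.
```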

Here is another picture that tries to explain what happens with a rolling shutter:

With the global shutter this is different. The exposure begins and ends for every pixel at the exact same moment, which allows the sensor to capture the whole scene in one shot. The downside is the time needed for charge build-up and discharge. There are however also techniques where the next exposure can already proceed while the previous one is being read out from the pixels’ readout nodes, allowing the global shutter to reach a 100% duty cycle as well. And since the global shutter is much easier to synchronize, in practice it often gets you the highest FPS.

Here is a typical example that illustrates the distortion that happens with rolling shutters and fast moving objects:

Since in astrophotography the camera is preferably tracking the sky object properly, and the image is therefore rather still (and not shaken), we may assume it doesn’t really matter which shutter technique is used. You don’t particularly need a global shutter unless you’re trying to capture, for example, the ISS passing by. But you may coincidentally end up with a global shutter anyway, since they’re often used in ML applications where low-light conditions are an issue that has to be dealt with too. In any case, the IMX432 as used in the Bresser ASTRO Camera 1.7MP has a global shutter.
Pixel size

Pixel size is something that isn’t mentioned a lot when talking about the specs of a camera. Often you need to dig a bit deeper to find out what the actual pixel size is. Plus, most people have more or less settled over the years on the idea that the more pixels your image sensor has, the better the quality of the camera. Well, that’s far from entirely correct! The pixel size also plays an important role.

Sensor pixels are like buckets that collect light photons. The larger the bucket (= pixel), the more light gets captured. This is sometimes referred to as the Full Well Capacity: the amount of charge that can be stored within an individual pixel at a given operating voltage. As you can see from the above picture, an increase in pixel size of 1.5 times results in 3.8x more charge capacity. Once the buckets are full you’ve reached the maximum capacity, a phenomenon called saturation. Cameras should be designed to use the full dynamic range up to the saturation level when taking pictures. Some cameras allow you to display saturation charts on their display. Blooming is another phenomenon that happens when pixels are unable to hold additional charge: the charge starts to spread into neighboring pixels.
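Full well capacity, together with the read noise, also determines the sensor’s dynamic range. The formula below is the standard one, but the two input values are made-up placeholders rather than datasheet numbers:

```python
import math

# Dynamic range follows from full well capacity and read noise:
#   DR(dB) = 20 * log10(full_well / read_noise)
full_well_e = 90_000    # electrons a large pixel might hold (assumed value)
read_noise_e = 7.0      # electrons of read noise (assumed value)

dr_db = 20 * math.log10(full_well_e / read_noise_e)
dr_stops = math.log2(full_well_e / read_noise_e)
print(f"Dynamic range: {dr_db:.1f} dB ({dr_stops:.1f} stops)")
# -> roughly 82 dB, or close to 14 stops
```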

There is a wide range of image sensors out there, but mostly you’ll find them to have a pixel size between 0.5µm and 10µm. The image sensors in your smartphone tend to be around 1µm, depending on the brand, and of course nowadays smartphones are equipped with more than one image sensor; we’ve seen some of the higher quality cameras go up to 2µm or beyond. This is where the IMX432 sensor really shines: its pixels are 9µm in size, which means they collect tons of extra photons and are very light sensitive!
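Since the collected light scales with pixel area, the difference with a typical 1µm smartphone pixel is easy to compute:

```python
# Light gathered per pixel scales with its area, i.e. the square of the pitch.
imx432_pitch_um = 9.0
phone_pitch_um = 1.0        # the typical smartphone pixel mentioned above

area_ratio = (imx432_pitch_um / phone_pitch_um) ** 2
print(f"A 9 um pixel collects ~{area_ratio:.0f}x the light of a 1 um pixel")
# -> ~81x
```
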
Image sensor size

The size of an image sensor is mostly tied to the use case (and price range). Here is a common graph to give you some idea:

As you’ve already learned, an image is made out of pixels, where pixels can differ in size, but also the number of pixels (resolution) can differ a lot. Both combined result in a sensor of a given size. The bigger the image sensor, the more pixels you can fit, or the larger you can make each individual pixel. It also means more light will fall onto the sensor. However, the bigger the image sensor the more expensive it gets, and not even on a linear scale. Smaller sensors are found in smartphones not only because of the space available but also to keep the price of the device acceptable. In the high-end camera range that’s something entirely different, and that’s partially why those professional Nikon and Canon devices are so bloody expensive. There is however a trend in smartphone cameras that, certainly in the higher-end segment, the sensors keep on growing over time. As an example, Apple has increased the specs of the image sensor in iPhone devices from a 12MP sensor with 1.22µm pixels in the iPhone X to a 48MP sensor with 2.44µm wide pixels.

At this moment the high-end smartphone market is shipping image sensors that sometimes exceed the size of those found in compact DSLRs! That also means a big increase in the Bill of Materials for those cameras, which is why the biggest improvements are always found in the high-end market segment.

Given all this you’ll by now understand that the IMX432 has insanely large pixels, easily outperforming all smartphone cameras in low-light conditions. You may think the resolution is so low because it’s just a plain old chip sold for way too much money, but the image sensor is actually not tiny at all, measuring 17.55mm in diagonal. Compared to the IMX432, the iPhone X boasts a 6.7 times higher resolution, but the IMX432 comes with 7.4 times bigger pixels. The IMX432 is simply a totally different breed than all of those DSLR and smartphone cameras out there. It is made for this one purpose, and that’s also what makes it so expensive.
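Those numbers can be verified from the raw specs with a few lines of Python:

```python
import math

w_px, h_px, pitch_um = 1608, 1104, 9.0                 # IMX432 specs from above
diag_mm = math.hypot(w_px * pitch_um, h_px * pitch_um) / 1000
print(f"IMX432 diagonal: {diag_mm:.2f} mm")            # -> 17.55 mm

# iPhone X main sensor (from the text): 12 MP with 1.22 um pixels
print(f"Resolution ratio: {12.0 / 1.78:.1f}x")         # -> 6.7x
print(f"Pixel size ratio: {9.0 / 1.22:.1f}x")          # -> 7.4x
```
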
Micro lenses
Image sensors contain electronic circuitry such as the photodiodes that perform the light-to-charge conversion, but also wires, transistors, capacitors, … all sharing the same volume. Since the sensitivity of a pixel largely depends on the amount of light that can fall onto it, you’ll understand that the extra electronics impact pixel performance. The fill factor of a pixel describes the ratio of light-sensitive area to the total area of the pixel:
fill factor = light sensitive area of pixel / total area of pixel
For older CCDs that ratio would only be about 30%, which means 70% of the incoming light gets lost. That’s a severe loss. CMOS sensors, which carry even more electronics, perform even worse. Some of that loss has been compensated with micro lenses.
As you can see, the micro lenses help funnel more light onto the light-sensitive area. This boosts the so-called quantum efficiency of the pixel to around 50% to 70% for CCDs that have a fill factor of only 30%. In more recent years even higher efficiencies have been reached, mostly by optimising the micro lenses and therefore without changing the fill factor. The micro lenses are however not perfect either: they filter and weaken UV light, and the quantum efficiency also depends on the angle of incidence.
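To make the fill factor formula above concrete, here’s a toy calculation using the numbers from the text. Note that it loosely treats the microlens-boosted quantum efficiency as if it were a fill factor, purely for illustration:

```python
photons_in = 10_000                  # photons hitting one pixel's total area

fill_factor = 0.30                   # old CCD: 30% of the area is light sensitive
qe_with_lenses = 0.60                # assumed mid-range of the 50-70% quoted above

print(f"Bare pixel:        ~{photons_in * fill_factor:.0f} photons reach the photodiode")
print(f"With micro lenses: ~{photons_in * qe_with_lenses:.0f} photons contribute to the signal")
# Micro lenses roughly double the usable light in this example.
```
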
Parasitic Light Sensitivity (PLS)
CMOS sensors as a whole, but nowadays especially global shutter sensors, are becoming more popular due to industry demand. They are rapidly replacing the CCDs that offered superior pictures more than a decade ago. These global shutter CMOS (GS-CMOS) sensors are however affected by parasitic light sensitivity and dark current. Manufacturers apply specific optimizations to overcome those issues as much as possible.

The IMX432, as part of the Sony Pregius series, focuses on low dark current and low PLS characteristics while achieving high sensitivity. PLS can be reduced by lowering the light incident on the memory node (MN). Over the history of the Pregius sensors there have been small optimizations, both electrical and optical, so that resolution could be increased while PLS was kept at similarly low levels. Going into the details is beyond this blog post, but if you’re interested you should definitely take a look at the Sony Pregius series website. PLS is not a common metric to be found in a sensor’s datasheet, but for your astro stuff it’s better to be on the lookout for those that do advertise low PLS values.
Front vs back illuminated
Another innovation is the so-called backside-illuminated sensor. CMOS sensors typically have more electronics and therefore lower fill factors when the pixels get packed really tight in a high-resolution chip. The back-illuminated sensor reverses the internal chip layers so that most of the wiring and electronics sit behind the photodiodes. The term back-illuminated refers to the fact that the sensor chip is now mounted in reverse and therefore seemingly illuminated from the back.

As the drawing illustrates, back-illumination drastically improves the sensitivity of the chip. For CMOS chips we’re nowadays seeing quantum efficiencies above 90%! There are however also a number of drawbacks. There is higher dark current and added noise compared to the front-illuminated counterparts. There is also decreased sharpness, although with the help of micro lenses this issue can mostly be solved. And the manufacturing process is more complex and therefore brings extra costs.
The IMX432 however is front-illuminated. Given its large pixel buckets and relatively low resolution, the choice for FSI is not really an issue. The front-illumination even assures a lower dark current, which improves the image quality in this case. Backside illumination came in a later generation of Sony Pregius sensors, where the pixel size was slightly reduced to offer higher resolutions.
Mono vs RGB
By now we know that image sensors turn light photons into electrical charge. We haven’t talked about color-specific pixels yet; so far we’ve mostly been discussing pure (mono) image pixels. Light typically behaves as a wave, which allows us to capture and see light of very specific colors (wavelengths). Image sensors are certainly not equally sensitive to all colors: they perform better or worse depending on the wavelength they’re looking at. To build a color image in RGB (Red-Green-Blue) a technique called the Bayer filter is used. The filter only allows light of a specific color (wavelength) to pass. Therefore one pixel tells you something about the red color tone within that part of the image, while a neighboring pixel does the same for the green color tone. Typically a Bayer filter consists of 50% green filters, 25% red filters and 25% blue filters. This is done specifically to imitate the human eye, which is most sensitive to green light.

To correctly build up the resulting image, the values of multiple pixels need to be combined to give the color as we perceive it. This process is typically performed in the ISP (Image Signal Processor), a semiconductor block that deals with the raw pixel data. Many cameras will also add an IR cut filter that removes the noise from near-infrared wavelengths, but this really depends on what the camera will be used for; sometimes you rather want to capture those waves too. The sensitivity of an RGB sensor is therefore even more complex than that of a mono sensor.
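A minimal sketch of what that combination step looks like, assuming an RGGB Bayer layout. Real ISPs interpolate a full color per pixel (demosaicing); averaging each 2x2 cell is just the simplest possible stand-in:

```python
import numpy as np

raw = np.random.randint(0, 1024, (4, 4))        # fake 10-bit raw Bayer data

# RGGB layout: R at even/even, G at even/odd and odd/even, B at odd/odd
r = raw[0::2, 0::2]
g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2     # average the two green samples
b = raw[1::2, 1::2]

rgb = np.dstack([r, g, b])                      # one RGB value per 2x2 cell
print(rgb.shape)                                # -> (2, 2, 3)
```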

Therefore the RGB sensor is probably the sensor type that suits us best, as it allows us to build up an image similar to how we perceive the living world: in full color. For the purpose of art we can always remove the colors again afterwards using any kind of image processing software; many cameras and smartphones even have such features on board. But there is a trade-off here. Each pixel only gathers light of one specific color, which means some information gets lost. The resolution is slightly lower, and sensitivity is impacted due to the color filters. As you can see from the above chart, the QHY183M/C CMOS sensor has a mono variant that slightly outperforms the RGB variant in quantum efficiency.

So don’t judge those mono sensors as being old-fashioned; they cover parts of the market that really aim to use their specific properties. Security cameras, for example, use mono sensors especially for low-light conditions, since in the dark most color is absent anyway. Furthermore, mono sensors are also more sensitive to IR light, which makes them a good candidate to combine with light sources that are nearly invisible to the human eye. In other words: in the dark this camera sees you, but you will hardly be able to spot it.

The IMX432 also comes in two variants, of which the color variant is used in the Bresser ASTRO Camera that we highlighted at the beginning of this article. For astrophotography the RGB sensor is an obvious choice as it gets you that colored shot at once. It’s without doubt the best option for anyone who starts doing astro shoots. With monochrome cameras, when you want a color picture as the end result (which you mostly do), you’re going to have to take 4 pictures (through 4 filters) and also perform post-processing. Great software such as PixInsight will tremendously help you achieve that great end result, but it’s going to take you some effort and hassle. A color sensor will give you a slightly lower quality end result way quicker and is therefore the recommended option for anyone starting out in astrophotography.

The picture underneath is a comparison of the M33 galaxy, first through a color sensor and then through a mono sensor (3x). The exposure is kept similar, though the mono sensor requires 3 shots so the total exposure is actually 3 times longer.

Pixel binning
Pixel binning is a technique that combines neighboring (usually 4) pixels into one larger virtual pixel. In the case of 4-pixel binning we typically speak of a quad Bayer filter. A quick calculation tells us that a 1µm sensor could therefore produce results similar to a 2µm sensor, i.e. more light sensitivity but also a 4x lower resolution.
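For a monochrome frame the operation is straightforward; here is a minimal NumPy sketch of 2x2 binning (on a color sensor the Bayer pattern makes this more involved, as described below):

```python
import numpy as np

# 2x2 binning: sum four neighboring pixels into one virtual pixel,
# i.e. 4x the collected signal at a quarter of the resolution.
raw = np.random.randint(0, 256, (1104, 1608))   # mono frame at IMX432 resolution

binned = (raw[0::2, 0::2] + raw[0::2, 1::2] +
          raw[1::2, 0::2] + raw[1::2, 1::2])

print(raw.shape, "->", binned.shape)            # (1104, 1608) -> (552, 804)
```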

The above picture shows how the Samsung ISOCELL HP1 sensor with 200MP uses pixel binning to deal with low-light conditions. The real pixels are only 0.64µm large and therefore don’t collect a lot of light, though in bright conditions the insane 200MP resolution can be used, resulting in super sharp images. Pixel binning is rapidly finding its way into the market. However, as you can see, the Bayer filter is not perfectly aligned for virtually squashing adjacent pixels into one. Complicated processing comes along, and as you’ll understand some information gets lost; pixel binning is therefore not an exact replacement for the large pixels it tries to imitate, more of a trick to get close. Pixel binning is not typically done on low-resolution sensors; it’s something seen only in recent years and mostly on high-res chips. The IMX432 also belongs to that low-res category: thanks to its already very large pixels it doesn’t need pixel binning to improve image quality, and the further loss of resolution would have a much greater impact on this chip.
TEC cooling

While not exactly an image feature, the Bresser Explore Scientific Deep Sky Astro Camera 1.7MP carries one more thing onboard to improve image quality: Bresser added a Thermo-Electric Cooler to their camera. TEC devices are electronic coolers that push heat from one side (cold plate) to the other side (hot plate) when a given amount of current is applied. TECs are often used when the amount of heat that needs to be dissipated isn’t gigantic and when the working area is rather compact. TECs can be found in all sorts of sizes and formats, and you can even stack them or put them in parallel. The biggest drawback is that they consume a considerable amount of energy. TECs are well known to be power hungry, and they’re also less efficient than phase-change coolers such as your fridge.

The cooling directly reduces the dark current and therefore lowers the base noise levels. In low-light conditions the extra cooling may certainly make a difference. Since the load of the image sensor itself is not huge, and given the small space to work in, TECs are an excellent solution to improve the image quality of CMOS sensors. The Bresser camera uses a 2-stage TEC that is able to take the sensor to about 40°C below ambient temperature! This however costs a bit of energy, which is why the Bresser camera requires a 36W power supply.
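A common rule of thumb (the exact doubling step varies per sensor) says that dark current roughly doubles for every ~6°C of temperature increase, so cooling by 40°C buys you a lot:

```python
def dark_current_reduction(delta_t_c: float, doubling_step_c: float = 6.0) -> float:
    """Factor by which dark current drops when cooling by delta_t_c degrees."""
    return 2 ** (delta_t_c / doubling_step_c)

# The Bresser camera cools the sensor ~40 degrees below ambient:
print(f"~{dark_current_reduction(40):.0f}x less dark current")   # -> roughly 100x
```
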
Going for cheaper

While the Bresser ASTRO deep-sky camera looks like a good candidate for low-light planetary nebula pictures, it also takes a huge bite out of your wallet. It has a price tag that not many are willing to pay. There are however far cheaper variants. Take the Bresser Full HD DeepSky camera, which has the following specs:
- Sony Starvis IMX290 color sensor
- FPS: 120
- Size: 1/2.8 inch
- Pixel size: 2.9µm x 2.9µm
- Resolution: 2.1M (1936 x 1096)
- Shutter: rolling
- Illumination: back-illuminated
- Technology: CMOS
The IMX290 has been at the heart of many low-priced cameras intended for astrophotography. For example, there are also the Player One Mars-M and the Svbony SV305, of which the latter is probably the cheapest one you’ll find. Let’s compare it to the Bresser ASTRO camera that comes with the Sony IMX432 sensor. We see that the IMX290 slightly bumps the resolution, although nothing noteworthy. The IMX432 is undoubtedly much more sensitive than the IMX290 due to its vastly larger pixels, although the IMX290 compensates a bit by using back-illumination. Maybe the biggest difference of all is that this so-called cheaper astrophotography cam can be found for less than € 300. That is probably a lot closer to most amateurs’ budgets.
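As a back-of-the-envelope check on that sensitivity claim (pixel area only, ignoring quantum efficiency, the BSI advantage, noise and so on):

```python
imx432_pitch_um = 9.0
imx290_pitch_um = 2.9

ratio = (imx432_pitch_um / imx290_pitch_um) ** 2
print(f"IMX432 pixels collect ~{ratio:.1f}x the light of IMX290 pixels")
# -> ~9.6x, before the IMX290's back-illumination wins some of that back
```
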
Camera selector
If you’re now convinced to build your own astro cam, or you just want to have a look at what sensors are available on the market, you may want to look at e-con Systems’ camera sensor selector app.
Conclusive thoughts
We’re nearing the end of this article. The idea was to give you a bit more understanding of the mysteries behind image sensors and their often immensely expensive camera hosts. As we’ve seen there has been a lot of development over the years; camera sensors are still improving, and sensors have been made for all sorts of goals like miniature cameras, astro cams, DSLRs, video cameras, computer vision cameras, and so forth. It’s really not all about megapixels, but neither is it all about having large pixels; there’s a balance to be struck. There isn’t one sensor that suits it all: depending on your goal you’ll end up with a specific group of sensor types manufactured for that specific use. In particular we looked at how the Sony Exmor IMX432 is very well suited for deep-sky low-light photography due to its insanely large pixels and low noise levels. But it comes with the trade-off of paying premium prices and settling for lower resolutions. With image sensors there are always pros and cons: good things but also trade-offs to be made. The sensors keep on getting better, but unlike the rest of the semiconductor industry they don’t show a rapid growth in transistor count every few years. A decent amount of the circuitry is still analog, which holds back the rapid pace of improvement that we see with typical CPUs and other computing semiconductors. Moore’s law doesn’t apply here.
I hope you found some useful info here. Although we touched on a few subjects to give you a basic understanding, there is still lots more to discover about image sensors. Google is your friend. In the next chapter I’ll finally put the theory into practice, stay tuned!