This is part 3 of my personal dive into astrophotography. In part 1 we explored various telescope types and in part 2 we learned the basics of what makes an image sensor suited for a certain type of job. In this part we’ll look into some of the results I obtained by putting all of that theory into practice.
Telescope of choice
Part 1 covered how I got to the Sky-Watcher Classic 150P telescope.

This telescope has a 150mm/6 inch aperture and a 1200mm focal length. Maximum useful magnification is x300. The actual level of magnification depends on your eyepiece; remember that we can calculate it as follows:
magnification power = telescope focal length / eyepiece focal length
I have 3 eyepieces that I can use:
- 25mm: x48 magnification
- 10mm: x120 magnification
- SVBONY 6mm: x200 magnification (purchased afterwards)
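As a quick sanity check, the three magnifications above drop straight out of the formula. A minimal Python sketch:

```python
# Magnification = telescope focal length / eyepiece focal length.
TELESCOPE_FOCAL_LENGTH_MM = 1200  # Sky-Watcher Classic 150P

def magnification(eyepiece_focal_length_mm: float) -> float:
    """Return the magnification a given eyepiece yields on this telescope."""
    return TELESCOPE_FOCAL_LENGTH_MM / eyepiece_focal_length_mm

for eyepiece in (25, 10, 6):
    print(f"{eyepiece} mm eyepiece -> x{magnification(eyepiece):.0f}")
# prints x48, x120 and x200, matching the list above
```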
Camera
For my first astro shots I’ll be using my smart phone, a Samsung Galaxy S20 FE, as the camera. It won’t give the best results but it comes for free as I already own the device. Here are some specs:

Primary/main camera:
- Sony Exmor RS IMX555
- 12MP
- sensor size: 1/1.76″
- 1.8μm pixels
- f/1.8 aperture lens
- focal length: 26mm
- Night Mode
- 30x Space Zoom
- field-of-view: 79°
Ultra-wide camera:
- 12MP
- sensor size: 1/1.3″
- 1.12μm pixels
- f/2.2 aperture lens
- focal length: 13mm
- field-of-view: 123°
Telelens camera
- 8MP
- sensor size: 1/4.4″
- 1.0μm pixels
- f/2.4 aperture lens, 3x optical zoom
- focal length: 76mm
- field-of-view: 32°
Front camera
- 32MP
- 4×4 pixel binning
- field-of-view: 80°
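As a side note, the listed fields of view can be sanity-checked against the focal lengths, assuming those are 35 mm equivalent values (the 79° figure for the main camera suggests they are): the diagonal FoV is 2·atan(d / 2f), with d ≈ 43.3 mm being the diagonal of a full 35 mm frame. A small Python sketch under that assumption:

```python
import math

FULL_FRAME_DIAGONAL_MM = 43.27  # diagonal of a 36 x 24 mm frame

def diagonal_fov_deg(equiv_focal_length_mm: float) -> float:
    """Diagonal field of view for a 35 mm equivalent focal length."""
    return 2 * math.degrees(
        math.atan(FULL_FRAME_DIAGONAL_MM / (2 * equiv_focal_length_mm)))

for name, f in [("main", 26), ("ultra-wide", 13), ("tele", 76)]:
    print(f"{name}: {diagonal_fov_deg(f):.0f} deg")
```

The main (~80°) and tele (~32°) cameras line up nicely with the quoted specs; the ultra-wide’s quoted 123° is wider than this simple model predicts, possibly because it is specified before distortion correction.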
The exact camera sensors weren’t officially revealed by Samsung, but from searching around I found a source claiming that at least for the main camera the Sony Exmor RS IMX555 is used. While obviously not the best candidate for astrophotography, it’s still quite a decent sensor for everyday use and even late-evening shots. I couldn’t however find any further info on that image sensor.
First shot
The results of my very first shots through this telescope with the Samsung Galaxy S20 FE were not great (I took them holding the phone by hand) but at least promising enough to suggest better was to come. Here is that shot:

Using a smart phone adapter
Given the improved quality of smart phone cameras, special smart phone holders have appeared that let you mount your phone onto the telescope’s eyepiece so that you can get a steady shot. You can find them very cheap, which made them a no-brainer for me to try out.

Here is a picture I took of the moon during daylight:

While not perfectly sharp, there are some nice details to see here, such as the prominent Tycho crater near the top of the picture. Below is another one that I took several months later:

After that the weather turned bad and I had to wait a couple of weeks before I had time again to sit out at night. This time Jupiter was on my radar. Here are some animated GIFs I generated from videos I shot of Jupiter and four of its moons. The videos have been cropped (and therefore enlarged a bit), reduced in length, and some other tweaks were applied to keep the animated GIFs acceptable in size.
The first GIF is from the first video, made in the standard video mode of the Samsung camera app. You can see Jupiter and four of its moons. From left to right: Callisto, Ganymede, Europa, Jupiter, Io.

Jupiter unfortunately reflects too much light compared to the black background and the app’s default settings couldn’t cope with that very well. To be fair, to the naked eye Jupiter also looks a bit too bright, though not as badly as in the above animation; if you look closely enough you can spot the cloud belts. Then I found out there is also a Pro mode available. The second video was also made with the Samsung camera app, but this time in Professional video mode. This mode lets you configure the ISO, ‘shutter speed’, focus, white balance and zoom level. I went for zoom level 3, ISO 100 and a shutter speed of 1/30. The white balance was set to 4400K and focus manually to 8. From left to right: Ganymede, Europa, Jupiter, Io.

The Pro mode works out pretty well. While on the smart phone screen Jupiter still looks pretty small, after editing it turned out as above, which is pretty OK for the inexpensive setup I’m using. The additional zoom of the Samsung smart phone makes Jupiter appear larger than I see it through the telescope. It also gives a better view of those cloud belts that Jupiter is so famous for.
Here is a picture I took using the Samsung camera app in Professional mode. Settings: ISO 50, shutter 1/45, and a bit of zoom (3 or 4). Left: Jupiter, right: Io.

I slightly bumped the zoom level on the smart phone to get to the above result. As you can see Jupiter appears bigger, but I don’t feel we’re getting more detail. I’m not sure what kind of zoom the camera actually uses, but I’m guessing it’s some kind of digital zoom. To give you some idea about the distance of this object…

I also gave the Night mode a try. It’s kind of a counterpart to Google’s astro mode. Unfortunately this mode has issues with the fast pace at which Jupiter moves across the field of view. Astro mode works great for still scenes, which is far from what happens when you mount your smart phone on top of a telescope. A failed effort, but nonetheless something I wanted to share. Maybe this could have turned out better if I had had a good equatorial mount to compensate for Earth’s rotation.

For me the Professional mode of the Samsung camera app that I used in the earlier picture worked out quite well and gave me better results than I first anticipated.
Next challenge: the stars
Here is a shot in Professional mode of the Pleiades. The stars look quite dim; we didn’t really collect enough light to make them stand out in the picture. This is already at ISO 3200 and a 1/10s shutter. It’s far from the pictures you see from NASA!

Increasing the shutter time immediately results in a less sharp image, as stars move fast across the field of view. I also gave Google’s astro mode a try, just to see if it could cope with the movement of the stars. Unfortunately (but not unexpectedly) it could not, and the star trails are even larger here:

So I’m guessing that without a motorized EQ mount and/or a far more sensitive camera it’s not going to get much better than shooting ‘nearby’ celestial bodies such as the moon and planets.
Other issues
Another difficulty I ran into is that pointing the telescope at deep-sky objects is not an easy task when you have your smart phone hooked up. For the moon or bright planets such as Jupiter or Saturn you can easily get a very good indication using a well-aligned finder scope and then fine-tune the last few bits using the feedback on the smart phone screen. For anything darker than that it’s tricky, and sometimes even trial and error until you get a good shot of the target object. You’re no longer observing directly through the telescope but only have the camera’s feedback, and cameras are mostly not sensitive enough to show even most of the stars at sub-second exposure times: often the smart phone screen shows nothing but its usual GUI elements! This is why motorized GoTo mounts are so handy: once aligned, you basically command the telescope to point at a given object in the sky and you’re good to go.
Low-end off-the-shelf astro cams
Off-the-shelf astro cams would definitely be an upgrade compared to the Samsung Galaxy S20 FE’s IMX555. The low-end astro cams are low in cost, but I’m not entirely convinced that they’ll be that much more sensitive; they may still not be good enough to shoot deep-sky objects through the telescope at short shutter times. Maybe the high-end cams can, but then again they’re not within my budget. One such more affordable off-the-shelf astro cam is the Player One Mars-C Color camera, which can be found for less than € 250 nowadays.

What about DIY?
Even though the price of the Mars-C is not out of this world, it’s still beyond the budget I’d be willing to spend. In the end it’s just a very experimental hobby thing… So what’s available on the DIY market? Well, the IoT market has been flooded with cheap CSI and USB cams that you can hook up to your favorite hacker board. These cams are dirt cheap but certainly not of the best quality. In more recent years the Raspberry Pi Foundation has made some decent DIY cams available:
| | Samsung Galaxy S20 FE | Raspberry Pi High Quality camera | Raspberry Pi Global Shutter camera | ArduCam SKU 2MP IMX462 (B0444) |
| --- | --- | --- | --- | --- |
| sensor | Sony Exmor RS IMX555 | Sony Exmor RS IMX477R | Sony Pregius IMX296LQR-C | Sony Starvis IMX462 |
| sensor size | 14.4mm (1/1.76″) | 7.9mm (1/2.3″) | 6.3mm (1/2.9″) | 6.46mm (1/2.8″) |
| resolution | 12 MP | 12.3 MP | 1.58 MP | 2 MP |
| pixel size | 1.8μm | 1.55μm | 3.45μm | 2.9μm |
| illumination | back | back | front | back |
| shutter | ? | rolling | global | rolling |
| mono/color | color | color | color | color |
| application | consumer cameras | consumer cameras | embedded vision | security cameras |
I’m not sure if it’s a coincidence, but all cameras here seem to carry Sony sensors. Sony Pregius sensors are focused on embedded vision and therefore contain a global shutter; they also perform quite well under low-light conditions. The latest variant (4th generation) is the Pregius “S”, which even features back-illuminated (BSI) CMOS. While the IMX296LQR-C does not yet contain that technology, it still comes with pretty large pixels, hence why it also performs relatively well under low light. This is also the specific application the Raspberry Pi Foundation had in mind.

That aside, there are also the Sony Starvis and, more recently, Starvis 2 series. These sensors both come with BSI and also perform very well in near-infrared (NIR) light conditions, which gives them a clear advantage over the traditional Pregius sensors for night-sky observations. The Starvis series is aimed at applications such as security cameras, but is also often found in astro cams. The recently introduced Starvis 2 features optimised pixel structures and therefore has a higher dynamic range and higher sensitivity than the previous Starvis generation. Sensors like the Sony Starvis 2 IMX585 are much sought after within the astro community.

The Sony Exmor series has been evolving for more than a decade already. The Exmor series focuses on low-noise, high-quality image sensors. Actually, the Starvis series is a subset of the Exmor R series (the fifth-gen Exmor), where the R suffix stands for back-illuminated. Exmor R sensors have been built since 2008. Exmor RS is the next iteration of the Exmor sensor, which on top of BSI brings improved performance in the NIR spectrum thanks to the new stacked image sensor technology, hence the S suffix… Exmor RS was announced in 2012.
While Exmor RS is great for a wide range of applications without specializing in any particular area, the Starvis series is optimized for the low-light conditions that security cameras often have to deal with.
For further details I recommend the following pages on the e-con Systems website; they are specialists in computer vision:
- Sony Exmor vs STARVIS sensors; a detailed comparison
- Sony STARVIS vs. Sony Pregius: The ultimate image sensor comparison


I’ve been looking for easily available Starvis sensors on the internet. It seems that, as far as consumers go, the Starvis 2 series cannot yet be found as a dirt-cheap board camera. That’s why I added the slightly older IMX462 sensor to my comparison. It’s a 1st-gen Starvis sensor that’s still relatively close to the Starvis 2 series in performance. Its pixels are about 2.5x to 3x bigger in area than those of the Samsung S20 FE’s main camera, which gives a rough indication of how much more light falls onto each pixel. It can for example be found in the Player One Mars-C Color camera mentioned a bit earlier.
DIY test: Sony Starvis IMX462
On Amazon I found a camera board similar to the ArduCam IMX462, the Chinese-made Innomaker CAM-MIPI462RAW. This board camera can be found for roughly € 30, which is cheap enough for my adventures.

Innomaker IMX462 sensor
The CAM-MIPI462RAW is advertised as a Raspberry Pi camera and uses the CSI connector to hook into the Pi. It seems the IMX290 driver can be used to work with this camera board since it has matching registers. I hooked it up to an old Raspberry Pi 1B I still had lying around unused.

Edit the boot config (/boot/config.txt) as follows:
#Camera
dtoverlay=imx462,clock-frequency=74250000
We can use libcamera to work with the image sensor. To check whether the sensor has been probed:
$ libcamera-vid --list-cameras
Available cameras
-----------------
0 : imx290 [1920x1080] (/base/soc/i2c0mux/i2c@1/imx290@1a)
Modes: 'SRGGB10_CSI2P' : 1280x720 [60.00 fps - (320, 180)/1280x720 crop]
1920x1080 [60.00 fps - (0, 0)/1920x1080 crop]
'SRGGB12_CSI2P' : 1280x720 [60.00 fps - (320, 180)/1280x720 crop]
1920x1080 [60.00 fps - (0, 0)/1920x1080 crop]
We can then use libcamera-still to capture still images in .jpeg and .dng format. I ran a few tests to see how the new sensor lines up against other sensors. During these tests I kept the shutter speed at 1/10s (--shutter takes microseconds, so 100000). Here is the command I used:
libcamera-still -n -o "pic.jpeg" --gain 0 --awbgains 0,0 --immediate --shutter=100000
First I took a shot using an IMX219 sensor. I haven’t addressed this sensor so far, but here are some specs: rolling shutter, 3280 x 2464 (8MP resolution), 1/4″ sensor format, 1.12μm pixel size.

Roughly the same shot, same lighting, and same moment but now using the IMX462:

It’s remarkable how much brighter the IMX462 result clearly is. A lot more detail is exposed in the dark. It’s no surprise given the IMX219 is of an entirely different sensor category, and while it theoretically outperforms the IMX462 in resolution, as you understand by now that isn’t of much use in such low-light conditions.
I repeated that shot, but now using my Samsung Galaxy S20 FE in Pro camera mode, with that same 1/10 shutter speed and ISO 50.

And again with ISO 400:

So as you can see, the IMX462 is quite a decent low-light cam compared to some of the other options I have available. But as I also mentioned, it may not be a big step forward compared to the more than decent IMX555 sensor found in the Samsung Galaxy S20 FE. Considering that capturing Jupiter can already be ticked off the list, and I’m still not convinced the IMX462 is up to the task of deep-space imaging, I guess we’re going to need more tricks to get any decent result out of this.
Astro software
An area we haven’t thoroughly touched on, but that certainly deserves more attention, is software. One implementation that’s widely available and used is the astro feature found on Google and Samsung smartphones. While the implementations may differ slightly, the basics will mostly be the same. So what dark secrets have they coded into their cameras? Well, in fact Google is glad to explain a thing or two about their astrophotography implementation. I’d strongly encourage you to go through their 2019 article about Night Sight on Pixel Phones.
The idea: achieve long-exposure shots by stacking multiple semi-long exposure shots together. Before you praise the smart folks at Google for their wonderful idea: it isn’t entirely new to the real astro crowd. The image stacking technique had been in use for years before Google started adding it to their smartphone software. The technique avoids very long exposures, since anything more than about 15s will form star trails. With sub-15s exposure times the stars will mostly remain still. If you take 15 of such pictures and stack them in software, you achieve a virtual exposure time of 150s, which will show you more details of the night sky than you can see with the naked eye. The Google software even allows collecting light for up to 4 minutes.

The software needs to stack the different images together, but there is more to it than meets the eye. Across the different images the stars will still have moved a bit due to the Earth’s rotation, so the stacking software needs to align all images exactly. Trickier still: while the night sky moves across the lens throughout the night, foreground objects such as the landscape, houses and trees do not. That’s where the Night Sight mode on Pixel phones really shines. It would be really neat if we could just plug our own camera into our smart phone and let the software do its thing; driver-wise this is obviously not something we can expect at this moment.

Image stacking also helps reduce noise. There is a lot of noise in images, and it can become very visible when you’re shooting against a pitch-dark sky. The folks at PetaPixel wrote a very short but clear article on why image stacking helps reduce noise and recover signal. The important part is that you stack different pictures and not the same picture: in essence it’s those variations in noise that get averaged out and therefore improve image quality.
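To see why stacking different frames (rather than the same frame) helps, here is a small numpy simulation with synthetic data, not real astro frames: averaging N frames with independent noise reduces the noise standard deviation by roughly a factor of √N.

```python
import numpy as np

rng = np.random.default_rng(42)

# A synthetic "true" night-sky frame: dark background with a few bright stars.
true_frame = np.zeros((100, 100))
true_frame[20, 30] = true_frame[50, 70] = true_frame[80, 10] = 200.0

def shoot(noise_sigma: float = 25.0) -> np.ndarray:
    """Simulate one exposure: the true scene plus independent sensor noise."""
    return true_frame + rng.normal(0.0, noise_sigma, true_frame.shape)

frames = [shoot() for _ in range(16)]
stacked = np.mean(frames, axis=0)  # the stacking step: a plain per-pixel average

single_noise = np.std(frames[0] - true_frame)
stacked_noise = np.std(stacked - true_frame)
print(f"noise single frame: {single_noise:.1f}, stacked (16 frames): {stacked_noise:.1f}")
# With 16 frames the noise drops by about a factor of 4 (sqrt(16)).
```

Real stacking software does the same averaging, but only after aligning the frames on the stars.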
Here is an example I found on the internet of image stacking at work:

A quick mention along the way: Electronically Assisted Astronomy (EAA) is a form of astronomy where celestial objects are not observed through an eyepiece but indirectly, by means of a camera and stacking software; basically what we described above. Some in-depth info can be found on the skiesandscopes.com website.
Now on to the software itself… what’s the stuff to get? Spoiler alert: it’s not Photoshop! That actually came as a bit of a surprise. Well, of course you can use it, but dedicated astro tools focus on automating the things that matter for astro shots. There are so many different astro tools around that it’s actually not easy to decide which one will work for you. Some are open source and free, others sit behind a paywall. Here is a small overview:
| Name | SharpCap | FireCapture | Siril | DeepSkyStacker | Open Astro Project | ASTAP |
| --- | --- | --- | --- | --- | --- | --- |
| Description | Planetary, lunar, solar, deep sky and EAA. Stacking. Wide camera support. | Planetary capturing, broad astro cam support, feature-full. No stacking | Editing, stacking, live stacking. Went past v1 status | Primarily focused on image stacking | Planetary imaging, development seems to have dried up | Stacking and plate solving for deep-sky imaging. Feature-full |
| Open source | no | no | yes | yes | yes | yes |
| Platforms | Windows 7 up to 11 | Windows, Mac, Linux | Windows, Mac, Linux | Windows | macOS, Linux | Windows, macOS, Linux |
| Price | normal: free, pro: £ 12 / year | free | free | free | free | free |
This is just a small fragment of the many tools out there. PixInsight, another tool worth mentioning, could easily have been added here, but unfortunately comes with a fee of around € 350 (incl. VAT), which is out of my budget. From the above list my first filter is that I want native Linux support; that already rules out half of the software out there. From the remaining list I was mostly charmed by Siril. The website is refreshingly modern, and from watching a demo on YouTube it also seemed to me that image stacking was just a matter of toggling a few buttons. What I also find interesting is that it can do the stacking live as you drop pictures into the working folder. This sort of brings it closer to Google’s astro mode for Pixel phones. FireCapture and ASTAP are two other solutions that are popular on Linux systems. They’re both integrated into Astroberry.
Stacking video
Video stacking is another technique used to improve image quality, but I only discovered it two months after I first started typing the first words of this article. It’s much like image stacking, but using a video as the source of information. This works out pretty well and is sometimes preferred over image stacking. As some people explain it well on the dpreview.com forums:
“What is the pros and cons of say stacking 5 mins of video of Saturn agains say lots of 5 second photos of Saturn stacked? I’ve never stacked before but just seen a video of someone who stacked a video rather than photos. Thanks .”
“Most planetary cams are shooting at 60-120 FPS. Multiply that by 5 minutes, and then have software that auto- detects the sharpness of each frame and only chooses the very best 5-10%.”
“Dave, you do not want to stack 5 sec pictures of saturn… ever. No way you will get a sharp picture that way. As described by swim, people use video, and ‘lucky’ imaging. The idea is to shoot many frames, 1000s, as in 30frames per second or higher After a few 2 minute videos, you hope you got lucky, and some of the frames are sharp. Atmospheric turbulence is the enemy, and shooting 1000s of frames, increases the odds that some frames were captured in a calmer part of the turbulence. The software analyzes the frames pics the best ones and makes the stack. Almost everyone is doing the planets, and close ups of the moon this way, I am hoping to try it myself, as soon as the planets are out at night. Most will recommend an astro camera, but I will try with my Canon 60D, or 7D mkii. Tons of good info out there, just google lucky imaging.”
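The ‘lucky imaging’ selection described in these posts can be sketched with a simple sharpness metric: score every frame by the variance of its Laplacian (sharp frames have strong edges) and keep only the best few percent. A numpy-only toy sketch, with a crude box blur standing in for atmospheric turbulence; real tools use more refined quality metrics:

```python
import numpy as np

def laplacian_variance(frame: np.ndarray) -> float:
    """Sharpness score: variance of a discrete 4-neighbour Laplacian."""
    lap = (-4 * frame[1:-1, 1:-1]
           + frame[:-2, 1:-1] + frame[2:, 1:-1]
           + frame[1:-1, :-2] + frame[1:-1, 2:])
    return float(lap.var())

def select_best(frames, keep_fraction=0.1):
    """Keep only the sharpest fraction of frames, as lucky-imaging tools do."""
    scored = sorted(frames, key=laplacian_variance, reverse=True)
    keep = max(1, int(len(frames) * keep_fraction))
    return scored[:keep]

def box_blur(frame: np.ndarray) -> np.ndarray:
    """Crude stand-in for a moment of bad seeing: 5-point average blur."""
    return (frame + np.roll(frame, 1, 0) + np.roll(frame, -1, 0)
            + np.roll(frame, 1, 1) + np.roll(frame, -1, 1)) / 5.0

# Toy demo: one sharp frame (a point-like star) among blurred copies.
sharp = np.zeros((64, 64))
sharp[32, 32] = 255.0
frames = [box_blur(sharp), sharp, box_blur(box_blur(sharp))]
best = select_best(frames, keep_fraction=0.34)  # keeps the sharp frame
```

In a real pipeline you would score thousands of video frames this way and feed only the top few percent to the stacker.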
The image below shows the atmospheric turbulence at work:

Two programs you can use for recording video files are FireCapture and SharpCap. Make sure to avoid compressed video file formats.
Personally I won’t be focusing on video stacking this time, but it’s certainly a technique worth knowing about, and maybe I’ll give it a try in the future; I wanted to at least mention it here so that you can start exploring by yourselves.
Live image stacking
So with Siril installed I got to perform my first tests. I made a shell script that eases the process of remotely triggering libcamera to take raw images in .dng format and then secure-copy them over the network to my host PC. I started Siril and set it up for live stacking, monitoring the output dir of my script.
-------------[pi@192.168.0.205]-----------------
1. single shot
2. interval shots
3. start camera stream
4. clean remote disk
5. setup camera
6. exit
Select option:
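The script itself is nothing fancy: build a libcamera-still command, run it on the Pi over SSH, and scp the result back. A rough Python sketch of the same idea (the host matches my Pi, but the RAM-disk path, local folder and helper names are made up for illustration):

```python
import subprocess

REMOTE = "pi@192.168.0.205"   # the Raspberry Pi running the camera
REMOTE_DIR = "/dev/shm"       # illustrative: store images in RAM to speed things up
LOCAL_DIR = "./incoming"      # the folder Siril live stacking monitors

def capture_command(name: str, shutter_us: int, gain: float = 1.0) -> list[str]:
    """Build the libcamera-still invocation for one raw (.dng) exposure."""
    return ["libcamera-still", "-n", "--raw", "--immediate",
            f"--shutter={shutter_us}", f"--gain={gain}",
            "-o", f"{REMOTE_DIR}/{name}.jpg"]

def interval_shots(count: int, shutter_us: int) -> None:
    """Take `count` exposures remotely, then copy each one back for stacking."""
    for i in range(count):
        name = f"shot{i:03d}"
        subprocess.run(["ssh", REMOTE] + capture_command(name, shutter_us), check=True)
        subprocess.run(["scp", f"{REMOTE}:{REMOTE_DIR}/{name}.dng", LOCAL_DIR], check=True)
```

With --raw, libcamera-still writes the .dng next to the .jpeg, which is what gets copied into Siril’s watch folder.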
But then I ran into a big bummer: the software wasn’t accepting any of my pictures. After spending the evening trying different image formats and sources I found out it was a bug in the Siril software. I filed a bug report, but afterwards spent some time fixing the issue myself and giving the solution back to the community. My first attempts at stacking were disastrous. While it all seemed so simple in the video, I couldn’t get very pleasing results out of it. Maybe it was the weather… it had been pouring rain for weeks, so I was forced to test stacking indoors and on semi-cloudy nights. The lens also has quite a bit of barrel distortion, which may confuse the alignment algorithm; more on that later. I tried offline stacking by reading the docs but still couldn’t figure out how to get a decent output. Finally I set up the camera on a nearly clean lookout with almost nothing in the pictures but stars and some slightly visible clouds. I put the RPI on a tripod this time:
Using interval shots I took 30 pictures with a 10s exposure each, 300s or 5 minutes in total. I’m not sure how Siril gets to a total of 9 minutes of cumulative exposure…

Here is what one of those 30 individual images looks like:

You can clearly spot the cloud in this picture, but some stars can be recognized as well.
And here is what came out of the stacking process:

So what we see is that slightly more stars are visible, and they also stand out a bit better. The clouds that slightly blocked our view on roughly all pictures are mostly gone, and the space between the stars also contains less noise. At the edges you notice some star trails being formed by the alignment process; I guess not correcting the lens distortion contributes to that. Also, having parts of the house and trees in the picture is certainly not a good idea, as they come out all washed out.
Lens (barrel) distortion
The camera lens is probably too wide for my application. The current lens has the following specs: FoV (diagonal) = 148 degrees. Barrel distortion gets more and more noticeable as the FoV increases, and in my case it’s very noticeable!
Compared to the Samsung Galaxy S20 FE this is even wider than what Samsung labels as their ultra-wide camera. Actually, when we compare it to the main camera on the S20 FE, which is closer to an 80° field of view, I clearly made a mistake with this lens. It may be OK for close-up shots in applications such as a smart doorbell, but I instead want to capture preferably smaller parts of the night sky, so I may want to change to a field of view closer to what a tele lens offers.

I’m not entirely sure the stacking process can cope well with this type of distortion. Barrel distortion can be corrected in software like GIMP, Photoshop, OpenCV and many more, or sometimes even through dedicated hardware (a DSP or ISP). As you understand, in both cases careful tuning or calibration must be performed. Raspberry Pis don’t come with hardware support for lens correction; doing the correction on the CPU is an option, but it will take some time, and you also risk some loss of detail. One workaround is to simply go for a narrower lens. The ArduCam IMX462 for example comes with a horizontal FoV of 92 degrees, or you could go one step further into the realm of tele lenses. The latter are however not widely available for the S-mount (also referred to as M12 mount) of my camera.
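For completeness, here is what the software correction boils down to: for every output pixel, compute where it came from in the distorted image using a radial model, then resample. A numpy-only sketch with a one-term radial model; the coefficient k1 is made up here, and in a real correction its sign and magnitude would come from a calibration (e.g. an OpenCV chessboard calibration):

```python
import numpy as np

def undistort(img: np.ndarray, k1: float) -> np.ndarray:
    """Nearest-neighbour remap using a one-term radial distortion model.

    For each output pixel at normalized radius r, the source pixel is
    sampled at radius r * (1 + k1 * r^2); k1 = 0 is the identity.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    y, x = np.mgrid[0:h, 0:w]
    xn, yn = (x - cx) / cx, (y - cy) / cy      # normalize coords to [-1, 1]
    scale = 1.0 + k1 * (xn**2 + yn**2)         # radial remapping factor
    src_x = np.clip((xn * scale * cx + cx).round().astype(int), 0, w - 1)
    src_y = np.clip((yn * scale * cy + cy).round().astype(int), 0, h - 1)
    return img[src_y, src_x]

# Illustrative call with a made-up coefficient; a real k1 comes from calibration.
corrected = undistort(np.ones((120, 160)), k1=0.08)
```

Doing this per frame on a Raspberry Pi 1B CPU is exactly the kind of extra processing time mentioned above, which is why hardware ISPs or a narrower lens are attractive alternatives.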
Capturing speed
Aside from that, I still would have expected a better end result. One other thing is that the RPI 1B is really slow at capturing images. A failed command already takes 5s. A 10ms exposure takes 7.5 seconds to capture, a 1s exposure already takes 14 seconds, and for a 10s exposure the camera takes more than a whole minute. Therefore, taking the batch of 30 images took about half an hour. Luckily this happens unattended. During that time span the stars have already moved quite a bit, where ideally the whole batch should have taken only 5 minutes or so. Keeping the time span smaller would also lower the effort the software needs to get everything stacked properly. I did a small optimization by storing the images in RAM, but that was only a small bonus. A faster Raspberry Pi could help here, but a hardware ISP could also be useful to speed up the image processing. Both things I unfortunately don’t have. Some info about libcamera ISP integrations can be found here: https://starkfan007.github.io/Gsoc-summit-work.

Many of the processing steps that the CPU performs in my case can also be performed by an ISP. The new Raspberry Pi 5 already performs part of the pipeline in a small ISP-like preprocessor based on their RP1 chip.

Foreground vs background
Furthermore, it would tremendously help if the stacking software were able to differentiate the foreground from the background. Things in the foreground don’t move at all and require a different alignment than things in the back. It would help if we could tell the stacking software which part of the image requires star alignment and which does not. This is something Google’s AI is trained for pretty well, and it leads to very good end results. Using a tele lens or a telescope with a narrow field of view will also help.
Camera tuning
As far as I understand, we’re also relying on a libcamera tuning for the IMX290, which may slightly differ from the IMX462. The camera calibration process is documented quite well in the official Raspberry Pi Camera Guide, but it also takes some money and even more time, both of which I’m not willing to spend on it. Good camera calibration would lead to better image quality.
Image noise
When I look at one of the original pictures I fed into the stacking process, I also notice quite a bit of noise. Here it is:

From the stacking end result we saw this gets filtered out pretty well. I’m still surprised we get this amount of noise, though; I would have expected better from a camera sensor that claims to be “low noise, high sensitivity”. Hardware design does play a role here: sensors are sensitive to ripple on the power supply, and a proper ripple filter can always help improve image quality. There is already a small amount of filtering on the back of the sensor board though, so I’m guessing there isn’t much to gain in this area.

Cooling
Camera sensors are sensitive to temperature: thermal noise drops and image quality improves once you start cooling the camera. You can already see an effect when you put the camera outside during freezing cold nights.


One way cameras are often cooled is by using a thermoelectric cooler (TEC):


TECs come in various sizes and have a wide range of operating voltages and cooling powers, making them very suitable for cooling CMOS sensors. The downside however is that TECs by themselves are not very efficient compared to phase-change cooling: the hot side of the TEC has to be cooled properly, dealing with both the sensor’s heat and the TEC’s own heat. If one TEC does not suit your purpose you can also stack TECs, but know that this only makes the whole thing even harder to control. And then there is also moisture… Once you get below the dew point, condensation will form quite fast in various places in your camera body, so that’s something you need to take into account. Although I do have some electronics available and also TECs lying around unused, for now I’m going to avoid it, since I’ll probably not be able to take very long exposure shots anyway. If you’re a DIYer like me I can recommend the following web pages:
- https://www.webastro.net/forums/topic/175929-refroidissement-diy-peltier-pour-zwo-asi/
- https://mallincam.tripod.com/id22.html
- https://www.cloudynights.com/topic/560368-risingtech-cheap-imx224-imx290-icx829-alternatives/page-4
- https://www.cloudynights.com/topic/876214-active-cooling-kit-for-zwo-asi-uncooled-cameras/
- https://rk.edu.pl/en/simple-and-non-invasive-cooling-asi120m-camera/
- https://www.thingiverse.com/thing:1014805
Lens mountings
Lenses come in all sorts and sizes, and the same can be said about cameras. Hence there is no universal one-size-fits-all mounting that makes everything compatible. However, things have kind of standardised over the years and we now have mountings that are commonly used across different brands, making everything a bit more interchangeable. Here are a few widely used mounting options that I need to take into account.
- M12 (S-mount): this is the smallest lens mount option and therefore also the cheapest. This mounting option is commonly used on various camera boards and is particularly interesting for webcams, security cams and such, because the mounting and lenses are compact.

- CS and C: roughly the same mounting but with a different flange focal distance. Used with bigger and higher-quality lenses.


- 1.25″: the typical telescope eyepiece barrel size. This is the one I’m going to need to adapt to when I fit the Raspberry Pi camera onto my telescope.

With that we now have an understanding of how to fit the Pi camera onto the telescope: an M12 to 1.25″ adapter. We could print one ourselves; however, the cost would almost always match that of the cheap adapters you can find on Amazon. Along the way I also learned that the material of the adapter plays an important role: you don’t want material that’s too reflective. That’s another reason to go for an off-the-shelf adapter. I specifically went for the EBTOOLS 1.25″ M12 x 0.5 T Ring Telescope Mount Adapter:


Camera board with M12 adapter fixed to Raspberry Pi:

Telescope mounted pics
Well, we know the IMX462 is a bit more sensitive to light than what I obtained with the Samsung S20’s main camera, but nonetheless we’re not going to be taking long exposure shots on the telescope, since the IMX462 is also not sensitive enough to capture stars and nebulas at fast shutter speeds. Due to the bad weather we’d been having for months, it took me a long time to finally go outside with the mounted IMX462. Finally, on a cold winter night, I had my first play with the new camera, and due to the absence of the moon I gave Jupiter a shot right away.

Ouch, that's a horrible picture! I don't know what went wrong here, but I found it impossible to get the focus right. In video mode it was as if Jupiter was on fire, with artifacts all over the place. Okay, I could lower the exposure a bit, I agree, but optically things don't look right either.
After spending some time checking whether I could get anything half decent out of the camera during daylight, I went back to give astro shots another chance. This time the moon was up, and it's a far easier target to shoot as it only requires very short exposures.

Another attempt, this time with the focus tweaked a bit more and recording in RAW format:

Okay, this is starting to look like something. Maybe not all that nice: the picture is still a bit unsharp, even though I really did my best to get it focused well. There are also a lot of visual artifacts in the image; notice the horizontal lines in the bottom corner, especially in the first attempt. Here is another attempt at Jupiter:

You can clearly see that the image is sharper than the first attempt. I also increased the shutter speed a bit to reduce the overexposure of Jupiter's surface. Pushing the shutter further than in the picture above didn't result in a better picture, however.
Next I gave the Orion Nebula (M42) a try. With the focus again not entirely correct it's all smudgy, but you can already see some contours of the nebula.

500 ms is really about the maximum I can set the shutter to before star-trail artifacts become visible. In order to capture anything of the nebula, the gain had to be increased to 20 or above. There is a lot of visible horizontal banding noise (HBN) in this image, but we already saw that in the moon pictures too; the higher gain values just make it stand out more here.
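For reference, a capture like that boils down to a single libcamera invocation over ssh. Here is a minimal sketch of what I mean (the output file name is my own choice; note that libcamera-still expects the shutter time in microseconds):

```shell
#!/bin/sh
# Sketch: a libcamera-still invocation for a 500 ms / gain 20 capture.
SHUTTER_US=500000   # 500 ms, about the limit before star trails show up
GAIN=20             # needed to pull anything out of the nebula
OUT=m42.jpg         # hypothetical output name

# --raw additionally saves the raw Bayer data next to the JPEG
CMD="libcamera-still --shutter $SHUTTER_US --gain $GAIN --raw -o $OUT"
echo "$CMD"
# On the Pi itself you would simply run: $CMD
```

Being able to fire this one-liner remotely is really all the "software" a first session needs.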
I only had 2 pictures taken during that timeframe, but I tried to stack them anyway using Siril. I had to apply a severe translation to align them properly, so maybe half of the image didn't get stacked at all and I had to seriously crop the end result. I also applied a de-noising and a banding de-noising filter.
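As a side note, the Siril part can also be scripted instead of clicking through the GUI. Below is a sketch of the kind of .ssf script I mean, written from memory of Siril's command set; the directory and sequence names are my own, and the exact commands and arguments should be double-checked against your Siril version:

```
# Sketch of a Siril .ssf script: convert, register and stack the frames.
# Assumes the raw frames sit in the current working directory.
requires 1.0.0
convert light -out=process
cd process
register light
# Stack with 3-sigma rejection; with only 2 frames the rejection won't
# do much, but the alignment still happens.
stack r_light rej 3 3 -norm=addscale -out=result
```

With so few frames the GUI is just as quick, but a script pays off once you start capturing dozens of subs.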

Okay, it didn't really improve the image quality that much, but some small gains were obtained nonetheless. For now I'm still not very impressed by the end result, but I do feel like I'm progressing.
What can we learn from off-the-shelf astro cams?
Companies such as ZWO, Svbony and Player One have been dominating the market of affordable off-the-shelf astro cameras for years now, so it's worth investigating what's under the hood there. The only issue is that I don't own such a device myself, so I had to search the internet for someone else who documented the process. What I noticed is that the camera sensor in use isn't really top secret for these vendors. On the contrary, they seem to even highlight which sensor they use, so that customers with some technical background (which most probably have anyway) get some food for comparison and understanding. The mechanical design is also mentioned here and there, but I'm more interested in the hardware they have in place. I assume they use a cost-optimized but still low-latency design, so it's really interesting to see how that compares to the Raspberry Pis found in many hobbyist projects. I couldn't get my hands on a step-by-step teardown, but fortunately I stumbled upon the following picture from someone who did a cooling mod on an Svbony SV705C camera with the IMX585 sensor.


The Winbond W631GG6NB-12 chip at the far right is a 128 MB DDR3 RAM chip; nothing special there, just a fast way of buffering data on its way out of the camera. The labels on the 2 other chips are a bit harder to read, but on the one in the middle we can clearly make out "Trion". This didn't immediately ring a bell for me, but a quick Google lookup brought me to the Efinix website. The Efinix Trion chips are FPGAs geared towards use with MIPI CSI cameras. They have a wide range of control interfaces (I2C, UART, SPI, …) and output interfaces (LCD, LED) and can directly interface with the Winbond DDR3 memory. From what I can read we have a Trion T35F324 here, which currently sells on Digikey for between €20 and €30. Typical usage for these FPGAs:

… so this is actually the very core of the camera! It takes the Bayer data directly from the camera sensor and performs image processing on it via its programmable ISP. The third chip, the one at the top, isn't clearly captured in the shot we found on the internet. I'm assuming it's some kind of USB interface chip, or maybe a microcontroller that manages the various settings and is in control of everything.
Another example: the ZWO ASI 224 MC uses a Lattice FPGA. The XP2 DVKM V1.2 mainboard (not the one in the picture below) notably hosts a Lattice LFXP2 FPGA, a Toshiba TLP291-4 quad optocoupler (nothing sexy there) and an Infineon CYUSB3014 SuperSpeed USB controller with on-board ARM CPU.


Other brands are equally tight-lipped about their internals. Where Svbony and ZWO rely on an FPGA, I'm quite sure each brand has its own strategy for achieving good and speedy images. Presumably the implementation even varies per camera model, even within the same brand. In general, and here I also include non-astro applications, many flexible ISP solutions rely on FPGAs. For example, you may also check out the solutions of helion-vision.com.
Other inspiring projects
I'm obviously not the only one to slap a Raspberry Pi onto their telescope. I found several others who gave it a shot, but most of those projects date from a few years back, when the availability of retail CSI camera modules was scarcer and the official RPi cams weren't that great for astrophotography either. In more recent years some attempts have been made to use the RPi HQ Camera, with better results.
- https://terramex.neocities.org/astro/ : an image gallery of astro pics taken with the HQ cam. The guy clearly knows how to get great astro shots; the results are really amazing!
- https://www.hackster.io/news/hubble-pi-is-a-raspberry-pi-based-astrophotography-camera-63f7c3b03a17 : a short description of the Hubble-Pi project.
- https://github.com/RemovedMoney326/Hubble-Pi : the source code of the aforementioned Hubble-Pi project
- https://adambaskerville.github.io/tabs/astro/ : another hobbyist project running a GUI over VNC
Some of these projects run the GUI on the Pi itself, either directly on an LCD or remotely via VNC. I didn't want to go that way and kept it simple: it's just a bash script with few dependencies, you only need libcamera and ssh working. The network interface can be Ethernet, or in my case WiFi (client mode). The script basically fires the libcamera commands as if you would call them manually, but with the convenience of picking menu options instead of typing everything out. After playing around for a while I'd say a GUI application might actually fit better here, to control all the little things with a mouse click instead of navigating a CLI menu. The ultimate solution would be to somehow shove it into existing solutions, so that we get features for free and remove, or at least reduce, maintenance.
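To give an idea, the core of such a script is little more than a select menu that maps a choice onto a libcamera-still invocation. A stripped-down sketch of the idea (the preset shutter/gain values and file names here are made up for illustration, not my actual settings):

```shell
#!/bin/bash
# Sketch of a menu-driven libcamera wrapper (values/names illustrative).

# Compose a libcamera-still command from shutter (µs), gain, output file.
build_cmd() {
  echo "libcamera-still --shutter $1 --gain $2 --raw -o $3"
}

shoot_menu() {
  PS3="Pick a target: "
  select target in moon planet nebula quit; do
    case "$target" in
      moon)   eval "$(build_cmd 2000 1 moon.jpg)" ;;
      planet) eval "$(build_cmd 10000 5 planet.jpg)" ;;
      nebula) eval "$(build_cmd 500000 20 nebula.jpg)" ;;
      quit)   break ;;
    esac
  done
}

# shoot_menu   # uncomment on the Pi; needs libcamera-still installed
```

Run over ssh, this gives you a poor man's remote capture UI with nothing but bash and libcamera on the Pi.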
Another very nice website to check out is:
- https://landingfield.wordpress.com/ : builds his own low-light camera using an FPGA and DSLR camera sensors. Highly recommended; very tech savvy.
To quote one of his conclusions: ”FPGA still provides the flexibility that we want. And in some cases designing the data paths to suit mission requirements“
And you may also like these links:
- https://forum.radxa.com/t/connect-arducam-hq-camera-to-rock5b/12325 : Connect Arducam HQ Camera to ROCK5B
- https://stargazerslounge.com/topic/399998-why-should-you-buy-dedicated-astronomy-camera-and-not-to-try-to-invent-your-own: Also tried to hook up the Sony IMX462 for astro purposes
- https://github.com/aaronwmorris/indi-allsky: all-night camera software supporting many cameras (including RPi) through INDI
Conclusive thoughts
With those other projects shared for you to explore, I feel I've reached the end of my 3-part article on "Astrophotography from a beginner's perspective". During the several months I worked on this project (mostly late in the evening) I feel I've gained some beginner insights into astrophotography, and maybe also a little about photography as a whole. I don't want to advertise this 3-part introduction as the definitive guide though, as some details may not be 100 percent accurate, and there is much more to explore and many more details to grasp. See it more as my personal journey through learning a bit about the ins and outs of taking nice night-sky pictures.
If I had to draw any conclusions, they would be the following:
- For a first telescope a Dobsonian is a good start if you only care about short-exposure shots of the moon and maybe some planets.
- For long-exposure shots you definitely need a motorized EQ mount. Motorized Dobsonian and alt-az mounts may also work but are rarer.
- If you get any decent size telescope don’t cheap out on the mounting: if you can’t get a stable scope you won’t get to see any night sky objects either.
- With telescopes it's mostly the bigger, the better for capturing deep-sky objects. But even sub-500 scopes should be good enough to show you something, and also give you some spectacular views of the moon and the planets of our solar system.
- There are several photo-editing programs for various OSes. Experiment with some of them yourself and see what works for you.
- There is a whole spectrum of image sensors out there. Sensors are built for various purposes, and thus only some of them fit astro purposes well. Generally, the bigger the pixel, the more sensitive it is, and high sensitivity is what you need for deep-sky. Nowadays other techniques such as BSI further enhance sensor sensitivity, so it's not only about pixel size, nor about the number of megapixels. Consider the sensor as a whole and look carefully at all of its specs.
- Astro cams may look expensive, but it's actually pretty hard to reach similar image quality with retail DIY tools. For most people the off-the-shelf solution will work out best. However, if you want to experiment, then of course going DIY is way more rewarding.
- A smartphone attached to your scope works quite well for bright objects such as the moon and planets (Jupiter and Saturn); you don't need an expensive astro cam to capture those, and it's really cheap.
- Don't try to shoot astro pics handheld; the end result will most definitely suck.
- Clear skies with low light pollution definitely make a big difference.
While this is the last chapter of my introduction to astrophotography, it won't be the last thing I ever do with my camera and telescope. I'll keep experimenting for as long as I'm intrigued, and hopefully I'll be able to keep sharing some info every now and then. I hope you enjoyed it!


