Astrophotography from a beginner’s perspective, part 2: cameras and sensors

In part 1 we made a little detour to gain some understanding of telescopes. During that research I also came across expensive camera modules dedicated to astrophotography. Here is an example of such a camera:

Image courtesy of Bresser.com

The product above is the Bresser Explore Scientific Deep Sky Astro Camera 1.7MP. The camera has USB 2/3 interfaces, a 12V power supply, a 1.7MP (1600×1100 pixels) image sensor and comes at a price tag of nearly €1500. Nothing extraordinary, you think, aside from the extremely steep price you pay for barely a handful of pixels! I mean, 1.7MP… not even my first digital camera from 20 years ago had such a low pixel count in its image sensor! Am I missing something here? Well indeed: the camera utilises a special image sensor made for very low-light conditions. It combines high sensitivity with low noise, and also has an active Thermo-Electric Cooling (TEC) element to help improve image quality. So what you get is maybe not a whole lot of pixels, but the quality of each pixel should be superb in low-light conditions.

So what’s so special about this image sensor that it is supposed to offer such superior low-light image quality? Well, let’s dive into the details!

SONY Exmor IMX432 CMOS

Image courtesy of framos.com

The camera uses the Sony Exmor IMX432 CMOS sensor. So let’s have a look at some of its specs:

  • FPS: 98.6
  • Size: 1.1 inch
  • Pixel size: 9.0µm x 9.0µm
  • Resolution: 1.78M (1608 x 1104)
  • Shutter: global
  • Signal: monochrome (IMX432LLJ) or RGB (IMX432LQJ)
  • Illumination: front-illuminated
  • CMOS

Resolution

Image courtesy of IEEE Spectrum

I don’t think resolution needs a lot of explanation. The image sensor is essentially a grid of individual pixels that make up the entire image. The grid consists of horizontal rows and vertical columns, therefore the image sensor resolution is not only expressed by the total pixel count but also by the number of pixel rows and columns (W x H). The resolution tells you something about the sharpness of a picture (assuming you focused your lens correctly) and the image aspect ratio (e.g. 16:9 or rather 4:3).
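As a quick illustration, the listed resolution already gives you both the total pixel count and the aspect ratio. Here is a minimal Python sketch using the IMX432 numbers from the spec list above:

```python
from math import gcd

# Resolution of the IMX432 as listed above
width, height = 1608, 1104

total_pixels = width * height              # total pixel count
d = gcd(width, height)
aspect = (width // d, height // d)         # reduced aspect ratio (W:H)

print(f"{total_pixels / 1e6:.2f} MP, aspect ratio {aspect[0]}:{aspect[1]}")
# -> 1.78 MP, aspect ratio 67:46 (roughly 3:2)
```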

Sensor technology: CMOS vs CCD

An image sensor made with either CMOS or CCD technology uses the photoelectric effect to convert light photons into electric signals. The places where this happens can be thought of as buckets that collect the light. They’re generally called pixels. The way these pixels work is, however, different for the two technologies.

Image courtesy of Gatan.com

CCD stands for Charge-Coupled Device. The sensor is an analog device that gets charged by light. The exposure is started and stopped for all pixels at once, which we technically call a global shutter. The photoelectric charge of each pixel is moved into a serial shift register (SSR) that sits in a layer below the CCD layer, and is then amplified and AD-converted (analog-to-digital) into electrical signals at the output. The charge is only converted once, which is highly beneficial for low noise. Until recent years CCD sensors performed really well in low-light conditions and within the near-infrared range (NIR).

CMOS stands for Complementary Metal Oxide Semiconductor. The major difference here is that each pixel has its own amplifier, and sometimes the pixel also includes an ADC. This makes CMOS sensors more susceptible to noise. On the other hand this also allows you to read out multiple pixels simultaneously, so CMOS is typically faster than CCD tech. Other motivators for using CMOS are that such sensors are typically less power hungry and mostly come at a lower cost.

Where in the past CCD was mostly preferred for good image quality in low-light conditions, in recent years CMOS improvements have mostly compensated for its cons and the market is mostly moving to CMOS nowadays. CCDs are a dying breed and it should be no surprise that once you start looking around for the right image sensor for your application you’ll mainly (if not only) find CMOS sensors.

Speed (fps)

The speed of a sensor (aka frame rate) might not be something you’ve found to be a limiting factor for most of your everyday pictures. CMOS is generally known to be faster since CCDs must transfer the charges into the horizontal shift register. A high FPS tells you that the sensor is able to achieve really short exposure times. Whether the picture is really useful is a different question. Frame rate absolutely matters when the objects that you’re trying to capture move fast across the canvas. A good example is sports, where you as a photographer want to capture the moment an F1 car passes by. With low frame rates you could end up half missing the object or missing it entirely, while a high FPS enables you to take multiple shots of the car while it passes by and therefore gives you more opportunities to get that perfect picture.
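A rough back-of-the-envelope calculation shows why frame rate matters for that passing F1 car. The car speed and field-of-view width below are assumed numbers, purely for illustration:

```python
# How many frames catch a fast object while it crosses the frame?
car_speed_mps = 300 / 3.6     # assumed: 300 km/h, converted to m/s (~83.3 m/s)
fov_width_m = 20.0            # assumed: field-of-view width at the car's distance

crossing_time_s = fov_width_m / car_speed_mps   # time the car stays in frame

for fps in (5, 30, 98.6):
    frames = crossing_time_s * fps
    print(f"{fps:5.1f} fps -> ~{frames:.1f} frames while the car is in view")
```

At 5 fps you get barely one frame of the car; at the IMX432’s 98.6 fps you get over twenty chances at the perfect shot.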

Image courtesy of protectfind.com.au

Some manufacturers have been creating image sensors specifically for Machine Learning and Computer Vision applications, where a high FPS can definitely make a difference in allowing you to snap a good picture for your AI application. Sometimes these sensors will have larger pixel wells (more on that later) for low-light conditions and a global shutter to take away any distortion effects. And that may come in handy for our astro shots!

Global vs Rolling shutter

The shutter of a camera is the thing that sits at the front and controls for how long we keep collecting light.

The larger the opening diameter (aperture), the more light will fall into the camera obscura and the shorter you’ll need to keep the shutter open to collect the amount of light needed to build your picture. The less time you need to expose the camera, the less distorted the picture will be when there are moving parts to be captured. It’s entirely similar to what we learned from our telescopes. So while you want to reduce the exposure time as much as possible, you also need to make sure you get enough light so that the picture doesn’t appear too dark.
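This aperture/exposure-time trade-off is captured by the standard photographic exposure value, EV = log2(N²/t): settings with the same EV collect the same amount of light. A small sketch:

```python
from math import log2

def exposure_value(f_number: float, shutter_s: float) -> float:
    """Standard photographic exposure value: EV = log2(N^2 / t)."""
    return log2(f_number ** 2 / shutter_s)

# Two equivalent exposures: a smaller aperture needs a longer shutter time.
ev_wide = exposure_value(2.8, 1 / 100)   # wide aperture, short exposure
ev_narrow = exposure_value(5.6, 1 / 25)  # two stops smaller, 4x longer exposure
print(ev_wide, ev_narrow)  # both the same EV -> same amount of collected light
```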

Image courtesy of Britannica

The shutter has been a mechanical one for decades, but nowadays there is also the electronic shutter, which basically ‘enables’ the image sensor to collect light, or ‘disables’ it when not needed. This can be done in different sequences which each have their pros and cons. The first sequence is the rolling shutter. With this shutter mode the exposure starts at certain lines in the sensor array and builds up until the entire sensor is capturing. Similarly, when the shutter ‘closes’ the sensor is blocked from being illuminated line by line until the entire array is ‘off’ again. It’s kind of like a readout ‘wave’ sweeping across the image sensor. The benefit here is that the sensor can already start exposing the second frame while it’s still busy reading out the first frame. You get a 100% duty cycle and therefore a higher maximum frame rate. The downside is that each line is captured in its own life cycle: the first lines start collecting light earlier than the last ones, and therefore when you have large objects moving fast across the image surface they will look distorted.

Image courtesy of andor.oxinst.com

Here is another picture that tries to explain what happens with the rolling shutter:

Image courtesy of edge-ai-vision.com

With the global shutter this is different. The exposure begins and ends for each pixel at the exact same moment. This allows the sensor to capture everything in one shot. The downside is the time needed for charge build-up and discharge. Now there are also techniques where the next exposure can proceed while the previous exposure is being read out from the readout nodes of the pixels. This allows the global shutter to also reach a 100% duty cycle. And since the global shutter is much easier to synchronize, it often gets you practically the highest FPS.

Image courtesy of andor.oxinst.com

Here is a typical example that illustrates well enough the distortion that happens with rolling shutters and fast-moving objects:

Image courtesy of edge-ai-vision.com

As in astrophotography the camera is preferably tracking the sky object properly, and the image is therefore rather still (and not shaken), we can assume it doesn’t really matter which shutter technique is being used. You don’t particularly need a global shutter unless you’re trying to capture, for example, the ISS passing by. But it may coincidentally be the case that you get a global shutter, since they’re often used in ML applications where low-light conditions are often issues that have to be dealt with too. Anyway, the IMX432 as used in the Bresser ASTRO Camera 1.7MP has a global shutter.

Pixel size

Image courtesy of vst.co.jp

Pixel size is something that isn’t mentioned a lot when talking about the specs of a camera. Often you need to dig a bit deeper to find what the actual pixel size is. Plus, most people have more or less settled over the years on the idea that the more pixels your image sensor has, the better the quality of the camera. Well, that’s far from entirely correct! The pixel size also plays an important role.

Image courtesy of princetoninstruments.com

Sensor pixels are like buckets that collect light photons. The larger the bucket (= pixel), the more light gets captured. This is sometimes referred to as Full Well Capacity, which is the amount of charge that can be stored within an individual pixel given an operating voltage. As you can see from the above picture, an increase in pixel size of 1.5 times results in 3.8x more charge capacity. Once the buckets are full you’ve reached the maximum capacity. This phenomenon is called saturation. Cameras should be designed so that they use the full dynamic range up to the saturation level when taking pictures. Some cameras allow you to display saturation charts on their display. Blooming is another phenomenon that happens when pixels are unable to hold additional charge. In this case the charge will start to spread into the neighboring pixels.
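To first order, the amount of light a pixel collects scales with its photosensitive area, i.e. with the square of its linear size. Note that a 1.5× linear increase gives only 2.25× the area, so the 3.8× full-well figure in the picture also reflects design differences beyond pure geometry. A quick sketch:

```python
# Light collection scales roughly with pixel area (square of the pixel pitch).
def area_ratio(pitch_a_um: float, pitch_b_um: float) -> float:
    """How much more area (and thus roughly light) pixel B has over pixel A."""
    return (pitch_b_um / pitch_a_um) ** 2

print(area_ratio(6.0, 9.0))  # 1.5x larger pixel -> 2.25x the area
```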

Image courtesy of vision-doctor.com

There is a wide range of image sensors out there, but mostly you’ll find them to have a pixel size between 0.5µm and 10µm. The image sensors in your smartphone tend to be around 1µm, depending on the brand, and of course nowadays smartphones are equipped with more than one image sensor; we’ve seen some of the higher-quality cameras go up to 2µm or higher. Now this is something where the IMX432 sensor really shines. The pixels are 9µm in size, which means they’ll collect tons of extra photons and are very light sensitive!

Image sensor size

Image courtesy of thinklucid.com

The size of an image sensor has mostly been tied to the use case (and price range). Here is a common graph to give you some idea:

As you already learned, an image is made out of pixels, where pixels can differ in size, but also the amount of pixels (resolution) can differ a lot. Both combined result in a sensor of a given size. The bigger the image sensor, the more pixels you can fit, or the larger you can make each individual pixel. It also means more light will fall onto the sensor. However, the bigger the image sensor the more expensive it gets, and not even on a linear scale. One of the reasons why smaller sensors are found in smartphones is not only the size they can fit but also keeping the price of the device acceptable. In the high-end camera range that’s something entirely different, and that’s partially why those professional Nikon and Canon devices are so bloody expensive. There is however a trend in smartphone cameras that – certainly in the higher-end segment – the sensors keep on increasing over time. As an example, Apple has increased the specs of the image sensor in iPhone devices from a 12MP sensor with 1.22µm pixels in the iPhone X to a 48MP sensor whose binned pixels are effectively 2.44µm wide.

At this moment the high-end smartphone market is shipping image sensors that sometimes exceed the size of those found in compact cameras! There is also a big increase in the Bill of Materials for those cameras, hence why the biggest improvements are always found in the high-end market segment.

Given all this you’ll by now understand that the IMX432 has insanely large pixels, and they even outperform all smartphone cameras in low-light conditions with ease. You may think the resolution is so low because it’s just a plain old chip sold for way too much money, but actually the image sensor is not utterly tiny either, measuring 17.55mm diagonally. In comparison, the iPhone X boasts a 6.7 times higher resolution, though the IMX432 comes with 7.4 times larger pixels. The IMX432 is just something of a totally different breed than all of those DSLR and smartphone cameras out there. The IMX432 is made for this one specific purpose, and that’s also what makes it so expensive.
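That 17.55mm diagonal follows directly from the resolution and the 9µm pixel pitch, which makes for a nice sanity check:

```python
from math import hypot

# IMX432: 1608 x 1104 pixels at a 9 µm pixel pitch
pitch_mm = 9e-3
w_mm = 1608 * pitch_mm        # active sensor width
h_mm = 1104 * pitch_mm        # active sensor height
diag_mm = hypot(w_mm, h_mm)   # sensor diagonal

print(f"{w_mm:.2f} x {h_mm:.2f} mm, diagonal {diag_mm:.2f} mm")
# -> 14.47 x 9.94 mm, diagonal 17.55 mm
```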

Micro lenses

Image sensors contain electronic circuitry such as the photodiodes that make the light-to-charge conversion, but also wires, transistors, capacitors, … all shared within the same volume. Since the sensitivity of a pixel largely depends on the amount of light that can fall on it, you’ll now understand that the extra electronics impact pixel performance. The fill factor of a pixel describes the ratio of the light-sensitive area to the total area of the pixel.

fill factor = light sensitive area of pixel / total area of pixel

For older CCDs that ratio would only be about 30%, which means 70% of the incoming light gets lost. That’s a severe loss. CMOS sensors, which carry even more electronics, perform even worse. Some of that loss of light has been compensated for with micro lenses.
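The fill factor formula is trivial to put in code; here is a hypothetical older CCD pixel matching the 30% figure (the numbers are illustrative, not from a datasheet):

```python
def fill_factor(light_sensitive_area_um2: float, total_area_um2: float) -> float:
    """Fraction of the pixel area that actually collects light."""
    return light_sensitive_area_um2 / total_area_um2

# Hypothetical example: 10x10 µm pixel with 30 µm² of photosensitive area
ff = fill_factor(30.0, 100.0)
print(f"fill factor {ff:.0%}, incoming light lost: {1 - ff:.0%}")
# -> fill factor 30%, incoming light lost: 70%
```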

Image courtesy of corial.plasmatherm.com
Image courtesy of thinklucid.com

As you can see, the micro lenses help to collect more light onto the light-sensitive area. This boosts the so-called quantum efficiency of the pixel to around 50% to 70% for CCDs that have a fill factor of only 30%. In more recent years even higher efficiencies have been reached, mostly by optimising the micro lenses and therefore without changing the fill ratio. The micro lenses are however not perfect either. They filter and weaken UV light, and the quantum efficiency becomes dependent on the angle of incidence.

Parasitic Light Sensitivity (PLS)

CMOS sensors as a whole, but nowadays global shutter sensors in particular, are becoming more popular due to industry demands. They are vastly replacing the CCDs that were offering superior pictures more than a decade ago. These global-shutter CMOS (GS-CMOS) sensors are however affected by parasitic light sensitivity and dark current. Manufacturers apply specific optimizations to overcome those issues as much as possible.

Image courtesy of mdpi.com

The IMX432, as part of the Sony Pregius series, focuses on low dark current and low PLS characteristics while achieving high sensitivity. PLS can be reduced by lowering the incident light on the memory node (MN). Over the history of Pregius sensors there have been small optimizations, both electrical and optical, so that resolution could be increased while PLS could be kept at similar low levels. Going into details is beyond this blog post, but if you’re interested you should definitely take a look at the Sony Pregius series website. PLS is not a common metric to be found within a sensor’s datasheet, but for your astro stuff it’s better to be on the lookout for those that do advertise low PLS values.

Front vs back illuminated

Another innovation is the so-called backside-illuminated sensor. CMOS sensors typically have more electronics and therefore lower fill factors when the pixels get packed really tight in a high-resolution chip. The back-illuminated sensor actually reverses the internal chip layers so that most of the wiring and electronics sit behind the photodiodes. The term back-illuminated refers to the fact that the sensor chip is now mounted in reverse and is therefore seemingly illuminated from the back.

Image courtesy of digitalcameraworld.com

As the drawing already illustrates, back-illumination drastically improves the sensitivity of the chip. For CMOS chips we’re nowadays seeing quantum efficiencies above 90%! There are however also a number of drawbacks. There is higher dark current and added noise compared to front-illuminated counterparts. There is also decreased sharpness, but with the help of micro lenses this issue can mostly be solved. The manufacturing process is more complex and therefore brings extra costs.

The IMX432 however is front-illuminated. Given its large pixel buckets and relatively low resolution, the choice for front-side illumination (FSI) is not really an issue. Instead, the front-illumination also assures a lower dark current, which improves the image quality in this case. Backside illumination came in a later generation of Sony Pregius sensors, where the pixel size was slightly reduced to offer higher resolutions.

Mono vs RGB

By now we know that image sensors convert light photons into electrical charge. We haven’t talked about specific color pixels yet; so far we’ve mostly been discussing the pure (mono) image pixels. Light typically behaves as a wave, and this allows us to capture and see light of very specific colors (wavelengths). Image sensors are certainly not equally sensitive to all colors; you’ll see that they perform better or worse depending on what wavelength they’re looking at. To build a color image in RGB (Red-Green-Blue) a technique called the Bayer filter is used. The filter only allows light of a specific color (wavelength) to pass. Therefore one pixel tells you something about the red color tone within that part of the image, while a neighboring pixel does the same but for the green color tone. Typically a Bayer filter consists of 50% green filters, 25% red filters and 25% blue filters. This is specifically done to imitate the human eye, which is particularly sensitive to green light.
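The 50/25/25 split follows directly from the standard 2×2 RGGB tiling. A small sketch that builds the mosaic layout and counts the filters (the `bayer_color` helper is mine, not a library function):

```python
# Color of each photosite in a standard RGGB Bayer mosaic:
# even rows alternate R G R G..., odd rows alternate G B G B...
def bayer_color(row: int, col: int) -> str:
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

h, w = 4, 4
counts = {"R": 0, "G": 0, "B": 0}
for r in range(h):
    for c in range(w):
        counts[bayer_color(r, c)] += 1

fractions = {k: v / (h * w) for k, v in counts.items()}
print(fractions)  # {'R': 0.25, 'G': 0.5, 'B': 0.25}
```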

Image courtesy of wikipedia.org

To correctly build up the resulting image, the values of multiple pixels need to be combined to give the color as we perceive it. This process is typically performed in the ISP (Image Signal Processor), a semiconductor block that deals with the raw pixel data. Many cameras will also add an IR-cut filter that removes the noise from near-infrared wavelengths, but this really depends on what the camera will be used for. Sometimes you’d rather capture those waves too. The sensitivity of an RGB sensor is therefore even more complex than that of a mono sensor.

Image courtesy of astrojolo.com

Therefore the RGB sensor is probably the sensor type that suits us best, as it allows us to build up an image similar to how we perceive the living world, in full color. For the purpose of art we can always remove the colors again afterwards using any kind of image processing software. Many cameras and smartphones even have such features on board. But there is a trade-off here. Each pixel only gathers light of one specific color, which means some information gets lost. The resolution is slightly lower, and sensitivity is impacted due to the color filters. As you can see from the above chart, the QHY183M/C CMOS sensor has a mono variant that slightly outperforms the RGB variant in Quantum Efficiency.

So don’t judge those mono sensors as being old-fashioned; instead they cover a part of the market that really aims to use their specific properties. For example, security cameras use mono sensors especially for low-light conditions, since in the dark most color is absent anyway. Furthermore, mono sensors are also more sensitive to IR light, which makes them a good candidate to combine with light sources that are nearly invisible to the human eye. In other words: in the dark this camera sees you, but you on the other hand will hardly be able to spot it.

The IMX432 also comes in two variants, of which the color variant is used in the Bresser ASTRO Camera that we highlighted at the beginning of this article. For astrophotography the RGB sensor is an obvious choice as it allows you to get that colored shot at once. It’s without doubt the best option for anyone who starts doing astro shoots. With monochrome cameras, when you want a color picture as the end result (which you mostly want), you’re going to have to take 4 pictures (through 4 filters) and also perform post-processing. Great software such as PixInsight will tremendously help you achieve that great end result, but it’s going to take you some effort and hassle. Color sensors will give you a slightly lower quality end result way quicker and are therefore the recommended option for anyone starting in astrophotography.

Image courtesy of astrobin.com

The pictures underneath are a comparison of the M33 galaxy, first through a color sensor and afterwards through a mono sensor (3x). Exposure is kept similar, though the mono sensor requires 3 shots so the total exposure is actually 3 times longer.

Image courtesy of Terry Hancock at flickr.com

Pixel binning

Pixel binning is a technique that combines neighboring (usually 4) pixels into one larger virtual pixel. In the case of 4-pixel binning on a color sensor we typically speak of a quad Bayer filter. A quick calculation tells us that a 1µm sensor could therefore produce results similar to a 2µm sensor, i.e. more light sensitivity but also 4x lower resolution.
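Conceptually, 2×2 binning just sums each block of four neighboring photosites into one value. A pure-Python sketch on a tiny mono frame, where the values stand for collected charge:

```python
# 2x2 binning: sum each block of four neighbors into one "virtual" pixel.
frame = [
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12],
    [13, 14, 15, 16],
]

def bin2x2(img):
    h, w = len(img), len(img[0])
    return [
        [img[r][c] + img[r][c + 1] + img[r + 1][c] + img[r + 1][c + 1]
         for c in range(0, w, 2)]
        for r in range(0, h, 2)
    ]

print(bin2x2(frame))  # [[14, 22], [46, 54]] -> 4x the signal, 1/4 the pixels
```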

Image courtesy of 8kassociation.com

The above picture shows how the Samsung ISOCELL HP1 sensor with 200MP uses pixel binning to deal with low-light conditions. The real pixels are only 0.64µm large and therefore don’t collect a lot of light, though in bright conditions the insane 200MP resolution can be used, resulting in super sharp images. Pixel binning is rapidly finding its way into the market. However, as you can see, the Bayer filter is not perfectly aligned for virtually squashing adjacent pixels into one. Complicated processing comes along, and as you’ll understand, some information gets lost; therefore pixel binning is not an exact replacement for the large pixels it tries to imitate, it’s more of a trick to get close. Pixel binning is typically not done on low-resolution sensors; it’s something seen only in more recent years and mostly on high-res chips. The IMX432 belongs in the category of low-res chips that, thanks to their already very large pixels, don’t need pixel binning to improve image quality; the further loss of resolution would have a much greater impact on this chip.

TEC cooling

Image courtesy of lairdthermal.com

While not exactly an image feature, the Bresser Explore Scientific Deep Sky Astro Camera 1.7MP has another something on board to improve image quality. Bresser added a Thermo-Electric Cooler to their camera. TEC devices are electronic coolers that push heat from one side (cold plate) to the other side (hot plate) when a given amount of current is applied. TECs are often used when the amount of heat that needs to be dissipated isn’t gigantic, and when the working area is rather compact. TECs can be found in all sorts of sizes and formats, and you can even stack them or put them in parallel. The biggest drawback is that they consume a considerable amount of energy. TECs are well known to be power hungry, and they’re also less efficient than phase-change coolers such as your fridge.

Image courtesy of lairdthermal.com

The cooling directly reduces the dark current and therefore lowers the base noise levels. In low-light conditions the extra cooling may certainly make a difference. Since the load of the image sensor itself is not huge, and given the small space to work in, TECs are an excellent solution to improve the image quality of CMOS sensors. The Bresser camera uses a 2-stage TEC that is able to take the sensor to about 40°C below ambient temperature! This however costs a bit of energy, and therefore the Bresser camera requires a 36W power supply.
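A common rule of thumb (the exact figure varies per sensor) is that dark current roughly doubles for every ~6°C of temperature increase, so cooling by 40°C cuts it by roughly two orders of magnitude:

```python
# Rule-of-thumb model: dark current doubles per ~6 degC (sensor dependent).
def dark_current_factor(delta_t_c: float, doubling_step_c: float = 6.0) -> float:
    """Relative dark current after a temperature change of delta_t_c."""
    return 2 ** (delta_t_c / doubling_step_c)

cooled = dark_current_factor(-40.0)  # 40 degC below ambient, like the 2-stage TEC
print(f"dark current down to {cooled:.1%} of the ambient level")
# -> roughly 1% of the ambient level
```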

Going for cheaper

Image courtesy of astroshop.eu

While the Bresser ASTRO deep-sky camera looks like a good candidate for low-light planetary nebula pictures, it takes a huge bite out of your pockets too. It has a price tag that not many are willing to pay. There are however far cheaper variants. Take the Bresser Full HD DeepSky camera. It has the following specs:

  • Sony Starvis IMX290 color sensor
  • FPS: 120
  • Size: 1.25 inch
  • Pixel size: 2.9µm x 2.9µm
  • Resolution: 2.1M (1936 x 1096)
  • Shutter: rolling
  • Illumination: back-illuminated
  • CMOS

The IMX290 has been at the center of many low-priced cameras intended for astrophotography. For example there are also the Player One Mars-M and the Svbony SV305, of which the latter is the cheapest one you’ll probably find. Let’s compare it to the Bresser ASTRO camera that comes with the Sony IMX432 sensor. We see that the IMX290 slightly bumps the resolution, although nothing noteworthy maybe. The IMX432 is undoubtedly much more sensitive than the IMX290 due to its vastly larger pixels, although the IMX290 compensates a bit by using back-illumination. Maybe the biggest difference of all is that this so-called cheaper astrophotography cam can be found for less than €300. This is probably a lot closer to most amateurs’ budgets.
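A first-order per-pixel comparison of the two sensors (ignoring the back-illumination advantage of the IMX290, so this somewhat overstates the gap):

```python
# Per-pixel light gathering scales roughly with the square of the pixel pitch.
imx432_pitch_um = 9.0
imx290_pitch_um = 2.9

ratio = (imx432_pitch_um / imx290_pitch_um) ** 2
print(f"An IMX432 pixel gathers roughly {ratio:.1f}x the light of an IMX290 pixel")
# -> roughly 9.6x
```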

Camera selector

If you’re by now convinced to build your own astro cam, or you just want to have a look at what sensors are available on the market, you may want to look at e-con Systems’ camera sensor selector app.

Conclusive thoughts

We’re nearing the end of this article. The idea was to get you going in understanding a bit more about the mysteries behind image sensors and their often immensely expensive camera hosts. As we’ve seen there has been a lot of development over the years; camera sensors are still improving and sensors have been made for all sorts of goals like miniature cameras, astro cams, DSLRs, video cameras, computer vision cameras, and so forth. It’s really not all about megapixels, but neither is it all about having large pixels. There’s a balance to be struck. There isn’t a one-sensor-suits-all solution here; depending on your goal you’ll end up with one specific group of sensor types manufactured for that specific use. In particular we looked at how the Sony Exmor IMX432 is very well suited for deep-sky low-light photography due to its insanely large pixels and low noise levels. But it comes with the trade-off of paying premium prices and settling for lower resolutions. With image sensors there are always pros and cons: good things but also trade-offs to be made. The sensors keep on getting better, but it’s not like the digital semiconductor industry, which shows rapid growth in transistor count every few years. A decent amount of the circuitry is still analog, which holds back the rapid speed of improvement that we see with typical CPUs and other computing semiconductors. Moore’s law doesn’t apply here.

I hope you found some useful info here. Although we touched on a few subjects to give you a basic understanding, there is still lots more to discover about image sensors. Google is your friend. In the next chapter I’ll finally put the theory into practice, stay tuned!

Astrophotography from a beginner’s perspective, part 1: optics and mechanics

Last year I came to discover that astrophotography with the current generation of smartphones is perfectly within reach. I shared some of the results I got using either the Samsung stock cam or a modified GCam on a Samsung Galaxy S20 FE. Later on I also discovered the following stunning picture on Reddit:

Image courtesy of Great-Studio-5996 at Reddit.com

The picture was also made with a Samsung S20 FE using the default cam app, and according to the original poster was made with a 30s exposure at ISO 3200, with the app in Pro mode. You’ll notice a small star-trail effect in this picture due to the long exposure time. He claims that the brightest spots are planets. I assume the creator is talking about the one left of the Pleiades and not the one down at the bottom left of the picture, as that may well be Sirius. The picture was taken in Kerala, South India, in what I assume to be quite a dark location given the results he got.

My first telescope

After the death of my father I started to realize that I’ve been interested in astronomy since my teenage years. I had a book to teach me some basics and I remember at one stage I even made some drawings of my observations; for instance the famous 1997 Hale-Bopp comet was in there! I also tried to do some observations using binoculars, but I never came to own a telescope. So this year I decided to finally get on with that childhood dream…

For my first telescope I went low price. I knew it could be a disaster (and it also kind of was), but it allowed me to get down to business and actually understand the things that you can also read about if you spend some time before buying. I went on and bought a second-hand National Geographic 90/900, tube only, so no mount was included.

National Geographic 90/900 refractor mounted on EQ3 tripod

While the picture above shows you the original telescope and mount, I only had the tube; but since I had a spare aluminum camera tripod I decided it shouldn’t be that hard to mount the tube on top of it. In my first tryout I had some issues with getting to see anything at all, but later on I figured those issues were related to the SR6 eyepiece not collecting enough light. The camera tripod, while very handy to use with cameras, is also a very bad mount for telescopes. The thing with telescopes is that the level of magnification is 50x, 100x or maybe far more depending on your telescope. It means that even the slightest handling of the telescope makes your view totally unstable and shaky. The mount was also not strong enough to hold the tube in place once you had tracked down something; it always kind of sank a bit lower in the end after you’d fastened everything. Even a breeze of wind made the telescope shake a little bit! The end verdict was that I learned that with telescopes, more expensive mostly comes with good reason. And even though I did get to view Saturn, for example, I also realized that it was very painful to observe it properly because the view was never ever very stable, let alone that you could take a decent picture of what you’re looking at. I realized telescope mounts make a great deal of the experience and decided to upgrade the mount…

Telescope mounts

So what’s a decent mount anyway? Well, for starters there are different types of telescope mounts. Overall they’re mostly categorized as altazimuth, equatorial or Dobsonian mounts. For more advanced use cases there are also the so-called star tracker and GoTo mounts, but they’re kind of developed on top of the earlier mentioned types.

Altazimuth mount
This mounting type is simple to use and therefore often recommended for starters. Basically you can move your telescope around 2 axes: up/down (what astronomers refer to as ‘altitude’) and left/right (what is called the azimuth). Try to memorize the names of the axes, since they’re also used in the coordinate system that tells you where to find stellar objects.

Image courtesy of timeanddate.com

The above picture shows the most basic altazimuth mount you can find. If you spend a bit more you’ll often find extra handles that allow you to do slow-motion control for each axis. You’ll certainly appreciate this once you have an object in focus. Know that due to the Earth’s rotation objects may only stay in view for 1 or 2 minutes, and often less depending on which magnification you’re using. Overall these mounts are made of aluminum, which makes them light and portable. The more expensive ones can be made of steel and are mostly made for heavier tubes while offering more stability. While good for entry-level astronomers, know that this type of mount is typically not chosen for photography. It can be done, but given the fast movement of objects in your eyepiece you’ll have to settle for a short shutter time.
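You can estimate that “1 or 2 minutes in view” yourself: the sky drifts about 15° per hour near the celestial equator, and the true field of view is the eyepiece’s apparent field divided by the magnification. The 50° apparent field below is an assumed, typical value:

```python
# Time an object near the celestial equator stays inside the field of view.
DRIFT_DEG_PER_MIN = 15.0 / 60.0   # Earth's rotation: ~15 degrees per hour

def minutes_in_view(apparent_fov_deg: float, magnification: float) -> float:
    true_fov_deg = apparent_fov_deg / magnification
    return true_fov_deg / DRIFT_DEG_PER_MIN

for mag in (50, 100, 180):
    print(f"{mag:3d}x -> ~{minutes_in_view(50.0, mag):.1f} min in view")
# e.g. 100x gives about 2 minutes before the object drifts out of view
```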

Equatorial mount

The equatorial (EQ) mount is a slightly more complicated type of mount and therefore often not recommended as a starter mount. With the EQ mount the axes to move along are called the declination and right ascension axes. But there is one specialty: to use this mount properly you need to get it aligned with Earth’s rotation axis.

Image courtesy of naasbeginners.co.uk

You’re probably well aware that Earth’s rotation axis goes straight through the Earth from the north pole to the south pole. From the place where you live that axis does not point straight overhead but much lower; its altitude above the horizon equals your latitude, so perhaps around 45°. Conveniently, Earth’s rotation axis points almost exactly at the star Polaris, which is easy to find if you know where to look. Hence, if you ever need to navigate north at night, just look for Polaris in the night sky and head in that direction. Now, as I said earlier, the EQ mount needs to be properly aligned with this axis; look at the picture above to get a better understanding. Once you have that alignment, you use the declination control to move away from the polar axis, and you rotate around Earth’s rotation axis using the right ascension (RA) control. Reading this for the first time may seem a bit confusing, but it also kind of makes sense. The mount takes some practice to get used to, but it has real benefits: since the telescope can now rotate along Earth’s rotation axis just as the celestial objects seemingly do, all you need to do once you have an object in view is slightly adjust the RA control over time to follow Earth’s rotation. That makes the EQ mount really great for photographing deep-sky objects that require long exposure times. The hassle, however, is that you need proper polar alignment, which takes some time to get right each time you take out your telescope. Furthermore, on a half-cloudy night the thing you want to observe may be perfectly in sight while Polaris is hidden, leaving you unable to align your EQ mount properly.

Within the EQ mount category there are many different variations and flavors. The differences are in materials (aluminum vs. steel), handles, weight, counterweights (used to keep everything in balance), slow-motion controls, the ability to add motor control, and more. The EQ mount is typically found on non-entry-level telescopes, given that they’re a bit more expensive and harder to use.

Dobsonian mount

Dobsonian mounts are specifically designed to hold the Dobsonian type of telescope. The mount is very simple at its core and very similar in usage to the altazimuth mount: the telescope moves along the up/down (altitude) and left/right (azimuth) axes. The biggest difference is that Dobsonian mounts don’t really have a tripod at all; instead they come with a large, bulky construction, often made of wood, that can support heavy, large telescopes.

Image courtesy of celestron.com

The mount has some kind of turntable at its base, which allows it to move along the azimuth axis. The telescope is fastened to the mount over a horizontal axis, which allows the up/down movement needed for altitude adjustments.

Overall, telescopes on a Dobsonian mount tend to be less portable as the setup is large and heavy. However, the use of wood still makes it possible to move them from a safe storage room to the outside without sacrificing any stability. It also makes the mount cheaper and easier to manufacture.

GoTo mount

The GoTo mount isn’t really a new type of mount; generally speaking it is a motorized variant of one of the above types. GoTo mounts are controlled by a computer system or hand controller that tells the mount where to point the telescope. Most of them can also track celestial objects, which is handy for observation, but aside from that they’re also very useful for astrophotography, since locking an object in sight allows for longer exposure times. Know, though, that if you’re really interested in deep-sky objects, the motorized equatorial mount is still much favored over the altazimuth mount, since it only has to move along one axis, the RA axis. The added features of a GoTo mount don’t come cheap: expect to pay premium prices compared to non-motorized mounts. Also keep in mind that for long exposure shots the mount needs to be very stable, so it’s always better to go for a bulkier mount, which again adds to the total cost.

Star trackers

Star trackers are kind of miniaturized versions of the GoTo mount. They’re similarly computer controlled but are mostly designed to hold camera gear, maybe with an additional telelens or a small refractor, but never any serious-sized telescope. So while they’re not useful for observing, their tracking abilities make them quite useful for long exposure shots of the night sky. They’re more portable than GoTos and also cheaper, but expect to still pay a few hundred euros for a device that is mostly used for the specific purpose of photography.

Image courtesy of astrobackyard.com

Telescope types

So I was stuck with this refractor-type telescope and a handful of different mount types that come in various flavors, each with their own pros and cons. What to choose? What’s compatible? Do I go for low price, or heavy duty and feature rich? As I explored the second-hand market and read reviews, I quickly came to understand that long exposure shots mostly require an EQ-type GoTo mount with tracking abilities. I then realized those don’t come cheap at all; a decent mount can easily cost you € 1000. So maybe I should lower my expectations a bit, go for a stable mount, and settle for short exposure shots. In that case you end up in a different ballpark, as suddenly the EQ mount is no longer ‘required’ and you can actually choose the cheaper altazimuth or Dobsonian mounts. After investigating a bit what makes a decent, stable mount: mostly you should avoid those aluminum ones and go for a decent steel mount. The thing is that I couldn’t really find a decent one on the second-hand market, so maybe I should rather ditch the current telescope tube and settle for a complete setup instead? I came across less than a handful of decent altazimuth and EQ mounts, but they mostly came together with a Newtonian telescope that was not of the best quality. There were quite a few Dobsonian telescopes to choose from, but they looked clumsy to me. Maybe that’s just because I’m still not very familiar with the different types of telescopes. So what’s up with that? Well, I was already familiar with the refractor type, which is basically what we all see in our imagination when asked what a telescope looks like. And then there are the others, which all looked pretty similar to me, except maybe with some extra mirrors for extra amplification in some cases. It turns out it’s not that simple. But first, some basics.

Telescope basics

Light travels into the telescope through the objective or aperture and reaches the eye through what’s called the ocular or eyepiece. Tubes have different lengths and widths, and that’s not without reason. Inside the telescope, light may travel across flat or parabolic mirrors, which affect the end result. All combined, a telescope has a certain level of magnification: the higher the magnification, the bigger something shows up that’s too small to be picked up by the naked eye. It also narrows the field of view.

Image courtesy of skyandtelescope.org

An important rule to understand is that magnification is limited by the amount of light that can be collected at the aperture (the main lens or mirror). Another interesting aspect is the focal length of the objective (mostly referred to as the focal length of the telescope, given it’s fixed and can’t be upgraded) and the focal length of the eyepiece. The formula is simple:

magnification power = telescope focal length / eyepiece focal length

Know that the telescope’s focal length is fixed, but eyepieces can be exchanged, so you actually have a choice in what level of magnification you use. For example, your telescope may well come with 20mm and 10mm eyepieces (the mm here is not the diameter of the eyepiece but its focal length!) suited for different magnifications and thus different kinds of observations. If the telescope’s focal length is 800mm, this results in magnifications of 40x and 80x respectively.
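The numbers above are easy to reproduce; here is a quick sketch of the formula in Python, using the 800mm focal length and 20mm/10mm eyepieces mentioned as examples:

```python
# Magnification = telescope focal length / eyepiece focal length.
telescope_focal_length_mm = 800  # fixed per telescope (example from the text)

# Eyepieces are interchangeable, so each focal length gives a different power.
for eyepiece_mm in (20, 10):
    magnification = telescope_focal_length_mm / eyepiece_mm
    print(f"{eyepiece_mm}mm eyepiece -> {magnification:.0f}x")  # 40x, then 80x
```

Swap in your own telescope and eyepiece focal lengths to see what your setup delivers.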

But as I said, magnification is also tied to the aperture, which defines the amount of light collected. If not enough light falls into your telescope, the end result is that you don’t see anything at all. Doubling the magnification actually reduces the brightness of the image by a factor of 4. Vice versa, doubling the aperture means you collect 4 times as much light, resulting in a brighter image. So the practical level of magnification is limited by the aperture. This is referred to as the “highest useful magnification” and can be estimated by multiplying the aperture diameter (in inches) by 50. For a 6-inch telescope that number is 300x. When you reach this limit you’ll have a very dim image that’s not worth much. There is also a lower bound, referred to as the “lowest useful magnification”, estimated by multiplying the aperture diameter (in inches) by 3 to 4. A 6-inch telescope has a lower magnification boundary of about 18x to 24x. The lower the magnification, the wider the field of view. While magnification is important, don’t stare yourself blind at it. You may think that more magnification always lets you observe objects in more detail, but a wider field of view also plays a role, for example when observing large objects such as the Andromeda galaxy. Aperture plays an important role, as it tells you something about the amount of light collected and reaching your eyepiece, and may make a crucial difference when comparing telescopes with similar magnification levels. It’s perfectly possible that a smaller magnification with a larger aperture results in a better viewing experience.

Saturn and the moon seen through the National Geographic 90/900 refractor. Image courtesy of Bresser.com

Weather conditions also play a role, and we’re not speaking about cloudy nights here, but about the atmosphere itself. Sometimes there is more turbulence in the atmosphere, which may make your image a bit fuzzy and dim. It may limit your usable magnification, and you may need to settle for eyepieces with longer focal lengths.

The focal ratio of your telescope differentiates “slow” and “fast” telescopes from each other. It can be calculated as follows:

focal ratio = telescope focal length / aperture

Imagine you have a telescope with a focal length of 500mm and an aperture of 50mm; in this case your f-ratio is 10. Take another telescope with the same focal length but an aperture of 100mm, and the f-ratio is 5 instead.
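The two example telescopes from the paragraph above, expressed in code:

```python
def focal_ratio(focal_length_mm, aperture_mm):
    # f-ratio = telescope focal length / aperture; lower = "faster" telescope
    return focal_length_mm / aperture_mm

print(focal_ratio(500, 50))   # prints 10.0 (the "slow" example)
print(focal_ratio(500, 100))  # prints 5.0 (the "fast" example)
```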

Refractor telescope

Parts of a refracting telescope (©2019 Let’s Talk Science based on an image by Krishnavedala [CC BY-SA 4.0] via Wikimedia Commons).

This is the classical type of telescope we all think of when asked to picture one. Light enters the telescope through the objective lens. The rays of light converge at the focal point and make their way out of the telescope through the eyepiece. With refractors, if you need a long focal length for higher levels of magnification, you end up with a longer telescope tube too: a focal length of 900mm gives you a tube of at least one meter. The quality of the lenses plays an important role in image quality.

Reflector telescope

Path of light rays through a reflecting telescope (©2019 Let’s Talk Science based on an image by Krishnavedala [CC BY-SA 4.0] via Wikimedia Commons).

With reflectors it becomes a bit more complicated. Here the light enters the telescope directly; there is no objective lens. The light travels through the entire scope and reaches a reflective mirror at the opposite side from where it entered. This is the primary mirror, and its features tell you something about the quality of the scope; we’ll come back to this in a moment. Next, light is bounced onto the secondary mirror, which bounces it again, but this time perpendicular to the scope’s orientation. While with a refractor you gaze more or less directly through the telescope, with a reflector you more or less sit on top of it. A small benefit of the reflector is that the focal point lies beyond the secondary mirror, so part of the converging path is perpendicular to the light reflected from the primary mirror. Hence the tube can be a bit shorter than a refractor’s with the same focal length. Some telescope vendors opt for a spherical primary mirror to further increase the focal length for the same-sized tube, but this mostly results in bad image quality, and in general those scopes are not recommended. Parabolic mirrors are preferred, as they’re more precise and have only one focal point, resulting in clearer images. Know that when the mirror is of bad quality the image is not always very clear, may show artifacts, and may cause issues at higher magnifications, which is kind of a pity for the price you paid.

Another thing with reflector scopes is that they mostly have a larger aperture, which helps tremendously.

Other telescope variants

I’ve only highlighted the two main telescope categories. Throughout the years many more designs have been introduced. The classical Newtonian reflector has been adapted to have even more mirrors inside to further reduce tube length, and refractors have been adapted to become more compact without sacrificing the viewing experience. There are so many variants that it would take me forever to go through them all and discuss their pros and cons. Even for the telescopes I did include in this article, there are things I didn’t want to get into, as it would take us too far.

My second telescope

So with all of that information in mind I went on to see which of the second-hand offers would suit me best. As I realized earlier, I might have been chasing the wrong idea. I didn’t want to spend € 500 to € 1000 on a motorized EQ mount for a hobby that I’m just taking up once in a while; not being able to take long exposure shots is not the end of the world either. I also didn’t want to repeat the mistake of settling for something generally known within the community as bad quality. As I understood it, many great sub-€ 500 telescopes were actually of the Dobsonian type. There are of course other telescopes within that price range that they compete with, but mostly Dobsonians came out best in the reviews of trusted sources. Here is why:

  • handling: the Dobsonian is easy to use; moving it feels very natural and doesn’t need a lot of practice to get used to. EQ mounts are mostly harder to learn and take some time to set up.
  • steadiness: the mount is very steady; it’s in a whole different league compared to lightweight aluminum tripods
  • aperture: Dobsonians are sometimes referred to as light buckets. Their design allows them to collect more light compared to similarly priced refractors, which comes in handy when increasing the magnification
  • parabolic mirror: in the lower-end segment you need to be careful about the mirrors used in reflecting telescopes. I noticed quite a few Newtonian reflectors mounted on either altazimuth or EQ mounts that come with a spherical primary mirror. Somehow that seems to be less of an issue for Dobsonians in that price range.
  • price: while they look big and expensive, the contrary is often true. Dobsonians are very competitive, even in the sub-€ 500 market

One of the offers I could find was the Sky-Watcher Classic 150P.

Image courtesy of skywatcherusa.com

The Classic 150P has a 150mm (6-inch) aperture. According to the Sky-Watcher website that’s a 232% increase in brightness compared to a 100mm refractor, so it should yield even better results than the 90mm National Geographic refractor I purchased earlier. Another nice comparison: it’s 460 times brighter than the human eye! The maximum magnification is around 300x. It comes with the typical stable Dobsonian mount and handy tension-controlled handles to move it smoothly but steadily. It features a parabolic primary mirror, which stands for decent image quality. The focal length is 1200mm, quite an increase compared to the 90/900 refractor. The f-ratio is 7.9, which puts it in between narrow- and wide-field telescopes. The scope comes with two eyepieces, with 25mm and 10mm focal lengths, resulting in magnifications of 48x and 120x respectively. Maybe one downside is that it could also have been equipped with a 6mm eyepiece, which would give a magnification of 200x, still well within the limits of this telescope. Any eyepiece with an even shorter focal length would probably put you at or beyond the boundary of what this scope can handle. For those who want more there are also the 200P and 250P, which respectively have maximum magnifications of 400x and 500x, but of course at a steeper price. The Classic 150P sells for about € 430 on the Sky-Watcher website; however, I was able to get one in perfect condition for € 230, which seemed like a good deal, so I finally decided that would be it.
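As a sanity check, the eyepiece magnifications quoted for the 150P follow directly from the earlier formula (1200mm focal length; 25mm, 10mm and a hypothetical 6mm eyepiece), and all stay under the 50x-per-inch rule of thumb for its 6-inch aperture:

```python
focal_length_mm = 1200   # Sky-Watcher Classic 150P
max_useful = 6 * 50      # 6-inch aperture, ~50x per inch rule of thumb

for eyepiece_mm in (25, 10, 6):
    power = focal_length_mm / eyepiece_mm
    verdict = "ok" if power <= max_useful else "too much"
    print(f"{eyepiece_mm}mm -> {power:.0f}x ({verdict})")  # 48x, 120x, 200x: all ok
```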

First astro shot

So I was finally set for my first decent observations. I haven’t had the best weather so far, but I did manage to get a glimpse of Saturn, and I even got to record it with my Samsung FE20 smartphone, holding it by hand! That by itself was certainly not within reach with the NG 90/900 refractor and aluminum tripod. I converted the video into a GIF animation and cropped it to better fit this blog. Through the telescope it looks maybe a bit smaller but much sharper. You can see the smartphone has some trouble getting its focus right, which is not that strange given I’m holding it by hand. The eyepiece in use is the 10mm one, which, as mentioned earlier, gives me 120x magnification. Here is that shot:

For now I’ll have to settle for this result. In a follow-up article I’ll finally get to the photography part, which is where it all started. But before we could get there, we had to make a little detour so that you understand the road I’ve taken. I hope you enjoyed it and maybe learned something along the way. Stay tuned for more.

Making your own Home Assistant Add-On

Home Assistant is one of the most popular open-source home automation solutions, and my personal preference for a few years now. Being open-source allows me to debug things more easily, since I can look into the source code itself when I find something is not working. Furthermore, most of its features are free: you just download the software, install it, and you’re good to go. There is also decent documentation, and because it’s so widely used the community is usually willing to help you out if they can. And of course it also helps that it runs on the insanely popular Raspberry Pi, I must admit.

Add-ons, but why?

Add-ons are another feature that’s really nice about Home Assistant. They allow you to build new stuff into Home Assistant without having to touch the core software. There is currently a broad set of official and community-driven add-ons that can easily be deployed from the Home Assistant user interface with the click of a button. All together, Home Assistant will probably cover most of your use cases, but there may be some corner cases where it doesn’t fit your exact needs. Unfortunately I found myself in one of those corner cases: I had started automating my house with relay modules that I bought from a previous employer before I onboarded Home Assistant. In those earlier days I had built the complete home automation software stack myself: a tuned Raspberry Pi operating system, backend software (a REST API that wraps the .NET libraries needed to work with those relay modules), and a mobile Android app. It was fun while it lasted, but I quickly found out that if I wanted to expand the possibilities of that system, I needed a foundation to build upon instead of doing everything myself. And that’s how I came to try out some open-source automation suites. Home Assistant was particularly interesting back then because it had an easy way of deploying itself using Docker images, I found it easy to use, plus it could easily be interfaced with through MQTT. All I had to do was write that MQTT interface code so that, aside from the HTTP REST API I already had, the relays were also announced over MQTT and could be communicated with. Hooray!

But I found this was not enough. As many of you may have experienced too, the Raspberry Pi’s SD card gave up after some time, and it took me too much time to get everything up and running again, so I wanted to streamline some of that. I noticed by then that the HA folks had come up with a pretty decent embedded Linux distro, so I decided to give that a chance too, since it removes the steps of setting up and tweaking the OS myself. HA’s OS literally allows you to download an image from their website, deploy it to an SD card and boot right into the HA user interface. But as a drawback I had to pick up modifying my own software again so that it installs within Home Assistant… as an add-on!

Where to start

The best place to start writing your own add-ons is the Home Assistant developer documentation on building add-ons. Important to understand is that Home Assistant add-ons are basically Docker containers with a few environment variables and arguments predefined, plus some pre-wired bits here and there. So the basic concepts of Docker containers and images apply here as well. First you build an add-on image, similar to a Docker image. Once you have that, you can either run it locally, or distribute it online and have someone else run a container instance of your add-on. Vice versa, someone else can deploy their own add-on images for you to run on your local setup, which is basically what the officially supported HA add-ons do.

As the docs explain, there are two ways of deploying your add-on to your own Home Assistant setup:

  • locally: build and install on the Home Assistant machine itself
  • through publishing: build on a developer/build machine, host it online, and then install it from your Home Assistant machine

Option ‘locally’ is the easiest one to start with, as it involves the least amount of infrastructure to set up. You can try building it on your PC first, then copy the entire source tree to the target Home Assistant machine and build it there (again). My guidance is that you should always first try to build on your development PC, as in nearly all cases it will build much faster than the Home Assistant machine can. The HA team has set up a Dockerized build environment, so you can easily pull in the build dependencies and start using them without contaminating your host OS; look for the HA builder source repo if you want to find out more. But first we need to set up some metadata files and a proper directory layout.

Start by creating a new empty folder. In my case I’ve also created a build subfolder. This is not required, but in my case it contains the binaries and config files needed to run my actual application. Also create the run.sh script, since this is the one that will be executed when the add-on is started:

#!/usr/bin/with-contenv bashio

echo "Listing serial ports"
ls /dev/tty*

echo "Running..."
cd /app
export MONO_THREADS_PER_CPU=100
mono ShutterService.exe

Create a build.json file that defines the base image from which your Dockerfile will start:

{
    "build_from": {
      "aarch64": "homeassistant/aarch64-base-debian:buster",
      "amd64": "homeassistant/amd64-base-debian:buster"
    },
    "squash": false,
    "args": {
    }
  }

Also create a config.json file that describes your add-on:

{
    "name": "ATDevices to MQTT",
    "version": "1.0.0",
    "slug": "atdevices_service",
    "image": "afterhourscoding/ha-atdevices-addon",
    "description": "Service that exposes Alphatronics gen1 and gen2 devices to Home Assistant",
    "arch": ["aarch64", "amd64"],
    "startup": "application",
    "boot": "auto",
    "full_access": true,
    "init": false,
    "options": {
    },
    "schema": {
    }
}

Note that nowadays Home Assistant mostly refers to YAML files for config, but the JSON files are still supported, and it isn’t particularly hard to swap from one format to the other.

Then there is also the Dockerfile:

ARG BUILD_FROM
# hadolint ignore=DL3006
FROM ${BUILD_FROM}

# Install the Mono runtime and development packages

ENV MONO_VERSION 5.20.1.34

RUN apt-get update \
  && apt-get install -y --no-install-recommends gnupg dirmngr \
  && rm -rf /var/lib/apt/lists/* \
  && export GNUPGHOME="$(mktemp -d)" \
  && gpg --batch --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF \
  && gpg --batch --export --armor 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF > /etc/apt/trusted.gpg.d/mono.gpg.asc \
  && gpgconf --kill all \
  && rm -rf "$GNUPGHOME" \
  && apt-key list | grep Xamarin \
  && apt-get purge -y --auto-remove gnupg dirmngr

RUN echo "deb http://download.mono-project.com/repo/debian stable-stretch/snapshots/$MONO_VERSION main" > /etc/apt/sources.list.d/mono-official-stable.list \
  && apt-get update \
  && apt-get install -y mono-runtime \
  && rm -rf /var/lib/apt/lists/* /tmp/*

RUN apt-get update \
  && apt-get install -y binutils curl mono-devel ca-certificates-mono fsharp mono-vbnc nuget referenceassemblies-pcl \
  && rm -rf /var/lib/apt/lists/* /tmp/*

ADD ./build /app

# Copy data for add-on
COPY run.sh /
RUN chmod a+x /run.sh

CMD [ "/run.sh" ]

Finally, you can also dress up your add-on by providing a README.md, a logo.png and an icon.png.

And here is a tree-view of my folder containing all sources:

$ tree
.
├── build
│   └── binaries that make the actual application ...
├── build.json
├── config.json
├── Dockerfile
├── icon.png
├── logo.png
├── run.sh
├── buildAddon.sh
├── README.md
└── testAddon.sh

Running the build is quite an extended command that I’d rather not enter manually each time, hence I’ve set up a script to perform those PC builds of my add-on:

#!/bin/bash

BUILDCONTAINER_DATA_PATH="/data"
PATHTOBUILD="$BUILDCONTAINER_DATA_PATH"
#ARCH=all
ARCH=amd64


PROJECTDIR=$(pwd)


echo "project directory is $PROJECTDIR"
echo "build container data path is $BUILDCONTAINER_DATA_PATH"
echo "build container target build path is $PATHTOBUILD"
CMD="docker run --rm -ti --name hassio-builder --privileged -v $PROJECTDIR:$BUILDCONTAINER_DATA_PATH -v /var/run/docker.sock:/var/run/docker.sock:ro homeassistant/amd64-builder:2022.11.0 --target $PATHTOBUILD --$ARCH --test --docker-hub local"
echo "$CMD"
$CMD

Running the build script may take a while… Afterwards I tried running the container we just built using the testAddon.sh script:

#!/bin/bash
docker run --rm -it local/my-first-addon

Let’s see that output:

$ ./testAddon.sh 
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
s6-rc: info: service legacy-services successfully started
Listing serial ports
/dev/tty
Running...
###########################################
[21:49:32,457] [INFO ] [SHUTTERSERVICE] [1] [Domotica] [Main] ###########################################
[21:49:32,466] [INFO ] [SHUTTERSERVICE] [1] [Domotica] [Main] Version: 1.2.5.0
...

Bingo! Okay, now copy those files to the Home Assistant machine’s /addons folder. The next step is to perform the build again, but since we’re now doing this on the HA machine, the add-on will be picked up by the user interface and you’ll be able to install it from there. Follow the steps as given in the HA docs:

  • Open the Home Assistant frontend
  • Go to “Configuration”
  • Click on “Add-ons, backups & Supervisor”
  • Click “add-on store” in the bottom right corner.
  • On the top right overflow menu, click the “Check for updates” button
  • You should now see a new section at the top of the store called “Local add-ons” that lists your add-on!
  • Click on your add-on to go to the add-on details page.
  • Install your add-on

Be sure to start the add-on and inspect the logs for anomalies.

Improved way of working

Now that we have the basics working, it’s time to improve on that, because what I dislike about the previous approach is that the build takes a very long time to complete on a Raspberry Pi. If I ever have to roll back, most of my time would go into switching from one build to another and vice versa. So I decided to cross-build the add-on image and host it online, so that my HA machine can pull it in without ever having to build anything. Cross-building is not a big issue, as the HA builder can do that out of the box. Before we can start hosting things, some modifications to our add-on’s source code are needed so that HA can pick it up. What is going to change is that we no longer copy any files manually to the HA machine: the /addons folder no longer needs to contain a copy of our add-on sources, since the machine no longer performs the build itself. This should also free up some disk space! Go ahead and remove those files, and don’t forget to hit the update add-ons button in the UI so that any reference to our locally built add-on is removed. However, once we have our add-on hosted somewhere, HA needs to know where to pull the pre-built container images from, and that is the magic sauce we’ll be cooking next.

Let me first briefly explain what we want to achieve here. Home Assistant relies on the concept of add-on repositories. An add-on repository is basically a collection of add-ons from which people can choose which ones they want to install, much like the software repositories found in your favorite Linux distro. Anyone is free to create and host their own repositories, and it is mandatory if you want to tell HA what add-ons you have and where it can download the pre-built images from.

We start with restructuring a bit: create a new directory at the top of your project, name it after your add-on, and move all the files we created previously into that folder. Also create a repository.json at the top of your project folder:

{
  "name": "Home Assistant Geoffrey's Add-ons",
  "url": "https://afterhourscoding.wordpress.com",
  "maintainer": "Afterhourscoding <afterhourscoding@gmail.com>"
}

This file just tells others what the repo is named and who the maintainer is. Next we also need to list which add-ons can be found in our repository; therefore create the .addons.yml file:

---
channel: stable
addons:
  atdevices:
    repository: afterhourscoding/ha-atdevices-addon:latest
    target: atdevices
    image: afterhourscoding/ha-atdevices-addon

The image name refers to the one it can find on Docker Hub, as if you would run docker pull afterhourscoding/ha-atdevices-addon. Don’t worry if the image is not hosted yet at this stage, we will do that later on. Finally, here is a tree view of all these changes:

$ tree
.
├── .addons.yml
├── atdevices
│   ├── build
│   │   └── binaries that make the actual application ...
│   ├── build.json
│   ├── config.json
│   ├── Dockerfile
│   ├── icon.png
│   ├── logo.png
│   ├── README.md
│   └── run.sh
├── buildAddon.sh // this is the script I've shown you above
├── repository.json
└── testAddon.sh
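Before pushing anything, it can help to sanity-check that layout. Below is a small Python sketch that verifies the structure shown in the tree above. This is my own helper, not an official Home Assistant tool, and the required-file list is an assumption based on the files we just created:

```python
import json
from pathlib import Path

def check_addon_repo(root: str) -> list:
    """Return a list of problems found in an add-on repository layout."""
    problems = []
    root = Path(root)

    # repository.json must exist and carry the three descriptive keys.
    repo_file = root / "repository.json"
    if not repo_file.is_file():
        problems.append("missing repository.json")
    else:
        meta = json.loads(repo_file.read_text())
        for key in ("name", "url", "maintainer"):
            if key not in meta:
                problems.append(f"repository.json misses '{key}'")

    # Every add-on directory needs at least a config and a Dockerfile.
    for addon in (d for d in root.iterdir()
                  if d.is_dir() and not d.name.startswith(".")):
        for required in ("config.json", "Dockerfile"):
            if not (addon / required).is_file():
                problems.append(f"{addon.name}: missing {required}")
    return problems
```

Run it against your project root; an empty list means the layout matches what we built above.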

Next we’re going to put our add-on repository in public space and set up HA so that it can parse the add-on index. HA deals with repositories as if they were git repos. So enter git init on your command line and basically do all the stuff you’d do with your other git projects, including uploading to GitHub. Afterwards, go to the add-on store in HA’s UI.


In the overflow menu, select “Repositories” and enter the HTTPS URL of your GitHub repo. In my case I chose to host my source code privately, which makes things a bit more complicated. I’d rather not, but hey, sometimes you have to deal with closed-source binaries that you may not redistribute yourself. For those protected repos to work you need to add a Personal Access Token to your project on GitHub and give this token ‘repo’ access. The token can then be put in the URL so that HA is able to fetch the repo through the token’s ownership. Keep in mind that this token is stored unencrypted on your HA setup! Use the following format for privately hosted repos:

https://USERNAME:PERSONALACCESSTOKEN@github.com/USERNAME/REPONAME

This was just the first step. The next step is hosting your add-on container image on Docker Hub. Go ahead and create a Docker Hub account. One thing you could do now is adjust the buildAddon.sh script so that it is no longer running in test mode. I went for another option, where I’ve set up a GitHub Action on my git repo so that server builds automatically push my add-on images to Docker Hub. Here is my GH Action:

name: "Publish"

on:
  release:
    types: [published]
    
  workflow_dispatch:

jobs:
  publish:
    name: Publish build
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the repository
        uses: actions/checkout@v3
      - name: Login to DockerHub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Publish build - Home Assistant builder
        uses: home-assistant/builder@2022.11.0
        with:
          args: |
            --aarch64 \
            --target /data/atdevices

Note that you also need to set up these two secrets containing your Docker Hub credentials, because the build user will have to log in on your behalf. This GH Action can be triggered manually through the GitHub webpage:

Launch and wait; the build can easily take 10 minutes. Once it has completed, go back to the Docker Hub website and you should see your add-on image added:

One last thing we need to do is enter your Docker Hub credentials in Home Assistant. This is only required for privately hosted images. Go back to your HA add-on store, click the “Registries” menu option and add your registry:

Finally click “Check for updates”. It should now find your add-on again:

That brings us to the end of this small article. We’ve looked at how you can make your own Home Assistant add-on and even keep it hosted privately. The workflow where a build server automatically pushes the container image, so that you only have to use the Home Assistant user interface to update your add-on, makes the process a little less handcrafted and a tad more professional. As always, I hope you find something useful in this. Credits go to whoever has been working on Home Assistant and to the people responding in the community forums. I hope you find this encouraging enough to go that extra mile; who knows, maybe one day you can make some money out of it. PS: did you know there are nowadays companies such as Homate selling products with Home Assistant at their base? What’s next?!

Building the Elektor Nixie clock

The retro looks of Nixie tubes sure are something that many people can appreciate, myself included. Browsing the internet you’ll find several popular uses of these ancient electronic relics, such as clocks, thermometers, VA meters and more. The tubes themselves are not always literally ancient though; they’re still built and sold brand new, so when you start looking around you should still be able to grab a few for your own. But I’ll warn you: they do not come cheap!

Enter the Elektor Nixie clock! Elektor has been offering a kit with which you can build your own Nixie clock. Given that those tubes require a high voltage, I thought it would not be a bad idea to try the kit instead of doing the full electronics design myself. It comes with a members discount, plus I got a special discount which made the price more acceptable. Elektor also designed a housing for it to give it that extra spark, as you can see below. I went for the Elektor deal but didn’t go for the acrylic housing because I wasn’t particularly fond of it. So now I have to come up with something of my own… For that I take you on this little trip that shows how ‘easy’ it is nowadays to come up with something that even your wife may appreciate!

So I started drawing…

I gave it a first try crafting all of that by hand, but the end result looked really ridiculous: the wife didn’t agree! With the right tools this was going to be so much better. Then I remembered there is nowadays a Python tool that can generate a complete drawing of a box for you, so that you don’t need to worry about all the details anymore. And guess what: someone made a website for it! Boxes.py is the place to be.

I went for the BasedBox design, here are my settings:

But I wanted to add a little detail myself: a black front. Luckily the files generated by boxes.py are in the SVG format, which can easily be edited on Linux using Inkscape. An hour later (I’m not a very skilled Inkscape user) I got to my final drawing:

For the front plate I came up with the following design:

The next step was getting it produced using laser cutting. The guys at Snijlab.nl let you easily upload your drawing in all sorts of formats. They have a wide range of materials to choose from, the website works really well, and at the end of the route you get a final offer so that you don’t get any surprises when you order your stuff. A few days later the goods arrived at home and we could start assembling things…

First I had to drill some holes in the bottom and back so that I could fix the PCB, power supply connector and user switch.

To fix the front plate without visible screws I also had to cut an aluminum plate onto which we can glue the front plate:

And then comes the final stage: putting the box together:

I hope you like the end result, and I hope this inspires you to get building yourself. Good luck!

Using GCam on a Samsung Galaxy S20 FE for astrophotography

In a recent article I explored the astrophotography skills of the Samsung Galaxy S20 FE smartphone. As it turns out, it is quite possible, but on the other hand I’ve also stumbled upon pictures that show far superior image quality, for example from Google’s Pixel phones. The Google Camera (GCam) application, although not installed by default, can still be installed through other channels. I tried the modified version made by Wichaya. You can find it at celsoazevedo.com; go ahead and download the GCam_8.1.101_Wichaya_V1.5.apk file. There is also the morgenman-Wichaya-1.2-v2.xml config file available. I’ve installed it, but I’m not sure if it is really required. Installing the config file can be done through the application itself, no other tricks required.

So how about the results? Well, I was expecting a noticeable improvement, but unfortunately that wasn’t quite the case. Here is a shot I took with the modded GCam app:

Astro shot with Gcam app on Samsung Galaxy S20 FE

Now compare that with a picture that I took with the Samsung camera app:

Astro shot with Samsung app on Samsung Galaxy S20 FE

As you can see, the colors are quite odd with the modified GCam app. I also don’t think more stars are being captured. So for now I wouldn’t recommend using the modified GCam. Still, if you have something interesting to share, please feel free to do so in the comments section.

Astrophotography… on a smartphone!

The world of astronomy has intrigued me ever since my father told me how to spot the Great Bear (Ursa Major). The beautiful galaxies, nebulae, planets, comets and such are truly amazing to look at, and I’m always on the lookout for the newest set of photos released by NASA and ESA. Photography, on the other hand, has also been one of my other interests. I still remember the days when I started exploring a Canon AT-1 camera to take pictures of my Erasmus stay in Porto. The analog camera from the ’70s required a set of skills to get the best out of it, but that’s what made the process of capturing something so rewarding in the end.

Enter the 21st century. The world of photography has changed drastically since the ’70s. Cameras became digital, they could now also capture videos, and they received a whole bunch of new features that weren’t possible back then. Manufacturers switched over and started introducing compact and affordable cameras, which made people rapidly change over to this new form of photography. Over 15 years ago I got myself a second-hand Sony DSC F828 digital camera, manufactured in 2003.

Sony DSC F828

This 8MP camera had a bunch of manual settings, such as manual focus, manual shutter time and manual ISO adjustment, that allowed me to do my first few tryouts in astrophotography.

Orion (center) and Taurus (upper right), you can also spot the bright star Sirius (left bottom) and the Pleiades (top right)
Jupiter (left) and the Moon (right)
Moon

Afterwards, the introduction of smartphones rapidly took over from the digital cameras. In the beginning the image quality lagged a bit, but that improved over the years. And since you could now take pictures with the phone you always had with you anyway, the trend was set to fade out the market of affordable compact cameras.

Enter 2022. Personally I don’t know anyone who still buys a digital compact for taking vacation and family pictures. The digital cameras have been gathering dust for years now, and I’ve also been much impressed with the quality of pictures on my latest smartphone: the Samsung Galaxy S20 FE.

Samsung Galaxy S20 FE

If there is one area that I always found lacking when taking pictures with a smartphone, then it was astrophotography. However, recently I was astonished after having a look on the internet at what some others are capable of capturing with their high-end phones. Some even got to capturing the Milky Way! And so I became intrigued to give it a shot myself. Below you can find a few of my pictures:

Orion and Taurus above a tree in my garden
Orion and Taurus from a slightly different angle and using different capturing settings. You can also spot the Pleiades open star cluster in the upper right corner
Ursa Major above the back of my garden

I was also able to capture a meteor, often referred to as a shooting star. The meteor is part of the Perseid meteor shower, which is particularly active at this time of year (mid-August).

Ophiuchus constellation with a Perseid meteor shooting by at the right side

Below I’ve made color optimizations to the previous picture to make it clearer what to look for:

A shot of a Perseid meteor passing by

In the end I’m quite pleased with the outcome of this pocket-sized camera feature brought by the Samsung Galaxy S20 FE smartphone. I’ve tried those same shots on some of my previous smartphones, but the image quality was simply too low to spot any stars in the black void. Compared to the 2003-tech Sony digital camera it seems we’re closing in on filling that gap. Maybe for the high-end smartphones that is already the case, who knows. The biggest drawback so far is perhaps the lack of zoom; a telescope mount would help much in my opinion. Also note that these pictures were taken with Samsung’s default camera app. Google’s own camera app has some more advanced features for their Pixel phones especially targeted at astrophotography, and you may find some pretty nice results from that on the web. With that, I hope you start doing experiments of your own; feel free to share your results! Finally, here is what could be achieved with a higher-end smartphone as of mid-2022:

Milky Way captured by Google Camera app using a Google Pixel 4XL phone, image courtesy of Google.com

New Year’s Eve party told through CO2 levels

In a previous article I added CO2 level monitoring to my Home Assistant setup using the SCD30 NDIR CO2 sensor. Although I haven’t tested a huge number of air quality sensors, I still found the accuracy of the SCD30 quite good. But how good is “good”? Let me showcase that by looking at the CO2 levels I recorded through New Year’s Eve.

(click to enlarge)

For starters, the SCD30 air quality sensor is installed in the living room at the back of our TV. We started recording CO2 levels at noon (t0). The VASCO D350 ventilation unit was in “low-speed” mode and we had the front door open regularly. During this period only 3 people were inside the house, making New Year’s Eve decorations. We notice how the CO2 level builds up until it saturates at around 1000 ppm. Around 17h our first guests arrive (t1). We can easily spot that event, since from that moment on CO2 levels start to rise rapidly. After all of our guests had arrived and we all had our first couple of drinks, it came to be that we were quite packed (we were 13 in total). Without even looking at the CO2 levels I decided to ramp up the flow rate of the VASCO ventilation unit (t2). The chart above shows that it wasn’t a bad decision, since at that time the CO2 level had risen above 2800 ppm. Due to the increased air flow this level dropped back quite a bit, and after a while we reached acceptable levels again. However, in “medium-speed” mode the VASCO D350 produces quite a bit of noise in our bedrooms, because it is installed on the same floor, relatively close to them. At 21h15 (t3) my wife decided to put it back in “low-speed” mode since the youngest of our company were put to bed. As confirmed in the chart above, the decreased air flow allows CO2 to build up again. A bit later (t4) we started cooking. We regularly had one of the windows open, and the kitchen hood was also on. In effect our living room (which includes our open-space kitchen) was better ventilated, and again this is confirmed by the SCD30, since CO2 levels start to drop. After cooking finished (t5) the windows were kept closed and the kitchen hood was turned off again, which leads to increasing CO2 levels for the remainder of the evening. At some moments the CO2 level even reached unacceptable levels again.
Now, after midnight has passed, you’ll notice a small dip in the chart (t6). It is not some strange kind of artifact but can easily be explained: at that exact moment we went outside for a few minutes to watch some of the fireworks around the neighborhood. One of the living room’s sliding windows was also kept open, and as a result the CO2 level immediately dropped by tens of ppm. Finally, at 1h15 (t7), the eldest of the children were ready to catch some sleep and all of our guests went home. This is easily detected by the SCD30: it shows how the CO2 levels start to drop again. My wife and I cleaned up a bit and soon after went to bed. At this moment the living room is no longer occupied, so no new CO2 is added. The VASCO D350 has free play and slowly – remember, it’s in “low-speed” mode – but surely brings our living room air quality back to acceptable levels.

As you can see, the CO2 readings from the SCD30 are accurate enough to catch certain events that happened throughout the evening. Combining that data with other data, such as the ventilation unit’s flow rate, we could probably create some software that guesses the number of people inside the living room. For now I’m not convinced it is accurate enough to guess the exact number of people, because there are too many other variables involved (such as keeping a window open) that are not being monitored.
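To give an idea of what such occupancy-guessing software could look like: a steady-state CO2 mass balance says occupants add CO2 at a rate n·G while ventilation removes it at Q·(C − C_out). The minimal Python sketch below solves that for n. The per-person generation rate (~0.018 m³/h for a seated adult) and the flow rate are illustrative assumptions, not the VASCO D350’s actual numbers:

```python
def occupants_from_co2(co2_ppm: float, outdoor_ppm: float = 400.0,
                       flow_m3_per_h: float = 100.0,
                       gen_m3_per_h_pp: float = 0.018) -> float:
    """Estimate occupancy from a *steady-state* CO2 reading.

    Mass balance at equilibrium: n * G = Q * (C - C_out),
    with concentrations expressed as volume fractions (ppm * 1e-6).
    """
    excess = (co2_ppm - outdoor_ppm) * 1e-6  # ppm above outdoor baseline
    return excess * flow_m3_per_h / gen_m3_per_h_pp
```

With these made-up numbers a 2800 ppm reading works out to roughly 13 people, but as noted above, open windows and changing flow rates make the real answer far less certain.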

As a conclusion, I’ve learned that when we have people over at our place we should give extra attention to the air quality. From the collected sensor data I could easily spot moments where the CO2 value reached unacceptable levels. To automate the process of constantly monitoring the CO2 level and adjusting the ventilation unit’s air flow, I could look into hooking up the VASCO D350 to Home Assistant. That may be something I try to accomplish later in 2022. For now, cheers and best wishes to all of you.

Building a HA wireless air quality sensor with zero code

A few months after installing a ventilation unit that regulates the air quality inside the house, I’m now at a point to review this “upgrade”. Personally I didn’t notice any effect on my breathing, getting less sick, getting less tired or anything else that could be related to breathing “clean” air. The only thing I did notice is that the ventilation unit produces quite a bit of noise: my house isn’t quiet anymore at night. I wanted to get to know a little bit more about its effects, so I started thinking of ways to measure the air quality.

The theory

As it turns out, the most important indicator of indoor air quality is the carbon dioxide (CO2) level. CO2 is a colourless gas consisting of two oxygen atoms (double-)bonded to one carbon atom. Although the molecule isn’t considered poisonous and may not look so different from the oxygen (O2) we need to breathe in order to survive, it is unhealthy to breathe in high levels of CO2. Levels of 1% (10,000 parts per million, ppm) will make you feel drowsy, and at 7-10% you’ll start to suffocate, feel dizzy, notice a headache and possibly experience visual or hearing impairments, all within a few minutes to a few hours. As NASA reports, even being exposed for an 8-hour period to levels of 5000 ppm could result in headaches, sleep disorder, emotional irritation and so forth. Nowadays it is generally accepted that values below 1000 ppm are okay to live in, but that you should ventilate as soon as that level is exceeded.

Values below 450 ppm are considered very good, since in many cases this boils down to the outdoor CO2 level. Before the industrial revolution that value was even lower! Given all of that, we now have a good idea of what values to compare against. One more note: CO2 weighs roughly 50% more than dry air. In effect, carbon dioxide is best measured closer to the ground. Don’t place your sensor against the ceiling!
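Two of the numbers above are easy to verify yourself: the percent-to-ppm conversion and the “roughly 50% heavier” claim, which follows from the molar masses of CO2 and dry air. A quick sketch:

```python
# Percent to parts-per-million: 1% means 10,000 out of every million parts.
def percent_to_ppm(pct: float) -> float:
    return pct * 10_000

# Molar masses in g/mol: CO2 = 44.01, dry air ~ 28.97.
CO2_MOLAR_MASS = 44.01
DRY_AIR_MOLAR_MASS = 28.97

# At equal temperature and pressure, the density ratio of two gases
# equals their molar-mass ratio (ideal gas law).
density_ratio = CO2_MOLAR_MASS / DRY_AIR_MOLAR_MASS  # ~1.52, i.e. ~50% heavier
```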

Next I started looking for sensors. I often found that the best-quality sensors use so-called NDIR sensor technology. A nondispersive infrared (NDIR) sensor is a small spectroscopic sensor. I agree if you find that a whole lot of complicated words. I won’t go too much into detail here, but the way it works is as follows. An infrared light source sends IR light through a sample chamber onto an IR detector. In parallel, a second beam of light is sent through a reference chamber, typically filled with nitrogen. Because the gas composition influences the absorption of light, and as the composition differs between the two chambers, the IR detector will pick up these differences. The reference chamber always contains the same composition and is therefore very suitable for checking changes in the composition of the gas in the sample chamber. In more detail, each molecule absorbs light only within a given part of the light’s spectrum. For example, CO2 molecules absorb light best at wavelengths around 2.7 µm, 4.7 µm or 13 µm. Using specific LEDs (such as IR LEDs) and light filters, these specific wavelengths can be obtained, which allows the NDIR sensor to “sense” a specific molecule or set of molecules.
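The absorption part can be made concrete with the Beer-Lambert law, I/I0 = exp(−k·c·L): the more absorbing gas sits in the optical path, the less IR light reaches the detector. The sketch below is purely illustrative; the absorption coefficient is a made-up number, not a real CO2 value:

```python
import math

def transmitted_fraction(concentration_ppm: float, path_length_cm: float,
                         absorption_coeff: float = 1e-6) -> float:
    """Beer-Lambert law: I/I0 = exp(-k * c * L).

    More absorbing gas in the optical path means less IR light
    reaching the detector.
    """
    return math.exp(-absorption_coeff * concentration_ppm * path_length_cm)

def detector_difference(co2_ppm: float, path_cm: float = 5.0) -> float:
    """Signal difference between reference and sample channel."""
    reference = transmitted_fraction(0.0, path_cm)  # nitrogen absorbs nothing
    sample = transmitted_fraction(co2_ppm, path_cm)
    return reference - sample
```

The sensor electronics essentially invert this relation: from the measured detector difference, the CO2 concentration in the sample chamber can be derived.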

Daniel Popa and Florin Udrea – “Towards Integrated Mid-Infrared Gas Sensors”

The Sensirion SCD30

During my hunt for sensors my news feeds caught up with me, as I received a newsletter promoting the Sensirion SCD30. Diving into various open-source how-tos I noticed how this sensor, while not cheap to buy, is often respected for offering decent CO2 measurements. The Sensirion SCD30 uses NDIR technology, is widely supported through various libraries, and on top of that also measures temperature and humidity (as a side effect of sensor correction). The decision was made: my wallet shrank by an amount worth more than a few beers, but in return I received this brand-new sensor, which will from now on report how healthy the indoor air really is.

Specifications:

  • NDIR CO2 sensor technology
  • Integrated temperature and humidity sensor
  • Best performance-to-price ratio
  • Dual-channel detection for superior stability
  • Small form factor: 35 mm x 23 mm x 7 mm
  • Measurement range: 400 ppm – 10,000 ppm
  • Accuracy: ±(30 ppm + 3%)
  • Current consumption: 19 mA @ 1 meas. per 2 s.
  • Energy consumption: 120 mJ @ 1 measurement
  • Fully calibrated and linearized
  • Digital interface UART or I2C

From these specifications, notice how the SCD30 is specified for operation in the sub-10,000 ppm range, comes with an accuracy of roughly ±30 ppm, and has temperature/humidity compensation on board: perfect for indoor CO2 level monitoring.

Interfacing

The SCD30 can be interfaced in a few ways: you can use either I2C or UART (with the Modbus protocol). These interfaces are handy for adjusting configuration options such as the sampling interval, temperature offset, self-calibration and many more. Those who would like to operate it without any of these data interfaces can also use the PWM mode. Once the SCD30 has been configured using either I2C or Modbus, you can get the sensor value by evaluating the signal on the PWM pin. The benefit here is that you only need one pin to interface the SCD30; the configuration can happen during manufacturing. The downside is that you’re less flexible in how you use the sensor, plus you’ll be limited to reading CO2 levels only.

Calibration

Due to how NDIR sensors work, they’re delicate and subject to mechanical stress, shocks, heating and other environmental influences. This implies that sensor values may show serious deviations over time. Because of that, the SCD30 requires calibration in order to keep the sensor value within spec. Sensirion states that you can expect a typical annual drift of around ±80 ppm when no calibration is performed. There is no hard recommendation on when calibration should be performed, because the re-calibration interval depends on your required accuracy. Since for indoor usage we’ll mostly be measuring in the 400-1000 ppm range, where an annual deviation of 80 ppm is significant, I’d suggest that in our case calibration should happen at least twice a year.

There are two ways of calibrating the SCD30: Forced Re-Calibration (FRC) and Automatic Re-Calibration (ARC). During both the forced and the automatic calibration process the same kind of reference value is set. The reference value is used internally to adjust the calibration curve, which restores the sensor accuracy. The way the sensor output value is manipulated and corrected is always the same; the way the reference value is obtained, however, depends on the calibration method. Once the reference value is set it is stored in non-volatile memory and will persist until a new reference value is set.

With Forced Re-Calibration (FRC) the user has to provide the reference value manually using the I2C or Modbus interface. It is crucial to provide a good reference value. You can either use a second, calibrated sensor, expose the sensor to a controlled environment with a stable and known CO2 level, or expose the sensor to fresh outside air (≈400 ppm). Keep in mind that the supplied calibration value needs to be between 400 and 2000 ppm and that the sensor must have been operating for at least 2 minutes in “continuous mode”. More on that mode later on.
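For the I2C route, the interface description lists FRC as command 0x5204 with the reference ppm as argument, where each 16-bit data word on the bus is followed by a Sensirion CRC-8 (polynomial 0x31, init 0xFF). The sketch below only builds the byte frame; the actual bus write (e.g. via smbus2 to address 0x61) is left out, so treat it as an illustration of the frame layout rather than a ready-made driver:

```python
def sensirion_crc8(data: bytes) -> int:
    """CRC-8 as used by Sensirion sensors (poly 0x31, init 0xFF).

    The datasheet's own check value: CRC(0xBE, 0xEF) == 0x92.
    """
    crc = 0xFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x31) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def frc_frame(reference_ppm: int) -> bytes:
    """Build the I2C payload for Forced Re-Calibration (command 0x5204)."""
    if not 400 <= reference_ppm <= 2000:
        raise ValueError("FRC reference must be between 400 and 2000 ppm")
    arg = reference_ppm.to_bytes(2, "big")          # 16-bit big-endian argument
    return bytes([0x52, 0x04]) + arg + bytes([sensirion_crc8(arg)])
```

For example, frc_frame(400) yields the command word, the argument 0x0190 and its CRC byte, ready to be written to the sensor.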

With Automatic Re-Calibration (ARC) the sensor generates the reference calibration value itself by monitoring and analyzing the CO2 levels it measures. The algorithm focuses on measuring the lowest CO2 level multiple times, which it can then use for calibration. The upside is that your firmware doesn’t need to perform the calibration process; the downside is that the sensor has to regularly see the CO2 level of fresh outdoor air (≈400 ppm). According to the datasheet this means that it needs to see “fresh air” for at least 1 hour a day. Inside buildings this can be achieved by ventilating the room well whenever humans are not present. It also implies that the sensor is operated in “continuous mode” all the time. Furthermore, when using the sensor for the first time, it needs roughly 7 days before reaching its calibration value. And note that the sensor has to be powered continuously, which may have a big impact on battery life if that is your source of power.

Modus operandi

The Sensirion SCD30 can operate in “continuous operation“. In this mode the sensor will automatically sample itself at a user-defined interval. The interval can be set through the command interface, and the chip will raise its data-ready pin whenever data is ready to be read. In between samples the chip’s power consumption is reduced, so you may want to adjust the sampling rate according to your needs. This part is further discussed near the end of this article. The benefit of continuous mode is that it can optionally handle calibration automatically through the ARC process. Altogether this means that the SCD30, once set up, only requires an external chip to read out the data whenever it is available, which is very handy from a programmer’s point of view. That aside, you can also skip ARC and run forced re-calibration manually while the sensor is still collecting data in continuous mode. After power-cycling the sensor, it will automatically resume continuous mode if that is how it was set up. Keep in mind that continuous mode needs 1-2 minutes for the readings to stabilize.

If you want, you can also stop continuous operation. The documentation isn’t exactly clear on what this mode is called and how the sensor then behaves. Through Sensirion support I came to understand that when continuous operation is stopped, the sensor value is no longer updated. You’d need to start continuous mode again to capture new sensor values. Unfortunately, stopping continuous mode doesn’t deactivate the detectors, so it will not reduce the power usage. Altogether there is little reason to deactivate continuous operation, which is also why Sensirion advises against it.

Integrating the sensor into Home Assistant using the ESP32 and ESPHome

I don’t think Home Assistant needs any introduction here; it’s a very popular option for building your own free, open-source domotics and automation system. The ESP32 is very well known too: its powerful dual-core processor and integrated WiFi chip allow for easy interfacing within your home network. ESPHome consists of two things: a firmware that covers all sorts of sensors, which you can integrate using a simple YAML file without writing a single line of code, and a Home Assistant add-on that lets you manage your ESP32 WiFi nodes and their configuration. What makes ESPHome so handy is that it already handles our SCD30 sensor, so only minor configuration of the firmware settings needs to be done. Once the firmware is deployed, the sensor will automatically become available in Home Assistant.

By default the sensor samples every 60 seconds. The sample rate can easily be adjusted using the update_interval setting. By default the SCD30 also runs in continuous mode and performs ARC (auto-calibration). For a description of all sensor configuration options, look here.

Here is how I’ve configured the ESPHome firmware for building the wireless CO2 sensor:

esphome:
  name: air-quality-sensor-test
  platform: ESP32
  board: esp32dev

# Enable logging
logger:

# Enable Home Assistant API
api:

ota:
  password: "*******************************"

wifi:
  ssid: "telenet-5A11733"
  password: "********"

  # Enable fallback hotspot (captive portal) in case wifi connection fails
  ap:
    ssid: "Air-Quality-Sensor-Test"
    password: "********"

captive_portal:


i2c:
  sda: 21
  scl: 22
  scan: True
  id: bus_a
  
sensor:
  - platform: scd30
    co2:
      name: "Slaapkamer CO2"
      accuracy_decimals: 1
    temperature:
      name: "Slaapkamer Temperature"
      accuracy_decimals: 2
    humidity:
      name: "Slaapkamer Humidity"
      accuracy_decimals: 1
    address: 0x61
    i2c_id: bus_a
    update_interval: 120s

The first time you flash the ESP32 you need to do so using the ESPHome-Flasher utility and a UART-to-USB converter. See below for a screenshot of the utility in action.

Afterwards, the ESPHome firmware and Home Assistant integration are able to perform firmware updates automatically. Note that firmware re-configuration, for example to adjust the sampling rate, actually requires recompiling the firmware and redeploying it to the ESP32. That’s where the HA add-on for ESPHome comes in handy: it performs these steps automatically for you. All you need to do is adjust the YAML configuration and hit “save“ and “install“.

Wiring the sensor is not complicated at all and takes only 4 wires as you can see below. For a pinout of the ESP32 DevKit I’m using I’d suggest visiting the circuits4you webpage.

Now power up the ESP32 and SCD30 sensor. The device should automatically report new sensor values in Home Assistant. Here is a capture of the sensor in Home Assistant:

Making it truly wireless

While we’ve already achieved our goal, the one thing still keeping us from a truly wireless solution is that we need to keep it powered all the time using a 5V cellphone charger. This got me wondering how it would perform when running from batteries. I noticed the LilyGO T-Energy module combines the ESP32 with a socket and charging circuitry for 18650 lithium batteries. This board is an excellent candidate for any battery-powered ESPHome sensor, since it provides all the components you need for battery operation: you only need to hook up the sensor and set up ESPHome to handle it.

Here is how I got it wired up:

There is nothing particularly different from how I wired the SCD30 to the ESP32 DevKit earlier; the GPIOs for I2C operation are the same, they’re just laid out differently. The LilyGO T-Energy also comes with a battery voltage feedback circuit routed to GPIO35, which allows you to monitor the battery. This will certainly come in handy during my little experiment.

At this point I’ve only slightly adjusted the configuration to support battery voltage monitoring, and I’ve also added extra status feedback functionality on the blue “user” LED at GPIO5. Since the T-Energy board doesn’t have a power LED (remember, it’s focused on low power usage; you don’t want a LED draining your batteries), I thought this might come in handy as visual feedback in case something goes wrong.

esphome:
  name: wireless-air-quality-sensor
  platform: ESP32
  board: esp-wrover-kit

# Enable logging
logger:

# Enable Home Assistant API
api:

ota:
  password: "******************************"

wifi:
  ssid: "telenet-5A11733"
  password: "*******"

  # Enable fallback hotspot (captive portal) in case wifi connection fails
  ap:
    ssid: "Wireless-Air-Quality-Sensor"
    password: "************"

captive_portal:


status_led:
  pin: GPIO5
  id: blue_led

  
i2c:
  sda: 21
  scl: 22
  scan: True
  id: bus_a
        

sensor:
  # battery
  - platform: adc
    pin: GPIO35
    name: "Wireless CO2 sensor battery voltage"
    update_interval: 60s
    attenuation: 11db
    filters:
      - multiply: 1.73
    
  # CO2 sensor
  - platform: scd30
    co2:
      name: "Slaapkamer CO2"
      accuracy_decimals: 1
    temperature:
      name: "Slaapkamer Temperature"
      accuracy_decimals: 2
    humidity:
      name: "Slaapkamer Humidity"
      accuracy_decimals: 1
    address: 0x61
    i2c_id: bus_a
    update_interval: 120s
    temperature_offset: 1.5 °C

I’m not naïve enough to believe the result will end up being a good solution. Both the SCD30 sensor and the ESP32 with all its power circuitry are fully alive and draining the battery with tens of milliamps continuously. But it’s a starting point from which we can improve. The test I’ve performed involves fully charging a PKCELL 3.7V ICR18650 2600mAh lithium battery and then disconnecting the mains power so that the T-Energy board runs entirely on its own power source. We then leave the device running until it runs out of battery power. Here are the test results:

  • Battery voltage @ start: 4.12V
  • Battery voltage @ end: 2.64V
  • Discharge time: 42 hours 25 minutes

As expected the battery is drained pretty quickly: we’re running out of juice in less than 2 days! Because I’d added the battery monitoring sensor I noticed the device kept running until the battery reached 2.64V. Many people consider discharging this deep harmful, and it is generally suggested to protect the battery against it. When examining the discharge curve in the image below we can conclude that there is indeed a tipping point around 3.2V: if you cross that point by draining more energy, the battery very quickly goes from “okay to work with” to “flat out dead”. As it seems to me there isn’t much use in allowing the battery to go below that 3.2V level; you certainly don’t want to risk damaging the battery for those few minutes of extra lifetime.

One other thing we can estimate here is the average current draw of our device. I haven’t used a real measuring device, so it’s an estimation based upon the battery’s capacity and the time it took us to use all of it. Basically we used the 2600mAh capacity in a period of just over 42 hours, so we divide 2600 by 42.5 and get the current that is drawn continuously:

  • Estimated average current draw: ~61mA
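For the curious, here is that back-of-the-envelope estimate in Python. Note it assumes the full rated capacity was usable, which is optimistic at these discharge currents, so treat the result as a rough figure:

```python
# Rough average current estimate: rated capacity divided by observed runtime.
def average_current_ma(capacity_mah: float, runtime_hours: float) -> float:
    return capacity_mah / runtime_hours

runtime = 42 + 25 / 60  # 42 hours 25 minutes
print(round(average_current_ma(2600, runtime)))  # → 61
```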

While estimations are never exact, this test clearly shows that the device isn’t performing well on batteries. As I expected earlier, keeping the entire device alive draws far too much energy for a battery powered solution. Some tweaking is required to reduce those figures.

Lowering the power drain for better battery operation

The Sensirion SCD30 is made up of 3 main components: a microprocessor, an IR emitter, and an IR detector. This is particularly interesting since all components need to be taken into account when looking to lower the total power usage. Sensirion states that when the sensor is running in continuous mode, the sampling rate makes a big impact on the power consumption. During sampling all 3 main components need to be powered and hence the power usage is high. In between samples, however, the IR emitter and microprocessor are not used and will not draw any current.

Given that, raising the sampling rate will increase the total power consumption, and lowering it will reduce that. So to obtain better battery performance the quickest win on the sensor’s side is to decrease the sample rate.

However, the response time changes along with it: higher sample rates reduce the response time. But why is that response time so important? The response time describes how quickly a sudden change in CO2 level is reflected in the sensor readout. For example, when the CO2 level changes from 4000 to 6000 ppm you’ll be able to read that value within 40 seconds when using a 2-5 second sampling interval. When you increase the sampling interval to 60 seconds you may have to wait several minutes before the sensor reflects the actual CO2 level. You could see it as sensing latency. Here is a chart covering how both need to be taken into account when choosing the sampling rate:

One important thing to note here is that setting the sampling interval to anything larger than 15 seconds will not make a big impact on average power consumption, due to parts of the sensor still being powered. The minimal current draw is 5mA, which is not very impressive compared to the sleep modes that various other sensors and microcontrollers can achieve. If you’re satisfied with an average power consumption of 5-10mA you may want to use the SCD30’s RDY pin to wake up your main application processor whenever data is ready for readout. The RDY pin is active low, which means that when data is ready the voltage on the pin measures 0V. Compared to the estimated power usage we saw in our battery test earlier, this may result in a considerable increase in battery lifetime. I’ve been experimenting with this but found that the end result using the ESPHome firmware wasn’t working out that smoothly, since the RDY pin wasn’t behaving as expected.

UPDATE: later I found out that the ESPHome firmware wasn’t using the SCD30’s data ready register and “set measurement interval” command to retrieve data. Instead ESPHome used a software timer, which may or may not run in sync with the SCD30’s measurement interval. When both timers are out of sync the RDY pin toggles on and off at an unpredictable rate and the pin behavior becomes unusable for our purpose. I’ve made a pull request to ensure that ESPHome no longer relies on its internal timer but instead uses the SCD30’s measurement interval alone, let’s hope it gets merged… UPDATE: the pull request was merged in the development branch and will soon be part of ESPHome. With that modified firmware I’ve now repeated the above battery test. I’ve also set up the ESPHome deep sleep component, which puts the ESP32 to sleep soon after an SCD30 sample has been collected. The ESP32 awakens automatically after 108s using a wakeup timer, which gives it enough time to set up its connection to Home Assistant (over WiFi) before the next sample (at a 120s interval) is about to be collected. Here is the part of the configuration that I’ve changed:

sensor:
  # battery
  - platform: adc
    pin: GPIO35
    name: "Wireless CO2 sensor battery voltage"
    update_interval: 60s
    attenuation: 11db
    filters:
      - multiply: 1.73
    
  # CO2 sensor
  - platform: scd30
    co2:
      name: "Slaapkamer CO2"
      accuracy_decimals: 1
      on_value:
        then:
          - if:
              condition:
                api.connected
              then:
                - delay: 2s
                - deep_sleep.enter: deep_sleep_esp32
    temperature:
      name: "Slaapkamer Temperature"
      accuracy_decimals: 2
    humidity:
      name: "Slaapkamer Humidity"
      accuracy_decimals: 1
    address: 0x61
    i2c_id: bus_a
    update_interval: 120s
    temperature_offset: 1.5 °C

# power saving mode
deep_sleep:
  id: deep_sleep_esp32
  run_duration: 5min
  sleep_duration: 108s
  wakeup_pin: 
    number: GPIO32
    inverted: true

Here are the test results:

  • Battery voltage @ start: 4.10V
  • Battery voltage @ end: 2.67V
  • Discharge time: 138 hours

With the ESP32 asleep most of the time and the SCD30 now literally sampling far less often than in our previous setup, we see a big improvement in battery lifetime. The discharge time improved more than threefold. The estimated average current draw of our device is therefore greatly reduced:

  • Estimated average current draw: ~19mA
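The same capacity-over-runtime estimate as before, plus the implied improvement factor over the first test:

```python
# Rough average current estimate: rated capacity divided by observed runtime.
def average_current_ma(capacity_mah: float, runtime_hours: float) -> float:
    return capacity_mah / runtime_hours

print(round(average_current_ma(2600, 138)))  # → 19 (mA)
print(round(138 / 42.5, 1))                  # → 3.2 (times longer runtime)
```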

This is still far from acceptable for a battery powered solution and I feel there is still headroom for further improvements. For example, it doesn’t take very long to connect to Home Assistant over WiFi; the 12s margin I used was chosen to leave some headroom for those occasions where connecting is a bit slower. Furthermore I also found out that the SCD30’s internal timing is not very accurate and may deliver a sample multiple seconds later than expected, so in effect the ESP32 stays awake far longer than needed. Taking smaller margins may turn out well for you, and further increasing the measurement interval may also have a positive impact on battery life.

As an alternative way to reduce power consumption even further I’ve been thinking of switching the power to the SCD30 entirely. If you leave it configured for continuous operation (as advised) the sensor should automatically restart sampling at its configured sampling interval as soon as power is re-applied. One side effect of cutting the power is that automatic self-calibration (ASC) can’t be used anymore, so the ESPHome firmware would need to handle that somehow. Another thing to take into account is that the sensor takes 1-2 minutes to stabilize its readings. The latter is the biggest show-stopper of all, since it requires keeping the sensor powered for a considerably large amount of time.

Say you’re set to collect CO2 levels every 3 minutes in Home Assistant; power cycling the sensor then requires you to wait 2 minutes before the sensor values reach acceptable quality. This leaves us 1 minute in which the sensor can be completely switched off. The average current drawn during these 3 minutes is 2 x 6.5mA / 3 = 4.3mA. In effect you can reduce the power consumption only by a small amount (compared to your sleeping ESP32) while needing to set up various automations to get it working. You can sleep even more, but know that the longer it takes for values to reach Home Assistant, the longer it takes for automations to trigger when the CO2 level reaches critical values.

What we really should be doing is keeping the sensor and ESP32 asleep most of the time. In our case we would want them active for only 5-10 seconds at most. Doing that, the average current consumption (for the SCD30) could be further reduced to (6.5mA / 6) / 3 = 0.361mA, which is roughly 20x better than keeping the sensor powered all the time. Note that this is highly hypothetical; for now I haven’t found a solution to reach those values using ESPHome.
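To make the arithmetic above a bit easier to play with, here is the duty-cycle calculation as a small Python helper (the 6.5mA figure is the sensor’s typical active current used in the paragraph above):

```python
# Average current when the SCD30 is only powered for part of each cycle.
def avg_sensor_current_ma(on_seconds: float, cycle_seconds: float,
                          active_ma: float = 6.5) -> float:
    return active_ma * on_seconds / cycle_seconds

# Power-cycling: 2 minutes on (for stabilization) out of a 3 minute cycle.
print(round(avg_sensor_current_ma(120, 180), 1))  # → 4.3
# Hypothetical ideal: only 10 seconds on out of the same 3 minute cycle.
print(round(avg_sensor_current_ma(10, 180), 3))   # → 0.361
```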

While Sensirion recommends waiting 1 to 2 minutes before using sensor data, I was curious how bad the results could actually be. So I set up a little experiment where I put the CO2 sensor in an isolated environment with the ESP32 hooked up to it. Then I power cycled the device and watched how the reported CO2 values changed over time, even though the actual CO2 level wasn’t changing.

The ESPHome firmware retrieves the sensor data and hands it over to my Home Assistant setup. In HA I can then easily read the data and plot it using my office suite of choice. Below is a chart of the sensor data. It includes the CO2 level in parts per million, and the temperature in degrees Celsius.

From this chart you can easily spot that the first value coming from the sensor is not very accurate. The second sample, collected 6s after boot, is far closer to the final value, but still not very accurate. From there on things get more trustworthy. After 15 seconds we’re getting close; if you can live with some deviation this could be your sweet spot. If you want a little more accuracy you should wait a little longer: after 45 seconds the sample values have more or less stabilized. However, if you really want to go by the book, 1-2 minutes will provide the most accurate data. Also notice how the temperature slightly increases throughout the measurements. This could be due to internal heating of the sensor, but it could also be measuring the heat dissipated by the ESP32 sitting close to it. In the end the temperature and humidity data (the latter not shown in the above chart) is very trustworthy right from the moment the sensor gets powered.

With all that in mind, if you settle for a 15 second wakeup interval (and the SCD30 sampling at 2s) combined with some smart ESPHome automations, you could maybe be looking at an average current consumption of around 0.5mA or more (roughly guessed). That’s not particularly low and far from power efficient, but if you power it from a single rechargeable 3.7V lithium cell with a capacity of 2600mAh, you’d be able to run it for 5200 hours, which is about 216 days. That’s not taking into account any other losses caused by the ESP32, power regulators, etc. Wild guess: basically you’d be recharging every 6 months… You may want to add some extra circuitry (or use a LilyGO T-Energy) to measure the battery voltage so that you can monitor that part of your device as well, and set up automations that send an alert when the battery voltage drops too low. Note again that all of this is highly hypothetical, and not exactly what the SCD30 is designed for.
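That runtime guess as a quick calculation, again assuming the very rough 0.5mA average and ignoring regulator losses and battery self-discharge:

```python
# Battery life estimate: capacity divided by average current, in days.
def runtime_days(capacity_mah: float, avg_current_ma: float) -> float:
    hours = capacity_mah / avg_current_ma
    return hours / 24

print(round(runtime_days(2600, 0.5), 1))  # → 216.7
```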

Conclusive thoughts

The Sensirion SCD30 is a great sensor for measuring CO2 levels and integrating into your Home Assistant setup. It comes at a relatively high price compared to some of the cheaper (but not true CO2) sensors out there, but in return you get very good quality and good support. I can highly recommend the sensor. If you’re looking for a battery powered solution, however, the SCD30 may not be your preferred partner. It consumes a decent amount of power even when you follow the design rules. Through some smart hacking you may be able to squeeze out better battery performance that may even last more than a month on a single charge, but don’t expect to run it throughout the year unless you’re packing it with a big battery pack or solar cells.

Benchmarking 10y old tech versus new tech

In this article I’d like to compare technology from 2011 against the offerings released this year. Roughly 10 years ago I obtained a Dell XPS 15 (L502x), which was quite a good buy given its price. From what I can remember my particular model was sold at around €1000 and offered a decent (but not yet Apple-like) design, a very good FullHD LCD, very good JBL speakers, a good keyboard, plenty of connectivity, and on the performance side everything you’d expect from a unit sold in this segment: a Core i5, a large HDD, DDR3, and an entry-level GeForce GPU. Throughout the years I had to perform some changes and upgrades to keep up with ever evolving tech. For example I completely ditched the Windows OS (which was still at version 7 back then) for Ubuntu Linux. On the hardware level I moved from an HDD to an SSD, and I upgraded the DDR3 RAM from 4GB 1333MHz to 16GB 1600MHz. Oh, and I’m already on the 3rd battery in this one.

But now, 10 years later, things start to get a little laggy. Most long running tasks such as compiling are still okay, but the web has also evolved and I’ve noticed that on that front my system regularly starts to heat up and spin its fans because it is trying to run some kind of animation or is loading a bunch of JavaScript files. Many times I’ve been very critical of how the web is evolving, because it isn’t always going in the right direction. Maybe it’s not good to generalize here, but while I’m typing this article my CPU is at a temperature of around 60°C and the sound of the fans regularly spinning up is getting quite annoying. So while my system is still okay for most of the things I use it for, I’m starting to get the idea that with newer tech things may be a bit more comfortable. It also doesn’t help that the battery life has once again largely degraded and that the power plug has seen many accidents since my little kid came into my life.

I’ve been thinking of an upgrade for some months now. I must say that AMD’s push on the mobile front with Zen 2 and Zen 3 really got my attention. So when I recently noticed some shiny new Lenovo Legion 7 laptops were available (availability is kind of an issue nowadays) I didn’t hesitate to order one. But as I came to discover that Linux support on these things was far from great, I concluded that I’m not really satisfied with the new machine. Nonetheless the Legion 7 is really fast and I did take it through some tests just to see how PC hardware has progressed over those 10 years.

Here is how they both line up:

Device   | Dell XPS 15 (L502X)                   | Lenovo Legion 7 16ACHg6
CPU      | Intel Core i5-2410M (2C 4T @ 2.9GHz)  | AMD Ryzen 7 5800H (8C 16T @ 3.2GHz)
Memory   | 16GB DDR3 1600MHz                     | 32GB DDR4 3200MHz
Disk     | 250GB Samsung 850 SSD + 500GB WD HDD  | 1TB SK Hynix SSD
Graphics | Intel HD3000 + NVIDIA GeForce GT 525M | NVIDIA GeForce RTX 3060
LCD      | 1920×1080                             | 2560×1600
OS       | Ubuntu 20.04 LTS                      | Windows 10 Home (19042)

Back in 2011 the trend was to have 2-4 cores and 4GB to 8GB of RAM, where nowadays you’ll mostly find 4-6 cores or even more in the same segment, combined with 8-16GB of RAM. The speed of RAM has also doubled (in data rate, though not in latency), and while disk capacity hasn’t changed that much, storage did get a whole lot faster because SSDs are now much lower priced than back then. On the GPU side of things do note that the Lenovo is targeted at the gaming market, so it isn’t really a fair comparison. But performance-wise, even mid-range GPUs from nowadays will easily double the performance of mid-range GPUs from 2011.

Some benchmarks are up next. I’ve used the Phoronix Test Suite as a means of measuring both systems while running completely different OSes. You can find the results here:
https://openbenchmarking.org/result/2105088-HA-2105076HA73
https://openbenchmarking.org/result/2105086-HA-2105074HA41
I must apologize for not running more tests; I didn’t have that much time left for evaluation.

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python’s average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

In PyBench the per-core performance of the Ryzen isn’t really that great compared to the Intel Core i5. But note that we’re running on different OSes here; there is probably much to gain when running these benchmarks on Linux. However, once we toss in the multi-core scores the Ryzen takes back the lead, even knowing it’s running on a far from ideal software stack.

Git

This test measures the time needed to carry out some sample Git operations on an example, static repository that happens to be a copy of the GNOME GTK tool-kit repository. Learn more via the OpenBenchmarking.org test page.

The Git benchmark shows quite similar results. Moving to the Ryzen 7 roughly doubles performance, but note that the per-core performance is really weak, probably due to poor OS support.

BLAKE2

This is a benchmark of BLAKE2 using the blake2s binary. BLAKE2 is a high-performance crypto alternative to MD5 and SHA-2/3. Learn more via the OpenBenchmarking.org test page.

(Chart: BLAKE2 20170307, cycles per byte, fewer is better. AMD Ryzen 7 5800H: 4.50; Dell XPS 15: 4.98.)

The BLAKE2 benchmark shows even better how the Windows platform is far from ideal. At full speed the Ryzen 7 system is barely faster than the 10 year old tech.

CacheBench

This is a performance test of CacheBench, which is part of LLCbench. CacheBench is designed to test memory and cache bandwidth performance. Learn more via the OpenBenchmarking.org test page.

In CacheBench we also notice that the Core i5 performs very well given its age: the single core performance is great! However, once we go multicore the AMD Ryzen 7 easily takes the lead and outperforms the older Intel CPU by far.

Stockfish

This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

Stockfish really shows AMD’s potential on the mobile market. Even the single core scores are roughly doubled, while the multicore scores show a real massacre.

C-Ray

This is a test of C-Ray, a simple raytracer designed to test the floating-point CPU performance. This test is multi-threaded (16 threads per core), will shoot 8 rays per pixel for anti-aliasing, and will generate a 1600 x 1200 image. Learn more via the OpenBenchmarking.org test page.

The massacre continues with C-Ray… Here the single core performance is also 2-3 times better than the Core i5’s, while the multicore scores show an improvement of over 10 times!

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC format five times. Learn more via the OpenBenchmarking.org test page.

FLAC audio encoding shows another big win for the AMD team, although this time the gain is far less impressive than in the C-Ray and Stockfish tests.

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.

The results of the LAME encoding test again show a big gain from upgrading to AMD Zen 3. While the single core performance already shows a nice speedup, it becomes total carnage when the full power of the Ryzen processor is unleashed.

As a conclusion I think we can easily judge that while hardware gains over a single processor generation may not always show great improvements, things really accumulate over the years and pay off when reviewed over a larger period of time (such as 10 years). The 2021 AMD Ryzen 7 processor really outperformed the 2011 Core i5 by far in most tests.
Do note that some other tests showed far smaller speed improvements, and in some tests the Core i5 even outperforms the new-tech Ryzen CPU. Probably this is because the test is more IO bound, or because the test runs far less well on Windows. Either way this clearly shows that in situations where the test is not really CPU/GPU bound the XPS isn’t that bad either. From my perspective I’m pleased with the performance upgrade, however the lacking support for Ubuntu Linux does disappoint and may, at least for me, be the deciding factor in whether I keep this laptop or not.