The Lenovo Legion 7 is a laptop specifically designed for the gaming market. It packs a beefy AMD Ryzen 7 processor with an NVIDIA GeForce RTX card and up to 32GB of DDR4 RAM. It combines a decent aluminium housing with shiny RGB LEDs, which places this laptop somewhere between the flashier category of gaming laptops such as those from Alienware and MSI, and more subtly designed ones such as the Lenovo IdeaPad. The RGB LEDs are user controllable; Lenovo ships some cool animations by default which will certainly draw the attention of everyone in the room with you. Those who, like me, prefer a more subtle appearance can simply turn the RGBs off. With that, and thanks to the larger venting grills, this portable suddenly looks much more like a high-end workstation.
The specific model I have here is the Lenovo Legion 7 16ACHg6. It has the following specs:
With a target price of around €1800 that’s a lot of computing power for less than 2k euro, and also less than many other so-called workstations. By default it ships with Windows 10 Home, which is probably the best option for those planning to use it as a gaming machine. My goal, however, is to use it for compiling various C/C++ projects, Linux kernels, embedded system images and so forth. While Windows can also do that job, I’ve become more of a Linux fanatic over the years, so I decided to give Ubuntu a spin.
This device was officially announced only a few weeks back (March 2021) and is, so to speak, still arriving at the stores near you. Availability may be troublesome, so when I saw one in stock I decided to get it without hesitating.
Onto Ubuntu. I mostly favor the LTS releases because of their stability. However, with hardware this shiny and new my hopes weren’t high that everything would work out well, so I opted for the recently released Ubuntu 21.04. From what I can tell so far the OS runs very smoothly. However, I did find some glitches that are probably related to the recently introduced Wayland compositor. I’m using NVIDIA’s proprietary graphics drivers and animations run butter smooth, but of course, with the NVIDIA RTX GPU that was to be expected. WiFi, keyboard, USB, touchpad and camera all work out of the box. At this stage I’ve bumped into two major problems. One is that the display brightness cannot be controlled. It’s fixed at 100%, which is far from ideal in late-evening hacking sessions. As it appears from a topic on askubuntu, it seems to be related to a BIOS issue. The Linux ACPI driver is not able to find the [\_SB.PCI0.GP17.VGA.LCD._BCM.AFN7] symbol; for some reason the BIOS does not define it, hence Linux cannot use it, leaving the backlight uncontrollable.
Audio playback is not working well either, at least not when the speakers are the output device. When you plug in your earbuds or use Bluetooth everything plays fine. Lenovo is using the Realtek ALC3306 audio codec. The kernel enablement can be found in /sound/pci/hda/patch_realtek.c. There are topics on GitHub and bugzilla.kernel.org that cover this issue on similar laptops. According to Jaroslav Kysela, Lenovo is using amplifier chips for the integrated speakers on recent hardware, and these must be initialized too. Much of that is undocumented.
My conclusion: the Legion 7 is a very decent machine with great value for the money. I’d advise keeping the OS at Windows 10. Linux fanatics had better stay away from this machine: on Linux we hit major problems such as backlight control and audio out through the speakers that are not yet being addressed.
We’re now roughly four months past the day that the Raspberry Pi Foundation launched the highly anticipated Compute Module 4. Some carrier boards were already in development at launch, so now may be a good time to look back and check their availability, plus look at any other great boards that are in development or may have hit the scene.
Of course there is always the official carrier board of the Raspberry Pi Foundation. It offers great connectivity at a very low price. Unsurprisingly, it’s also one of the most popular boards out there.
The highly anticipated Raspberry Pi 400 features a CM4 module within a keyboard housing, making it a full-blown desktop that takes up virtually no space on your desk.
This board comes in two flavors, one with a Google Coral AI TPU, and one without. The board is not a real carrier board but acts more as a converter to the CM3 DIMM connector.
A very decently packed dev board which exposes the traditional RPi pins, plus dual Pi camera connectors, HDMI out, touchscreen connectors, a PCIe M.2 connector, USB, USB device, a reset button, a user button, Gigabit Ethernet and console over USB.
This board is specially designed for computer vision fanatics. It’s focused on small size, while still offering all the goodies you’d wish for in object recognition and other sorts of vision and AI applications. It features Power over Ethernet and a Google Edge TPU ML chip.
With a focus on rovers and robotics, this board features dual Raspberry Pi camera connectors, serial console over USB, USB Type-C power delivery, an STMicro STM32H753 MCU, Pixhawk GPS, analog power, RC and CAN connectors, 8 PWM outputs, an accelerometer, magnetometer, gyroscope and barometer, and a Google Edge TPU.
This board focuses on compact dual-camera use cases. It comes with dual Pi camera connectors, Gigabit Ethernet, 2x USB, USB-C, a power switch, micro HDMI, microSD, and various status LEDs.
The Modberry CM4 comes in three flavors: mini, standard, and max. The mini version offers 2x Ethernet, USB, RTC, 4 digital inputs, 4 digital outputs, 1-wire, and one RS232/485 port. Standard adds another RS232/485 port, 4 extra DIOs, 1x PCIe, and optionally HDMI. The max version adds another PCIe, 4x AD converters, and optionally CAN and HDMI. Furthermore, Techbase also has a wide range of extension boards.
Aiming for clustering use cases, the Turing Pi offers up to 4 CM4 slots, all handled by an internal layer 2 switch. There is also room for 2x M.2 PCIe SSDs, 2x Ethernet, HDMI, audio, 4x USB, DSI, I2C, fan headers and 2 SATA3 ports.
Like the Turing Pi 2, the ClusBerry CM4 is focused on clustering applications. The ClusBerry 9500-CM4 supports up to 8 cluster modules, and each such module serves a different purpose. The standard cluster module packs a CM4 module with an I/O controller (DI, DO, 1-wire, RS232/485, CAN), wired/wireless communication (1/2x Ethernet, serial ports, LTE Cat M1, 4G, 5G, LoRa, ZigBee, Z-Wave, Wireless M-Bus) and an AI gateway (Google Coral AI). Other modules throw in more features such as a NAS file server (2x/4x SATA3 and RAID), a USB 3.0 hub, a Gigabit LAN/WLAN router, SuperCap power management (a sort of UPS) and more expansion boards with DIO, AIO, serial ports, sensors, etc.
A ready-to-go device that, aside from the CM4, also features a Google Coral AI processor. Its housing is industrial grade and DIN-rail ready. It also has supercap backup support.
The Pi-oT 2 is focused on automation use cases. Its nicely finished housing offers access to 4x 24V digital inputs, 6x 50V 500mA digital outputs (open collector), 8-channel analog inputs, an RS-485 port and Ethernet. The housing also includes a LiFePO4 battery pack based UPS that’s able to run the Pi for up to 2 hours! While technically not using a CM4 module, it well could have been.
While many other boards focus on industrial use cases, clustering or computer vision, this Wiretrustee board focuses mostly on providing a solid storage experience. It features 1x Gigabit Ethernet, HDMI, 2x USB 2.0, microSD, USB-C and up to 4x SATA (incl. power)!
This carrier board brings the CM4 into roughly the same form factor as the normal Raspberry Pi 4. Feature-wise there is nothing special here, though it is probably the cheapest carrier board out there at the moment.
The goal of this board is to finally bring the Raspberry Pi into the micro-ATX computing form factor. It should allow people to use all sorts of mini-desktop housings for their favorite ARM-based processor board.
This little board is a bit like the PiTray, but also comes with an M.2 connector on the rear so that you can pack it with your M.2 SSD of choice. It also comes with a Google Coral chip, and it has Arduino UNO R3 headers instead of the normal Pi headers.
This industrial-grade board features all sorts of I/O that you’d expect from an industrial (edge) device. Advertised as the “compatibility king”, the board supports flexible power input (7.5-28V), an M.2 connector for SSD, LTE, GPS and Coral modules etc., 3 USB-A ports, 1x Gigabit Ethernet (PoE), the 40-pin Raspberry Pi GPIO header, HDMI (full-size), MIPI camera and display ports, a microSD slot and USB-C (OTG).
Another board targeting the industrial market, the CM Hunter features isolated CAN, RS485 and 1-wire buses, unlike most other boards we’ve previously discussed. They also offer an RTC, a 10A relay, an on-board fan connector, support for an SPI touch display and their own branch of Raspbian.
The SeeedStudio board is the first to feature dual Gigabit Ethernet ports, using the Microchip LAN7800 USB 3.0 to Gigabit Ethernet controller. Combined with its compact size, micro HDMI, MIPI ports and USB power, this makes it well suited for building a software LAN switch, a compact media center, or for use in one of your camera projects.
This is another board that features dual Ethernet ports, one 1000Mbps, the other 100Mbps. That aside, the board is also very feature-full. It comes with dual USB 2.0 ports, HDMI, USB-C, USB OTG, a SIM slot and optional voice support. The board can be delivered with a 4G LTE modem of choice, and can even be bought with a nice industrial-looking housing. All together this combo makes it a nice product that should help you dive into the realms of wireless LTE connectivity. The only downside, at this moment, is the board’s availability.
This CM4-based device is still very much in development; hence it was merely released as a teaser for Pi Day. For now OnLogic seems to target industrial customers with its DIN-rail housing, yet from its looks so far it may also find its way into other places and use cases. The device will feature dual Gigabit Ethernet, 1x RS232/422/485, 1x micro USB OTG, 1x micro HDMI, 3 USB ports (of which one is USB 3.1) and M.2 SATA storage.
The CutiePi tablet is an off-the-shelf solution for open-source Linux tablets, and now features the more powerful CM4 compute unit. The CutiePi combines the CM4 Lite (BLE/WiFi) with a 1280×800 8″ LCD, a 5MP rear-facing camera, USB-A and USB-C, micro HDMI, microSD and a 5000mAh Li-Po battery. The OS is unsurprisingly Raspberry Pi OS, but features a custom Qt-based shell.
The list so far is pretty impressive considering all the different use cases that are being covered, and more boards are added each month. Availability still needs to improve, but with the CM4 product launch just a few months behind us it’s already very pleasing to see where all the creativity is taking us. A big thumbs up to the various people out there making this all come true. PS: if any board is missing from my list, please let me know, I’d be happy to add it!
I’ve spent some time building my own Linux distro using Yocto, and now I’ve come to the point where I want to update my devices remotely. For this purpose there are a few solutions available, such as swupdate, Mender, RAUC, OSTree etc.
My choice fell on swupdate since it’s more of a framework rather than an end-to-end solution (like Mender). It should allow us to do our own stuff more easily while still relying on some of the implementations that are already inside the framework. Aside from that, OSTree also looks very promising on paper, but it is too far removed from my current solution and would probably require a bigger overhaul. Enter swupdate.
Swupdate and Yocto
Adding swupdate to your Yocto build is as easy as downloading the meta-swupdate sources and adding the meta layer to your bblayers.conf. Well, that’s the theory… Although the docs claim that you should be fine using u-boot 16.05, and mine was 17.03, bitbaking failed because of some missing function calls that are needed to write to the u-boot environment. For that functionality swupdate relies on u-boot-fw-utils. More recently they also started offering an alternative called libubootenv. The problem with libubootenv is that it was not yet introduced in the Yocto Rocko (2.4) branch that I’m on; only the more recent branches of meta-swupdate contain a recipe for using libubootenv as an alternative to u-boot-fw-utils. I tried the Zeus (3.0) branch, made sure to set the PREFERRED_PROVIDER to libubootenv, and made sure that all temporary build files from the u-boot-fw-utils recipe were deleted (important!). Now everything was bitbaking fine. After creating a new image I booted my target device and swupdate was working, plus it was also hosting its “Mongoose” update website on port 8080.
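For reference, the provider switch boils down to a single line in your build configuration. This is a sketch; the variable follows the usual PREFERRED_PROVIDER pattern, so double-check the exact recipe name against your meta-swupdate branch:

```
# conf/local.conf (or your distro .conf)
PREFERRED_PROVIDER_u-boot-fw-utils = "libubootenv"
```

After changing the provider, remember the note above about wiping the stale u-boot-fw-utils build files before bitbaking again.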
I also had little trouble creating a valid cpio archive that contains the update manifest and artifacts. For example, I could make sure that the updater checks the board’s hardware compatibility and deploys the rootfs to my partition of choice. After having experimented a bit I found that swupdate does fine in parsing the update manifest, fetching artifacts, and deploying the stuff that we want. But other questions arise: how can we have a rollback mechanism when things go wrong? And can we do a rollback automatically for our devices in the field? How can we reduce the downtime during the upgrade? What we want to avoid are scenarios such as with the Windows Update system, which takes an endless amount of time during reboot to perform its tasks, rendering the device useless for far too long.
Dual rootfs with rollback in u-boot
What we want is something like the following… One rootfs partition (A) is active and executing, the other one (B) is used for the update. When a new update arrives it goes into B, while the rootfs in A stays active. After reboot, B becomes the active rootfs and A can be used for updates. If anything goes wrong during the update to B, we should still be able to load A because it was working fine for us previously. Et voilà, we’ve got ourselves a dual rootfs with a rollback mechanism.
For our embedded device the bootloader determines which rootfs (A or B) is loaded. The u-boot bootloader relies on environment variables to select which partition contains the rootfs of our Linux system. The rootfs partition is passed to the kernel as a kernel argument. Swupdate has support for updating such u-boot environment variables from Linux userspace, though it doesn’t offer a fully working dual rootfs with rollback mechanism by itself. The swupdate docs give a high-level overview of how you could implement this yourself, but for anything bootloader related they refer to the u-boot docs. Before we dive into that, make sure you have partitioned your device to include 2 root filesystems. I created the following partitions on my target device:
/dev/mmcblk2 (emmc device, 16Gb)
/dev/mmcblk2p1: boot (fat32, 32Mb)
/dev/mmcblk2p2: rootfs1 (ext4, 2Gb)
/dev/mmcblk2p3: rootfs2 (ext4, 2Gb)
/dev/mmcblk2p4: data (ext4, 10Gb)
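If you still need to create such a layout, it can be expressed as an sfdisk script. This is a sketch assuming an MBR (dos) label; adapt the sizes and device node to your own hardware:

```
# layout.sfdisk — apply with: sfdisk /dev/mmcblk2 < layout.sfdisk
label: dos
size=32MiB, type=c     # boot (FAT32)
size=2GiB,  type=83    # rootfs1 (ext4)
size=2GiB,  type=83    # rootfs2 (ext4)
size=,      type=83    # data (remaining space, ~10GB here)
```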
Next up is adding support in u-boot for changing the active rootfs partition. The bootcmd is executed by u-boot when going from the bootloader stage to the kernel init stage. U-boot also tells the kernel on which device it can find the rootfs; it’s passed as a kernel argument using the bootargs variable. For example it could say:
bootargs root=/dev/mmcblk2p2 rdinit=/bin/kinit rw single
Editing this variable makes the kernel look for the rootfs in some other place. For example, with the modification below the rootfs will be loaded from the third partition instead of the second:
bootargs root=/dev/mmcblk2p3 rdinit=/bin/kinit rw single
In this case it’s easier to store the rootfs partition in a variable of its own, so that when we update the bootargs we don’t discard any other modifications to it:
rootfspart 3
bootargs root=/dev/mmcblk2p${rootfspart} rdinit=/bin/kinit rw single
We can alter the variable either inside u-boot using the setenv command, or from Linux userspace using the fw_setenv tool provided by libubootenv (a binary-compatible u-boot-fw-utils alternative). Swupdate will need to set the correct rootfs partition using fw_setenv after it has successfully deployed a rootfs update. Upon next boot, u-boot will pick up the updated variable and switch to the new rootfs.
However, when things go wrong and we’re unable to enter Linux userspace using that new rootfs, we want some mechanism to detect these kinds of errors. U-boot comes with bootcount and bootlimit support, but in many cases you still need to enable it before you can start using it. The support is added at compile time: in the u-boot source code, search for the header file that configures your board; it’s found under the include/configs directory. Add:
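A sketch of what that looks like (the two option names are taken from the description below; check your u-boot version’s docs for the exact spelling):

```c
/* include/configs/<your-board>.h */
#define CONFIG_BOOTCOUNT_LIMIT  /* enable the bootcount/bootlimit mechanism */
#define CONFIG_BOOTCOUNT_ENV    /* store the bootcount variable in the u-boot environment */
```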
CONFIG_BOOTCOUNT_LIMIT adds support for a bootcount variable. CONFIG_BOOTCOUNT_ENV makes sure that the bootcount variable is stored in the u-boot environment (uenv), so that its value is not discarded after a reboot. Each time the system is reset (not power cycled!) the bootcount variable increments and its updated value is stored in the uenv. We can compare the bootcount to a bootlimit variable and use that to swap rootfs partitions. The actual comparison is already taken care of in u-boot; you only need to set up the bootlimit variable (for example: setenv bootlimit 5), otherwise the boot counter is ignored by u-boot. If the bootlimit is reached, u-boot runs the altbootcmd instead of the usual bootcmd. Altbootcmd is not defined by default in u-boot; that you also have to do yourself. One use case is that altbootcmd can make sure that the rootfspart variable I introduced earlier is swapped between 2 and 3, and then call the normal boot command (bootcmd). Another thing to take care of is that Linux userspace will also need to reset the persistently stored bootcount variable at each boot, to prevent the bootlimit from being reached while our system is doing fine.
One more thing about the bootcount variable: it is write protected by another variable called upgrade_available. The latter, when not set, prevents u-boot from actually writing the incremented bootcount variable to the u-boot environment. Hence, bootcount won’t increment as long as upgrade_available is unset. This was introduced to avoid writing to the uenv at each boot, reducing wear and any issues that could occur due to power loss while writing. In Linux userspace you should therefore also check the upgrade_available variable before resetting the bootcount.
In the end, what swupdate needs to do after it has deployed its artifacts is make sure that the upgrade_available variable is set, which enables the boot counter upon next reboot. If all goes well, the new rootfs boots into Linux and some script unsets the upgrade_available variable and resets the bootcount. However, if things go wrong, the bootcount increases and the system keeps resetting until the bootlimit is reached. Now we roll back into the working rootfs from which we started the upgrade. That same script will verify that all is OK, unset the upgrade_available variable and reset the bootcount. The device should also notify the end customer that the update failed. At the next boot the device will keep booting into the “old” and stable rootfs. The user will have to apply a new update after investigating why the previous one failed.
For all of this to work we need to edit the CONFIG_EXTRA_ENV_SETTINGS statement in the u-boot sources. It’s found in the same file where you set CONFIG_BOOTCOUNT_LIMIT. Add the following lines:
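The exact contents are board specific; a sketch matching the description below could look like this (the setrootfs helper name is my own invention, and the env must be merged with whatever your board already defines):

```c
/* include/configs/<your-board>.h — sketch; merge with your board's existing env */
#define CONFIG_EXTRA_ENV_SETTINGS \
	"bootlimit=5\0" \
	"rootfspart=2\0" \
	"setrootfs=setenv bootargs root=/dev/mmcblk2p${rootfspart} " \
		"rdinit=/bin/kinit rw single\0" \
	"altbootcmd=if test ${rootfspart} = 2; then setenv rootfspart 3; " \
		"else setenv rootfspart 2; fi; saveenv; run bootcmd\0"
```

With a setup like this, your bootcmd should run setrootfs before loading the kernel, so the bootargs are rebuilt from rootfspart at every boot.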
The modifications set the bootlimit to 5, and set the default rootfs partition to 2. The altbootcmd makes sure we can switch partitions during rollback, and the modified bootargs assures that the rootfs partition is taken from a uenv variable.
Rollback in action
With that integrated into our bootloader we can start testing the rollback feature. Update your SD card/eMMC image and run it on your device. It should boot as always using the bootcmd variable and load the rootfs from partition 2. At this stage, partition 3 is still empty. Once you’re in Linux, check the uenv using fw_printenv. You should see the newly added bootcount and related variables. If that’s not the case, make sure to reset u-boot to its default variable values. Next we’re going to enable the boot counter, so execute:
$ fw_setenv upgrade_available 1
Note that we haven’t implemented any script yet that resets the upgrade_available and bootcount variables. So by sending a reboot command we will see the boot counter incrementing, much like situations where a watchdog would kick in whenever loading the rootfs hangs. Now reboot the system from u-boot all the way up to Linux and back using the reboot command, and repeat until the bootlimit is reached. At that point you’ll see some extra debug lines during the bootloader stage explaining that the altbootcmd is used:
Warning: Bootlimit (3) exceeded. Using altbootcmd.
Hit any key to stop autoboot: 0
Saving Environment to MMC...
Writing to MMC(0)... done
WARN: rollback RootFS to /dev/mmcblk2p3
Furthermore, since partition 3 (/dev/mmcblk2p3) is still empty, your Linux should now also fail to boot due to the missing rootfs. In the boot log you’ll see a kernel panic:
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(179,10)
To recover from this you can simply go back into the u-boot shell and set the rootfspart variable back to 2. Though, this is also a good moment to install a secondary rootfs in partition 3 to test whether you can successfully start the updated rootfs. I’m not covering that here, but I’m expecting you did that.
Preventing rollback in sunny-day scenarios
The next step is to make sure that, once your updated Linux OS is up and running, a script is executed that disables the boot counter. I won’t go too much into detail here, but it could be as easy as having the shell script below executed through your init system of choice:
#!/bin/sh
# Always check if the upgrade_available var is set
# to reduce write cycles to the uenv.
ISUPGRADING=$(fw_printenv upgrade_available | awk -F'=' '{print $2}')
echo "upgrade_available=$ISUPGRADING"
if [ -z "$ISUPGRADING" ]
then
    echo "No RootFs update pending"
else
    echo "RootFs update pending, verifying system"
    # Perform extra checks here.
    # If anything went wrong, reboot again until the bootlimit is reached,
    # which triggers a rollback of the RootFs.
    fw_setenv upgrade_available
    fw_setenv bootcount 0
fi
You may have higher demands when verifying whether the system is running well, such as assuring that your application is running. Or maybe you want to assure that your internet connection is up, or that your device is able to notify the remote update server of your OS version and such. I leave that up to you…
Watching kernel panics
From what we noticed earlier, sometimes things go wrong and our rootfs fails to load, hence a kernel panic is triggered. For testing purposes you may also wipe one of your partitions: wipefs -a -t ext4 -f /dev/mmcblk2p3. It will trigger that same kernel panic we saw earlier. Unfortunately this locks our device into a failed state and a manual reset needs to be performed. Sometimes that may be desirable, but in many cases you’ll want the show to go on. There are some ways to make the device autoreboot when such scenarios occur. Some may want to use an (external) watchdog to catch such errors, but I found that using the kernel’s panic reset system is a very easy way to get similar behavior. This kernel feature makes sure that whenever a kernel panic occurs the system is reset. One way to set this up is feeding the following kernel argument in u-boot:
panic=5
It will trigger a reset 5 seconds after a kernel panic occurred:
No filesystem could mount root, tried: ext3 ext2 ext4 vfat
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(179,10)
CPU0: stopping
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.9.88-9512b3d443a53afbc8c7c18894249f78b62cc324+g9512b3d #1
...
Rebooting in 5 seconds..
U-Boot SPL 2017.03-c94efdc139f6a6c193aaf77f171a01d09686451c+gc94efdc (Jul 14 2020 - 09:46:33)
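The panic argument itself can simply be appended to the bootargs we defined earlier, for example:

```
bootargs root=/dev/mmcblk2p${rootfspart} rdinit=/bin/kinit rw single panic=5
```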
Integrating the u-boot environment in Swupdate
Then there’s also the swupdate manifest, or as they call it: the sw-description.
software =
{
version = "2.3.0";
mylinuxboard = {
hardware-compatibility: [ "1.0" ];
rootfs1: {
images: (
{
filename = "rootfs.ext4.gz";
compressed = "zlib";
installed-directly = true;
device = "/dev/mmcblk2p2";
}
);
bootenv: (
{
name = "rootfspart";
value = "2";
},
{
name = "upgrade_available";
value = "1";
}
);
scripts: (
{
filename = "resizeRootsfs.sh";
type = "postinstall";
data = "2";
}
);
}
rootfs2: {
images: (
{
filename = "rootfs.ext4.gz";
compressed = "zlib";
installed-directly = true;
device = "/dev/mmcblk2p3";
}
);
bootenv: (
{
name = "rootfspart";
value = "3";
},
{
name = "upgrade_available";
value = "1";
}
);
scripts: (
{
filename = "resizeRootsfs.sh";
type = "postinstall";
data = "3";
}
);
}
}
}
This manifest describes the software infrastructure and is used by swupdate to update parts of your system. In our case it defines that our software collection contains stuff specially made for the “mylinuxboard” target with revision “1.0”. It has 2 sub-collections that define the updates for the rootfs’es on partitions 2 and 3. The 2 sub-collections each contain an images part, which handles the actual copying of the compressed rootfs into the target partition. They also contain a bootenv part, which describes the bootloader variables to update; in our case it defines the u-boot uenv variables to set using fw_setenv (more or less). So what we do here is not only make sure that the rootfs is deployed into the correct partition; we also enable the u-boot boot counter (through upgrade_available) and set the target partition that we want to use after reboot, so that the newly updated rootfs is used.
We can now create the update archive that contains the sw-description and all files that need to be deployed. From Yocto you can create a recipe to do that, but we can also do it from the command line using the following script:
#!/bin/bash
CONTAINER_VER="1.0.0"
PRODUCT_NAME="my-software"
FILES="sw-description \
       resizeRootsfs.sh \
       rootfs.ext4.gz"

for i in $FILES; do
    echo $i
done | cpio -ov -H crc > ${PRODUCT_NAME}_${CONTAINER_VER}.swu
We can now execute swupdate using the .swu archive we just created:
Swupdate v2019.11.0
Licensed under GPLv2. See source distribution for detailed copyright notices.
Running on mylinuxboard Revision 1.0
Registered handlers:
dummy
uboot
bootloader
flash
lua
raw
rawfile
rawcopy
shellscript
preinstall
postinstall
software set: mylinuxboard mode: rootfs2
[TRACE] : SWUPDATE running : [network_initializer] : Main loop Daemon
[TRACE] : SWUPDATE running : [extract_sw_description] : Found file:
filename sw-description
size 2018
checksum 0x1b90d VERIFIED
[TRACE] : SWUPDATE running : [listener_create] : creating socket at /tmp/sockinstctrl
[TRACE] : SWUPDATE running : [listener_create] : creating socket at /tmp/swupdateprog
[TRACE] : SWUPDATE running : [get_common_fields] : Version 2.3.0
[TRACE] : SWUPDATE running : [parse_hw_compatibility] : Accepted Hw Revision : 1.0
[TRACE] : SWUPDATE running : [parse_images] : Found compressed Image: rootfs.ext4.gz in device : /dev/mmcblk2p3 for handler raw
[TRACE] : SWUPDATE running : [parse_bootloader] : Bootloader var: upgrade_available = 1
[TRACE] : SWUPDATE running : [parse_bootloader] : Bootloader var: rootfspart = 3
[TRACE] : SWUPDATE running : [check_hw_compatibility] : Hardware mylinuxboard Revision: 1.0
[TRACE] : SWUPDATE running : [check_hw_compatibility] : Hardware compatibility verified
[TRACE] : SWUPDATE running : [cpio_scan] : Found file:
filename resizeRootsfs.sh
size 568
REQUIRED
[TRACE] : SWUPDATE running : [cpio_scan] : Found file:
filename rootfs.ext4.gz
size 239585335
REQUIRED
[TRACE] : SWUPDATE running : [install_single_image] : Found installer for stream rootfs.ext4.gz raw
mmcblk2: p1 p2 p3 p4
-----------------------
| RESIZING ROOTFS |
-----------------------
Using /dev/mmcblk2p2
e2fsck 1.43.5 (04-Aug-2017)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mmcblk2p2: 43400/304160 files (0.1% non-contiguous), 240702/304128 blocks
resize2fs 1.43.5 (04-Aug-2017)
Resizing the filesystem on /dev/mmcblk2p2 to 524288 (4k) blocks.
The filesystem on /dev/mmcblk2p2 is now 524288 (4k) blocks long.
[TRACE] : SWUPDATE running : [execute_shell_script] : Calling shell script /tmp/scripts/resizeRootsfs.sh 2: return with 0
Software updated successfully
Please reboot the device to start the new software
[INFO ] : SWUPDATE successful !
mmcblk2: p1 p2 p3 p4
Making it more robust
The above solution is a great start for most projects. However, if you want to make it robust and production-proof, there are some more things you could do:
Don’t store the u-boot boot counter in the u-boot environment. U-boot also supports storing it in RAM, an RTC, etc. That reduces write cycles, but more importantly it’s a safer way of updating the boot counter when a power loss occurs.
Use a dual u-boot environment. If you have only one, a power loss while updating the uenv could have catastrophic results.
Have a dual boot partition. It will allow you to safely update your dtb and kernel in the same manner as the rootfs is updated.
Sign your artifacts. It assures that the distributor of the updates can be trusted, so that we can take for granted that our update server is our own server and not someone else’s.
Set up a watchdog that resets the device whenever boot issues occur, for example when the rootfs cannot be found.
Secure your firmware storage server so that your firmware can only be downloaded by your software and no one else.
With the Corona COVID-19 virus among us I’ve moved my development setup from work to my place at home. My working hours have also been cut roughly in half, which leaves some time to experiment with the hardware I normally only have available at work. One of our development devices features an STM32F429 chip, an SSD1309 OLED, and some sensors. All together it’s a very good candidate to run some custom demos that I’ve been wanting to try for some time now.
I’ve been aware of LittlevGL for some time now, but since we don’t have any project running which requires a GUI library I never looked any further into the matter. Last summer I saw a demo of Qt for MCUs and was astonished by the outcome. It looked very good but unfortunately also requires quite a beefy MCU. Fast forward to the beginning of this year: Elektor magazine had an interview with LittlevGL author Gábor Kiss, which certainly brought up some sympathy and respect for what this guy (and the many contributors) have been building lately.
So, I wanted to get something hands-on and started looking into the matter. I was pleased to find that LittlevGL has already been ported to the mbed-os that I’m targeting. Furthermore, I also found some other blog posts such as “Porting LittlevGL for a monochrome display” and “LittlevGL on a Monochrome OLED” to further guide me in getting the bits working.
The target display is a monochrome 128×64 pixel OLED, which is a far cry from the many other popular embedded displays. Anyhow, LittlevGL has a monochrome theme, so that shouldn’t stop you from using it. Here is the result:
The hardest part of making your port work is grasping how to translate LittlevGL’s inner workings to your display driver. For example, LittlevGL draws/updates only the part of the screen that has changed. It can also work with a framebuffer that’s smaller than your actual display size. Once you understand such features, and once you’ve figured out how to drive your display, writing the LittlevGL implementation isn’t too hard. LittlevGL has decent documentation and is much lower on resources compared to its Qt counterpart. I can definitely recommend it for any of your MCU projects that require a GUI. It’s also very fast, which resulted in very smooth animations.
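To give an idea of the kind of translation involved: the SSD1309 is page addressed, with each byte covering 8 vertically stacked pixels. A set-pixel helper like the one below (a standalone sketch, not LittlevGL’s actual API) captures the (x, y) to (page, bit) mapping your flush/set-pixel callback has to perform:

```c
#include <stdint.h>
#include <stdbool.h>

#define DISP_W 128
#define DISP_H 64

/* Framebuffer in SSD1309 page layout: 8 pages of 128 bytes each,
 * one byte covering 8 vertically stacked pixels. */
static uint8_t fb[DISP_W * DISP_H / 8];

/* Set or clear one pixel in the page-organized buffer. */
static void ssd1309_set_px(int x, int y, bool on)
{
    if (x < 0 || x >= DISP_W || y < 0 || y >= DISP_H)
        return;
    uint8_t *byte = &fb[(y / 8) * DISP_W + x];
    if (on)
        *byte |= (uint8_t)(1u << (y % 8));
    else
        *byte &= (uint8_t)~(1u << (y % 8));
}
```

In a real port this logic ends up in the display driver callback you register with LittlevGL, while the flush callback only transfers the pages that fall within the updated area.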
Sad news came in yesterday: world-famous mathematician John Conway passed away after being infected with COVID-19. As a tribute, here is my implementation of his Game of Life. It’s running on a development board made by my company Alphatronics, which features an STM32F429 and an SSD1309 OLED.
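The video doesn’t show any code, so purely as an illustration, one generation of Conway’s rules on a small wrapping grid can be sketched like this (this is not the code running on the board; grid size and names are mine):

```c
#include <stdint.h>

#define GW 8  /* grid width  (illustrative size) */
#define GH 8  /* grid height */

/* Compute one Game of Life generation on a toroidal (wrapping) grid.
 * Conway's rules: a live cell survives with 2 or 3 live neighbours;
 * a dead cell becomes alive with exactly 3. */
static void life_step(const uint8_t in[GH][GW], uint8_t out[GH][GW])
{
    for (int y = 0; y < GH; y++) {
        for (int x = 0; x < GW; x++) {
            int n = 0;  /* live neighbour count */
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++) {
                    if (dx == 0 && dy == 0)
                        continue;
                    n += in[(y + dy + GH) % GH][(x + dx + GW) % GW];
                }
            out[y][x] = (uint8_t)((n == 3) || (in[y][x] && n == 2));
        }
    }
}
```

On the OLED, each cell then simply maps to a pixel (or block of pixels) redrawn every frame.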
If you’ve ever wondered how much time you spend on your emails, you’ve come to the right spot.
There are some commercial packages available. However, basic functionality can be achieved easily with just a few lines of VBA!
First we need to make sure the Developer tab is available:
Hit “File“, and select the “Options” menu item.
In the Options window, make sure the Developer option is checked:
Outlook will now show the Developer tab, which allows us to add our own VBA scripts. To install the script, navigate as follows…
Go to the Developer tab and open the Visual Basic window:
Edit the “ThisOutlookSession” object:
Paste in the tracking script (available as a GitHub gist, linked at the end of this post):
Restart Outlook. Now each mail you read, edit, or create will be tracked. Every time you close a mail you’ll get a summary telling you how much time you’ve spent on that mail and on the Outlook session in total:
Feel free to fork my GitHub gist and add extra functionality!
Back in 2011 the Dell XPS 15 L502x offered a lot for its price. The build quality was very good and it also looked quite appealing next to its competitors at the time. On the performance side there were plenty of options to customize the XPS to your needs, including a discrete GeForce GPU, 8GB of RAM, an SSD, an i7, …
I started with a more budget-friendly configuration: a traditional HDD, an Intel i5-2410M, 4GB of RAM and Windows 7. Over the years my demand for higher-performance hardware has led me to replace the HDD with an SSD and upgrade the RAM to 8GB. I’ve also dropped Windows entirely in favor of Ubuntu.
I’ve now come to a point where I’m doing most of my work in Linux, and for that the current hardware is still decent enough to pull it all together. But once in a while you bump into software that is only supported on Windows or macOS, so like many of you I need to run some software in a virtual machine. For me that is VirtualBox; it’s not perfect, but frankly I don’t need it all that often, so I can perfectly live with it.
Since Windows 10 is quite the memory hog, I found that running it inside a VM would sometimes leave me running out of RAM. As a result paging would kick in and performance would drop tremendously. As a quick and relatively cheap fix (at least compared to buying a new machine) I decided to upgrade my RAM to 16GB, since decent kits can be found for roughly €80.
What I had:
4GB Adata RAM at 1333MHz + 4GB Corsair ValueSelect RAM at 1066MHz
What I’ve upgraded to:
2 × 8GB Corsair Vengeance RAM at 1600MHz
Note that this configuration is not officially supported by Dell, because at the time this laptop was sold 8GB DRAM sticks didn’t exist yet.
To check whether the upgrade was worth it, I ran a couple of tests using the automated Phoronix benchmark suite. Here are the results when running natively under Ubuntu 18.04 LTS:
We notice an overall system performance improvement from the RAM upgrade, though the difference is in most cases rather small, given that we never utilize more than 8GB of memory; any gain can only come from the lower-latency RAM. As for the negative GIMP results, I suspect that’s an anomaly in my tests, so take those with a grain of salt.
More interesting is what happens inside the VM, since this is where we may run into problems. After the upgrade I was able to raise the virtual DRAM size from 5GB to 8GB. Note that the Intel HD 3000 is used as the GPU; it also uses system DRAM, which makes it compete with the CPU for memory access. I also suggest not comparing these results against the native Linux runs, since the Windows 10 VM runs from an HDD instead. Here are the results when running the benchmark inside the Windows 10 virtual machine hosted by Ubuntu 18.04 LTS:
We can clearly see a bigger improvement here, as expected. The main point of the article is that our Windows 10 VM can now take up to 8GB of RAM, which at least gives us enough headroom to run some memory-hungry applications.
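For the record, raising the VM’s allocation is a single call to VirtualBox’s command-line tool; a sketch (the VM name “Win10” is a placeholder for whatever your machine is called):

```shell
# Give the VM 8 GB of RAM; "Win10" is a placeholder VM name.
# The VM must be powered off before changing its settings.
VBoxManage modifyvm "Win10" --memory 8192
```

The same change can of course be made in the VirtualBox GUI under the machine’s System settings.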
Although I missed most of the hype around the original Doom game back in the ’90s, I did get to play it at a friend’s place. But it was only when I started programming, after reading Masters of Doom, that I picked it up again.
When I started working on an embedded Linux device based on the i.MX6 processor last year, the idea grew to compile Doom for our custom Linux-based OS as some sort of Easter egg. Unfortunately the world is real, deadlines are always too short, and I had to let go of the idea. More recently, however, some of our dev boards had to be archived, so I took the opportunity to bring one home for a short period of time and finally settle this once and for all.
One way to get it working is to set up a cross-compilation toolchain and cross-compile one of the many source ports of the Doom engine. Another would be to properly integrate it into the build of our custom Linux OS. Since we’re using Yocto to build our image, the idea was to create a separate meta-layer that includes everything you need. You can find the meta-layer at github.com/geoffrey-vl/meta-doom.
Initially I started integrating the PrBoom engine, but I found that its out-of-tree build wasn’t working so well, and I bumped into some other issues as well. I had more luck with chocolate-doom, which is better maintained. Chocolate-doom only recently switched over to the SDL2 library, so to be on the safe side I went with the latest version that still runs on SDL 1. The game engine also requires libsdl-net, which is currently not available in the official Yocto repos. Luck was on my side when I bumped into a working libsdl-net recipe through a Google search.
With the engine compiling happily, I stumbled upon licensing issues. You have to own the game (and its game data, the WAD files), so I couldn’t distribute anything playable unless users copied their own WAD files to our embedded system. Luckily there is the Freedoom project, an open-source implementation of the Doom game data. I also found a working recipe for Freedoom, and moments later my workstation produced a ready-to-play open-source implementation of the immensely popular Doom game.
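Wiring the meta-layer into an existing Yocto build boils down to cloning it and registering it; a sketch (directory layout and recipe names are assumptions, so adjust them to your own setup):

```shell
# Sketch: adding meta-doom to an existing Yocto build.
# Paths and recipe names below are assumptions, not verified specifics.
cd ~/yocto/sources
git clone https://github.com/geoffrey-vl/meta-doom.git
cd ../build
bitbake-layers add-layer ../sources/meta-doom

# Pull the engine and the free game data into the image,
# then rebuild it.
echo 'IMAGE_INSTALL_append = " chocolate-doom freedoom"' >> conf/local.conf
bitbake core-image-minimal
```

From there the image contains everything needed to launch the game on the target.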
Just for kicks I also loaded my own WAD files; here is the result:
From past experience with the Freescale Community BSP, I recently wondered whether the same tooling had already been set up for one of the most popular community-targeted boards around: the Raspberry Pi. And it seems it had not!
So I went ahead and put it together. You can now easily initialize, synchronize and get your builds running by using Google’s Repo tool in conjunction with Yocto.
The repo can be found below together with instructions:
At the end of the commands you have all the metadata you need to start working with. The source code is checked out at rpi-community-bsp/sources. You can use any directory to host your build; personally I’m using rpi-community-bsp/build as the build folder.
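The typical Repo workflow looks like this (a sketch; `<manifest-url>` stands for the manifest repository linked in this post, which I’m deliberately not spelling out here):

```shell
# Fetch Google's repo tool (one-time setup).
mkdir -p ~/bin
curl -o ~/bin/repo https://storage.googleapis.com/git-repo-downloads/repo
chmod a+x ~/bin/repo
export PATH=~/bin:$PATH

# Initialize and sync the community BSP.
# <manifest-url> is the manifest repository from the link above.
mkdir rpi-community-bsp && cd rpi-community-bsp
repo init -u <manifest-url>
repo sync
```

After `repo sync` completes, the layers end up under rpi-community-bsp/sources, ready for a Yocto build.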