After seeing @geerlingguy's video about 10" racks, I knew that at some point I'd have to build a tiny rack – and now it's finally my turn!
My small LAN party (30–50 guests) in Germany was my main homelabbing project, and over-engineering was my passion. We're talking about four Dell R640 servers clustered together with Proxmox and Ceph. Sadly, at the last event, two servers died, so it was time for something new. Since I didn't want to lift those heavy servers anymore, the 10" rack project was exactly what I needed: no more basement setup and no more back pain.
Defining the goal
First, I had to define my goals. They were as follows:
Small and lightweight
Dedicated machines for critical network systems like firewall, DNS, and DHCP
One compute machine to provide game servers and LANCache
Just plug in power and the network uplink and be ready to go
Reduce complexity by removing things like Ceph and Proxmox clustering
Planning the rack
After defining my goals and figuring out which parts I could reuse and which I needed to buy, the research began.
Parts list
Buyable parts
SilverStone SST-FX350-G
Schuko – 2× C5 Cable
PWM Splitter – 1× 4-Pin PWM to 3× PWM
3D-printed parts
10" Rack 5.25" Drive Adapters: https://www.thingiverse.com/thing:6859441
Rack layout
After gathering the parts, I started planning the rack layout. Especially with the 20 cm depth, I had to plan the old-fashioned way:
This is the layout I'm currently planning to use:
Let's see if the bottom shelf with the PSU and PDU will work out – theoretically, it should.
Part usage
U1: MikroTik CRS305-1G-4S+IN
This little switch is perfect for my setup: VLAN support, 10 Gbit/s capability, and a modular design. For mounting, I found a 3D model that I'll try to print in HT-PLA, since it can get quite warm and even PETG might not be ideal.
The switch can get hot, so I decided to print a 40 mm fan holder for the front shelf.
The 3D print was a perfect fit:
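Since heat is the whole concern here, I also keep an eye on the switch temperature remotely. Here's a minimal sketch of how that could look over SSH with paramiko – the address and credentials are placeholders, not my actual setup:

```python
# Minimal sketch: poll the CRS305's board temperature over SSH.
# Assumptions: paramiko is installed, SSH is enabled on the switch,
# and 192.168.88.1 / "admin" are placeholder credentials.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("192.168.88.1", username="admin", password="changeme")

# "/system health print" shows the temperature on RouterOS devices with sensors
_, stdout, _ = client.exec_command("/system health print")
print(stdout.read().decode())
client.close()
```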
U2–U3: ThinkCentre M75q
These machines will be the hypervisors for the critical network infrastructure and will host pfSense, DNS, and DHCP. Everything will be redundant, so if one Tiny fails, the other hypervisor keeps running and users won't notice.
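Before trusting the redundancy at an event, I want to see a failover with my own eyes. A rough smoke test is to ping the firewall's shared virtual IP once per second while rebooting one node – a sketch, assuming a Linux ping binary and 192.168.1.1 as a placeholder VIP:

```python
# Rough failover smoke test: ping the shared virtual IP every second and
# log gaps. 192.168.1.1 is a placeholder; run this while rebooting one node.
# Stop with Ctrl+C.
import subprocess
import time

VIP = "192.168.1.1"  # assumed virtual IP

while True:
    # one ICMP echo with a 1-second timeout (Linux ping flags)
    ok = subprocess.run(
        ["ping", "-c", "1", "-W", "1", VIP],
        stdout=subprocess.DEVNULL,
    ).returncode == 0
    print(f"{time.strftime('%H:%M:%S')}  {'up' if ok else 'DOWN'}")
    time.sleep(1)
```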
pfSense uses CARP for its HA system. To avoid weird behavior in case of a switch failure, I wanted a dedicated link between the two nodes. For that, I needed to install a second NIC. AMD-based Lenovo systems don't support a PCIe riser card, so I installed an adapter that converts the M.2 E slot (where the Wi-Fi card normally sits) into an RJ45 port:
After receiving the cards, I tried installing one next to the DisplayPort connector – normal motherboard screws worked perfectly. Unfortunately, the frame was slightly too short and had to be cut away. For the first functional test, I installed it like this:
Luckily, the card was detected and usable right away.
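If you want to double-check the detection yourself, listing the PCI bus is enough; a tiny sketch (requires the standard lspci tool from pciutils on a Linux host):

```python
# List Ethernet devices on the PCI(e) bus to confirm the M.2 NIC shows up.
import subprocess

out = subprocess.run(["lspci"], capture_output=True, text=True).stdout
for line in out.splitlines():
    if "Ethernet" in line:
        print(line)
```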
Cutting metal away
As mentioned, I wanted to have the DisplayPort connector and the 2.5 Gbit/s NIC side by side. And as you can see, this little piece of metal was in the way:
This meant cutting off that part to get a perfect fit again:
Bad fit for the 2.5 Gbit/s card
Sadly, the card is positioned in a way that the RJ45 connector doesn't lock properly, which means the cable can be pulled out quite easily.
Frying the motherboard
Well, I learned a painful lesson: tape the internal Wi-Fi antenna! From one day to the next, the Tiny didn't boot at all. The motherboard was most likely shorted and completely unresponsive. Luckily, Lenovo warranty stepped in and swapped the motherboard. Now the node is booting again!
U4: ThinkCentre M720q
This machine will serve as a backup server and Proxmox Datacenter Manager. Since this is an Intel-based system, I can attach a 10 Gbit/s NIC. The problem: this NIC gets quite hot, and there's no airflow in that part of the case. Luckily, this designer created a solution to prevent overheating.
First print and learnings
After seeing the STL, I felt confident and printed it in PLA to test the fit. And guess what… it didn't fit.
First, the fan sat too close to the edge and collided with the case:
Because of that, the case couldn't be closed. So I edited the STL and moved the fan holes a bit downward – successfully:
Final print and fit
The final print was done shortly after:
Printing was easy and the result looks fantastic. I only noticed that I had cut away too much of the baffle, which caused a small break – but it doesn't matter, as the baffle is still stable.
In the end, this was the finished result:
Stability issues
After installing Proxmox and deploying the Proxmox Backup Server VM, the SSD stopped working at 8% progress – no I/O at all. After a reboot, I tried again, and once more it got stuck at 8%. It looked like the SSD was overheating, so I limited the transfer speed to 100 MiB/s – and that worked like a charm. New thermal pads were ordered to see if I could improve heat dissipation for the SSD.
Results: The thermal pads helped a lot. For some reason, the temperature dropped by 20 °C! Deploying the backup twice to the drive worked without any issues.
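To keep watching the SSD (and NIC) temperatures over time, the kernel's hwmon sensors are enough – a small read-only sketch that dumps whatever sensors a Linux host exposes under /sys/class/hwmon:

```python
# Dump all hwmon temperature sensors (NVMe SSDs, some NICs, CPU) on Linux.
# Read-only; temp*_input values are millidegrees Celsius per sysfs convention.
from pathlib import Path

for hwmon in sorted(Path("/sys/class/hwmon").glob("hwmon*")):
    name = (hwmon / "name").read_text().strip()
    for temp in sorted(hwmon.glob("temp*_input")):
        print(f"{name:12s} {temp.name}: {int(temp.read_text()) / 1000:.1f} °C")
```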
U5: SSD bay
To have enough "disposable" storage (and because I still had these SSDs lying around), I decided to buy a simple 4× 2.5" bay for a 5.25" slot and 3D-print a suitable bracket for it.
First print
Well… this was a fail. Sadly, the model was not big enough for the drive bay, and I had to destroy the print because I couldn't get the drive bay back out of it.
I'm now printing a different model – this one should work 🤣
U6–U7: Mini-ITX system
Choosing the right components
This system has two purposes: acting as a LANCache and providing enough compute power to host the local WordPress site, game servers, etc. That means it needs at least one 10 Gbit/s network interface. Checking the market wasn't easy. I could either buy an AM4 board with two integrated 10 Gbit/s NICs and enough SATA ports for over €500 – but on an outdated platform – or buy something more modern.
Looking at current sockets and chipsets, I searched for Mini-ITX boards with four SATA ports. Guess what: only one AM5 board and two LGA1851 boards exist. Everything else only has two SATA ports and no 10 Gbit/s NIC, leaving no option to add both an HBA and a 10 Gbit/s NIC. Using an M.2 slot as a PCIe slot wasn't an option, because I want the OS on a ZFS mirror. Motherboards with three M.2 slots were even more expensive 😢
So the choice was clear: a Mini-ITX board with four SATA ports – either AMD or Intel. Specs and prices were nearly identical. The Intel Ultra 7 255 has more "cores" than the Ryzen 9 7900, but the P/E-core design and Intel's frequent socket changes didn't appeal to me. So I went with AMD.
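Since the whole point of this box is the LANCache, a quick sanity check later is to resolve one of the cached CDN hostnames and see whether it points at the cache. A minimal sketch – the hostname and cache IP below are assumptions for illustration:

```python
# LANCache sanity check: a cached CDN hostname should resolve to the cache box.
# Both values are placeholders; adjust to your own DNS setup.
import socket

CACHE_IP = "10.0.0.10"                  # assumed LANCache address
HOST = "lancache.steamcontent.com"      # hostname Steam uses to detect a cache

resolved = socket.gethostbyname(HOST)
print(f"{HOST} -> {resolved} ({'OK' if resolved == CACHE_IP else 'NOT cached'})")
```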
Finding the best case
Jeff listed some options for rackmountable cases. At first, I considered a 1U case, but something always didn't work out – either the low-profile PCIe card didn't fit, or the PSU had to be installed in another U. The 2U case from MyElectronics wasn't an option either, as it requires a PicoPSU, which only goes up to 160 W. A Ryzen 9 7900, four SATA SSDs, two NVMe SSDs, and a 10 Gbit/s NIC would have pushed that far beyond its limits.
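To make that concrete, here's the rough back-of-the-envelope budget that ruled the PicoPSU out – all wattages are my own peak-draw estimates, not measurements:

```python
# Back-of-the-envelope power budget vs. the 160 W PicoPSU limit.
# All numbers are rough peak-draw estimates, not measured values,
# and transient CPU spikes would come on top of this.
budget = {
    "Ryzen 9 7900 (boost)": 90,   # 65 W TDP, ~88 W package power under boost
    "4x SATA SSD": 16,            # ~4 W each under load
    "2x NVMe SSD": 16,            # ~8 W each under load
    "10 Gbit/s NIC": 15,
    "Motherboard, RAM, fans": 30,
}

total = sum(budget.values())
print(f"Estimated peak draw: {total} W vs. 160 W PicoPSU")  # ~167 W, over budget
```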
Then I discovered the R-Case 7 – it supports two PCIe cards and even a FlexATX PSU. Checking the website, I noticed that for Germany only eBay was available, so I contacted the manufacturer directly via their contact form – and it was worth it! Huge shoutout to Rawhardware – he answered within a few hours, and we managed to complete the order via invoice, saving the commission fees.
I ordered the RAL version, meaning the case gets powder-coated and includes a metal power button with an LED. The cool thing: you can order it in any RAL color! I chose RAL 6038 (neon green), which matches our LAN party's logo. It should arrive at the end of November, and I can't wait to see it in person.
Failing the part list
There were multiple reasons for choosing the Ryzen 9 7900 – for example, it includes a CPU cooler. One night, I thought to myself: did I actually check the cooler height? And guess what – the cooler didn't fit… So I had to buy another cooler and went with the Alpenföhn.
Another fail was the PSU. My first idea was using a SilverStone SST-FX350-G, but after assembling everything in the case, it became clear: I should have bought a PSU with proper cable management instead of one with a huge mess of cables coming out of it:
New PSU, wrong connection
Using the FSP FlexGuru 300 taught me a lesson: look up how to connect a PSU before assembling. I had never seen a PSU where you have to attach a ring connector to get power. This little connector was the horror:
Fan situation / Loud and blocked!
It is very important to use fan decouplers with this case! Otherwise, you'll get rattling noises.
In addition, you should use fan grills on the CPU cooler, otherwise the fan on the Alpenföhn Panorama 2 can get blocked. Too bad the 92 mm fan uses non-standard screws, so I had to get creative about mounting the fan grill and used the rubber corners for that:
Remove the motherboard's I/O shield!
Very, very important: if you use a motherboard with a pre-installed I/O shield, remove it! Otherwise, the case won't close.
U8 Front: PSU shelf
The Lenovo Tinys need power and come with external power bricks. I got three 135 W bricks to have enough headroom and planned to place them on the shelf. Lucky me – they fit:
In the end, I used double-sided tape to secure the bricks in position:
U8 Back: PDU
Nothing special about this part β I just hope the depth works out together with the PSU shelf. Otherwise, I'll have to find another spot for it.
In the end, with the correct cable lengths that I linked before, it was possible to connect it:
End Result
In the end, after a lot of work, the rack is finished: