<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
	<channel>
		<title>elmiko&apos;s notes</title>
		<description>Notes from the front lines of development</description>		
		<link>https://notes.elmiko.dev</link>
		<atom:link href="https://notes.elmiko.dev/feed.xml" rel="self" type="application/rss+xml" />
		
			<item>
				<title>KubeCon Europe 2026 Retrospective</title>
				<description>&lt;p&gt;As is becoming tradition, here is my bi-annual update to this blog with my thoughts and perceptions
from another KubeCon. This time for the Spring 2026 KubeCon in Europe. TL;DR Lots of
excitement for Kubernetes, more people than Atlanta, AI fatigue is setting in, scheduling is getting
bigger, and as always excellent hallway track conversations.&lt;/p&gt;

&lt;p&gt;KubeCon was held at the RAI center in Amsterdam Netherlands this year, the same venue we were at for
KubeCon in 2023. The RAI is a large venue and KubeCon filled it nicely, but not overly so. This year
my schedule was quite packed with the Maintainer Summit happening on Sunday, the Workload Aware Scheduling
Design Summit on Monday, and then KubeCon proper from Tuesday through Thursday. I spoke on two panels
during KubeCon and was generally quite exhausted by the end of the week, but I had a great time and KubeCon
continues to be a great investment of time and energy for me.&lt;/p&gt;

&lt;p&gt;As is becoming a sad tradition, the political world is lighting itself on fire as KubeCon is happening, with
the war in Iran being a constant note in the background. Thankfully I did not experience any travel delays
due to the conflict or the continued government shutdowns in the United States. From what I could gather, the
attendance seemed higher than in Atlanta and I have to attribute some of that to people’s comfort with
travelling to Europe.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/kubecon-eu-2026-selfie.jpg&quot; class=&quot;img-responsive center-block&quot; alt=&quot;Selfie out front of RAI&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;kubernetes-maintainer-summit-sunday-march-22&quot;&gt;Kubernetes Maintainer Summit, Sunday March 22&lt;/h2&gt;

&lt;p&gt;I started the week with my favorite activity, the Maintainer Summit. I knew going in that the summit was sold out
this year, and it sure felt like it. Every session was packed, there were many hallway conversations happening, and
there was generally a great vibe in the air. Folks were excited to see each other, talk about their projects, and
start doing some face-to-face collaboration. As in the previous edition, the conference committee really listened
to the feedback and we once again had hot meals for lunch and more unconference sessions (although it’s always a
sort of mad scramble to get the sessions in and voted on before the afternoon).&lt;/p&gt;

&lt;p&gt;After the keynotes, I attended the &lt;a href=&quot;https://maintainersummiteu2026.sched.com/event/2EWeU/ask-the-experts-kubernetes-steering-committee-kat-cosgrove-minimus-maciej-szulik-defense-unicorns-antonio-ojea-google&quot;&gt;Ask the Experts: Kubernetes Steering Committee&lt;/a&gt; session. I’ve been
keenly curious about the steering committee and these sessions really help to cement my understanding and interest.
As might be expected there were many questions for the representatives about the use of AI technologies in the
Kubernetes development process. I was encouraged by the responses to the questions about AI; in general, protecting the humans is
of paramount importance. To support this, there must always be a human in the chain of responsibility when evaluating
issues and pull requests. Further, there was acknowledgement of more and more projects being inundated with generative
LLM contributions. These contributions don’t just come in the form of code; they also appear as comments and descriptions
in the various communications channels. There is broad support for any human who feels that they are being
overwhelmed by generative content. I was happy to see such a strong focus on the human element.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/kubecon-eu-2026-capi.jpg&quot; class=&quot;img-responsive center-block&quot; alt=&quot;Cluster API meetup&quot; /&gt;&lt;/p&gt;

&lt;p&gt;I have been a contributor to the Cluster API project for many years now, mostly as the maintainer for the Cluster
Autoscaler and Karpenter providers, so it was a natural fit that I would attend the &lt;a href=&quot;https://maintainersummiteu2026.sched.com/event/2EWev/project-meeting-cluster-api&quot;&gt;Cluster API Project Meeting&lt;/a&gt;
next. This was a great discussion with the other maintainers and community members where we talked a lot about
CI and a kubetest2 deployer for Cluster API, and also about the low level mechanics of node joining and how that might
become more structured in the future. We talked about what a “conformance” with Cluster API would mean for various
providers and how that relates to the API contracts that the project defines for provider implementers. All in all this
was a great discussion and I always love getting a chance to hang out with the other Cluster API maintainers.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/kubecon-eu-2026-unconference.jpg&quot; class=&quot;img-responsive center-block&quot; alt=&quot;Maintainer summit unconference voting&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Next up on my schedule, after lunch, were the unconference sessions. There was a little confusion about these sessions
as we vote on them the day of the summit and then the conference committee has to plan where and when they will be.&lt;/p&gt;

&lt;p&gt;After a few hiccups, I had found a popular session on &lt;a href=&quot;https://maintainersummiteu2026.sched.com/event/2JTp2/unconference-session-how-should-maintainers-navigate-and-review-ai-based-prs&quot;&gt;How Should Maintainers Navigate (and Review) AI-based PRs?&lt;/a&gt;.
This session had a lot of great back and forth between the attendees about how we handle the flood of AI-based
contributions that are hitting CNCF projects. There were many people who shared stories about how their projects are being
affected and it is clear that the focus of contributions is not even across the CNCF landscape. Some of the highlights
from this session were:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;How to handle the honesty issue: are people being truthful about their use of AI?&lt;/li&gt;
  &lt;li&gt;Security vulnerabilities introduced by bots, and conversely the attention gained from bug bounty programs.&lt;/li&gt;
  &lt;li&gt;Encouraging people to speak in their own voice instead of using an AI for translation.&lt;/li&gt;
  &lt;li&gt;Protecting people’s time from being focused on overly verbose contributions with low value.&lt;/li&gt;
  &lt;li&gt;The possibility of using AI-based tools to help with the volume of PRs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The discussion was very lively and I was happy to see that many of the points made by the steering committee were being
reinforced organically by the community. Many people are still learning how to navigate this new world of software
development. Some people have had great success and others have experienced terrible failures, but it is clear that as
a community we want to work towards a world where these tools can be used safely and consciously by contributors.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/kubecon-eu-2026-jack-kuba.jpg&quot; class=&quot;img-responsive center-block&quot; alt=&quot;Jack and Kuba presenting&quot; /&gt;&lt;/p&gt;

&lt;p&gt;As I mentioned previously, I have been maintaining the Cluster API provider for the Cluster Autoscaler for several years now.
So I was not going to miss Jack and Kuba’s session on &lt;a href=&quot;https://maintainersummiteu2026.sched.com/event/2EWf1/cluster-autoscaler-evolution-kuba-tuznik-google-jack-francis-microsoft&quot;&gt;Cluster Autoscaler Evolution&lt;/a&gt;. In this session they described
how the Cluster Autoscaler is going to transform to a code model that more closely aligns with how Karpenter is distributed,
namely as a library. I think this is an exciting step forward for the autoscaling community as it will allow the maintainers
to separate the concerns of specific providers from the core behavior. In addition to the library migration there are also
several new features that will be coming to the autoscaler that I think will be eagerly awaited by the community, for one
the ability to defragment clusters. More on these features to come in the months ahead.&lt;/p&gt;

&lt;p&gt;With my SIG Cloud Provider hat on, I have been following the Node Lifecycle Working Group’s efforts since it began last year.
While I’m not yet convinced there is much work to be done by the SIG, I do think it’s important to keep an eye on
this work as there might be ways that the SIG can help support the new interactions being proposed. To support that, I
went to the unconference session &lt;a href=&quot;https://maintainersummiteu2026.sched.com/event/2EWhP/unconference-session-node-lifecycle-state-needs-a-real-api&quot;&gt;Node Lifecycle State Needs a Real API&lt;/a&gt;, which was a good discussion from the
maintainers of the working group about how we might identify the states needed to improve lifecycle awareness for nodes. It’s
a complex issue and I have a feeling it will be solved incrementally by addressing the sub-problems, such as eviction, first.&lt;/p&gt;

&lt;p&gt;To wrap up the “business” part of my maintainer summit day (as opposed to the after hours party), I attended the SIG Autoscaling
meetup. I was a little late to this session as I had gotten into a deep hallway track discussion about improved ignition
integration in Cluster API. Regardless, I arrived in time to hear the continued discussion about the new architecture for the
autoscaler. This initiative is going to be big for this year and I expect that by the next summit in Salt Lake City much of the
initial work will be complete. Another big topic of discussion for the autoscaling community is the decoupling of the scheduler
from the core of the Cluster Autoscaler and Karpenter projects. There is a new API being developed to help with the workload and
topology aware scheduling that will help this effort and I am very curious to see how it develops.&lt;/p&gt;

&lt;p&gt;After the end keynotes, we all gathered for the &lt;a href=&quot;https://www.flickr.com/photos/143247548@N03/55163324677/in/album-72177720332630036&quot;&gt;maintainer summit group photo&lt;/a&gt; and then moved on to socializing with refreshments!
The wandering photographers managed to catch Justin and me deep in a conversation about AI workflows and what the future might
hold.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/kubecon-eu-2026-justin-me.jpg&quot; class=&quot;img-responsive center-block&quot; alt=&quot;Justin and me talking&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;workload-aware-scheduling-design-summit-at-google-monday-march-23&quot;&gt;Workload Aware Scheduling Design Summit at Google, Monday March 23&lt;/h2&gt;

&lt;p&gt;Close on the heels of the Dynamic Resource Allocation (DRA) designs that have helped to improve the state
of advanced hardware utilization in Kubernetes comes the next big design effort: workload aware scheduling. I am thankful to
John Belamaric and Wojciech Tyczynski from Google who organized this design summit where many SIG tech leads and
chairs, as well as interested parties, were invited to participate in architecting the future of Kubernetes scheduling.
This design summit was focused on the myriad problems that need to be addressed so that we can improve the story
around workload awareness during scheduling.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/kubecon-eu-2026-was-summit.jpg&quot; class=&quot;img-responsive center-block&quot; alt=&quot;John and Wojciech&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Workload aware scheduling speaks to the notion that there are workloads with greater scheduling constraints than Kubernetes
supports today. Think about applications where network topology and rack location have an impact on the results. For example, the
machine learning workloads that we see today can often request to have hardware that shares a network segment or in the most
demanding cases might need multiple racks of hardware physically located near each other. The design summit was an opportunity to
have people in one place to discuss the various issues around how the Kubernetes community can deliver these features for users.&lt;/p&gt;

&lt;p&gt;The day was separated into a few sessions and was organized in a very unconference fashion. There were two general themes that emerged
from the topics: scheduling and autoscaling. It was difficult for me to pick which tracks I wanted to participate in since I am
representing SIG Cloud Provider to see where we might help with lower level provider interactions, but I also want to keep track of what
is happening with the autoscaler as I am keenly interested in ensuring that the Cluster API integrations are as full featured as possible.
In the end, I chose to participate in the scheduling tracks as there were many autoscaling experts in the room and I was generally curious to
see what solutions they agreed on. I was rewarded with some deep discussions around the mechanics of how workload aware scheduling will
need to work from the hardware perspective. This ended up being very fruitful for me as I walked away with some clear ideas about how
SIG Cloud Provider might support activities where scheduling plugins will need to make calls to the underlying infrastructure provider
to learn about the hardware configurations and topologies.&lt;/p&gt;

&lt;p&gt;For anyone interested in this exciting new area of development, I would start by reviewing the DRA related mechanisms that exist in
Kubernetes today. These primitives are inspiring how the future of workload aware scheduling will be architected. Then I would study
the &lt;a href=&quot;https://github.com/kubernetes/enhancements/issues?q=label%3Aarea%2Fworkload-aware&quot;&gt;workload aware related KEPs that are currently under review&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Lastly, I am grateful for the opportunity to participate in this design summit. Although I don’t think I had much to contribute, I
certainly learned a great deal and will hopefully be able to support the effort through SIG Cloud Provider. I’ve heard that quote
about “if you are the smartest person in the room, find a new room”, and I felt humbled by the assembled brain power at the summit.
Some of the brightest and sharpest minds in the Kubernetes community were there and it was inspiring to watch the thoughtful discussions
that arose.&lt;/p&gt;

&lt;h2 id=&quot;kubecon-day-1-tuesday-march-24&quot;&gt;KubeCon Day 1, Tuesday March 24&lt;/h2&gt;

&lt;p&gt;After two solid days of work already, KubeCon proper starts. I briefly cruised the keynotes and the early talks, but my mind was focused
on the two big activities I had for day 1: the &lt;a href=&quot;https://kccnceu2026.sched.com/event/2ITlD/kubernetes-contribution-101&quot;&gt;Kubernetes Contribution 101&lt;/a&gt; session, and the panel
&lt;a href=&quot;https://kccnceu2026.sched.com/event/2EoKz/from-static-tokens-to-attestation-the-evolution-of-secure-node-joining-ciprian-hacman-jack-francis-microsoft-michael-mccune-josephine-pfeiffer-red-hat-justin-santa-barbara-google&quot;&gt;From Static Tokens to Attestation: The Evolution of Secure Node Joining&lt;/a&gt;. I was a participant in both activities and I was quite
excited for the opportunities.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/kubecon-eu-2026-communityhub.jpg&quot; class=&quot;img-responsive center-block&quot; alt=&quot;Community hub placard&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The Community Hub is one of my favorite parts of KubeCon and when the call went out for SIG chairs and tech leads to participate in the
Contribution 101 session I jumped at the chance. It was a nearly two hour session where we had a presentation about the mechanics of
contribution and the landscape of the CNCF projects, and then a question and answer portion where the SIG leads got to interact with
the audience. It was truly inspiring to see so many people interested in contributing and asking thoughtful questions about how to get
involved. Whenever I have time, I will always participate in these sessions as I love helping new folks get a boost into open source. Passing
on the open source values and ethics is a large part of what I do these days and I want to ensure that we have a healthy community for
future generations.&lt;/p&gt;

&lt;p&gt;After the 101 session I had to find the panel where I was a participant; unfortunately, the room was on the opposite end of the convention
center and I had to make haste. Thankfully, I made it with time to spare. The panel went great and we had about 150 people in the room to
hear us talk about a possible future for secure booting and attestation in Kubernetes. In some respects this is new territory for the
Kubernetes community to tackle; although there are bespoke implementations of this style of booting, we would like to develop best practices
that the community as a whole can rely on. This will be important if we want to include secure boot activities in projects like kOps and
Cluster API. While there is more work to be done, we had an excellent conversation and got many good ideas from interacting with the audience.
I’m hopeful that some of the work I’m doing with the Cluster API community to improve ignition support will also help with the workflows
around attestation.&lt;/p&gt;

&lt;p&gt;I spent the remainder of my time on day 1 cruising the solutions showcase to see what people were talking about with respect to the product
side of Kubernetes. The solutions showcase was a good time and there was plenty of space and natural light penetrating the room, making it feel
much better to walk around and get lost amongst the technology. As always, LEGO raffles were huge. I also noted the return of the curling
arena.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/kubecon-eu-2026-curling.jpg&quot; class=&quot;img-responsive center-block&quot; alt=&quot;Curling Skills Arena signage&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;kubecon-day-2-wednesday-march-25&quot;&gt;KubeCon Day 2, Wednesday March 25&lt;/h2&gt;

&lt;p&gt;In many ways, day 2 was a repeat of day 1 for me. I had plans to participate in the &lt;a href=&quot;https://kccnceu2026.sched.com/event/2H66n/kubernetes-meet-+-greet&quot;&gt;Kubernetes Meet + Greet&lt;/a&gt; and then join the
&lt;a href=&quot;https://kccnceu2026.sched.com/event/2EF68/how-will-customized-kubernetes-distributions-work-for-you-a-discussion-on-options-and-use-cases-michael-mccune-joel-speed-red-hat-bridget-kromhout-microsoft-jesse-butler-aws-bowei-du-google&quot;&gt;How Will Customized Kubernetes Distributions Work for You? A Discussion on Options and Use Cases&lt;/a&gt; panel. It should go
without saying that I love the meet and greet. It’s been a high point for me for several KubeCons now and I volunteered to work the
first hour as a greeter, and then would join the second hour in my SIG Cloud Provider capacity to talk with attendees.&lt;/p&gt;

&lt;p&gt;The meet and greet went great; I was able to meet many new folks and help them to find the communities where they could learn and connect
with others. After my official duties had ended, I got a good slice of time to talk with people about SIG Cloud Provider and how they could
get involved. One inspiring discussion that came out of this was with a gentleman who had done documentation work across three languages!
He was connected with developer communities in Vietnam and was curious how he could connect those communities with the wider Kubernetes
community. It was great to talk about how Vietnamese cloud providers will be able to join our efforts and really how they can gain
the benefits of the common cloud provider framework we have developed. It made me feel good to know that the work we are doing in the
open source can truly reach all communities so that they can join the great activity that is happening around Kubernetes.&lt;/p&gt;

&lt;p&gt;I had to run at the end of the meet and greet to make the second panel I was doing for this KubeCon. I made it back to the panel room with a
little time to spare, but as it turned out the panel before ours (which was on Node Lifecycle APIs) was packed to the rafters and we had to wait
outside for the room to empty.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/kubecon-eu-2026-cloudprovider-panel.jpg&quot; class=&quot;img-responsive center-block&quot; alt=&quot;Cloud Provider panel&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The panel went off well and we had about 150 people show up to hear us talk about the idea of Kubernetes distributions. I wasn’t sure
what to expect when we proposed this panel. SIG Cloud Provider has been interested in this notion to help us achieve some testing goals, but
also to work towards the future where there is a clear method for cloud provider specific bits to be included with a Kubernetes installation.
Whether that installation happens from kOps, Cluster API, or some other tooling, we would like there to be a common set of guides to follow.
I was touched by how many people approached me afterwards to thank the SIG for diving into this topic. We’ll see how things progress.&lt;/p&gt;

&lt;p&gt;In a similar fashion as day 1, I spent some time at the end of the day in the solutions showcase checking out the project pavilion and connecting
with friends at different companies who were working booth duty.&lt;/p&gt;

&lt;h2 id=&quot;kubecon-day-3-thursday-march-26&quot;&gt;KubeCon Day 3, Thursday March 26&lt;/h2&gt;

&lt;p&gt;&lt;img src=&quot;/img/kubecon-eu-2026-bootc.jpg&quot; class=&quot;img-responsive center-block&quot; alt=&quot;Bootc demo&quot; /&gt;&lt;/p&gt;

&lt;p&gt;By Thursday my tank was running near empty, but I wanted to get around to the project pavilion again to check out the projects I was less
familiar with, and I was rewarded richly. I spent time talking with my colleague Thilo at the Flatcar Linux booth. We are attempting to improve
the state of ignition in Cluster API and I’m hopeful that we’ve made some solid designs for work that we can do this year. I also spent time talking
with the Bootc project team as well. &lt;a href=&quot;https://bootc.dev/bootc/&quot;&gt;Bootc&lt;/a&gt; is a really cool project that unlocks a great deal of potential for managing and upgrading
ostree style operating system images. I’m sure it can do so much more than that, but the demo that my colleagues from Red Hat gave showed how
you could perform rolling upgrades and downgrades to an application embedded as part of an ostree image. I am absolutely planning to set up a
home lab for playing around with bootc, especially since I think it will help with the ignition/Cluster API work I’d like to do.&lt;/p&gt;

&lt;h3 id=&quot;a-note-on-ai&quot;&gt;A note on AI&lt;/h3&gt;

&lt;p&gt;&lt;img src=&quot;/img/kubecon-eu-2026-ai.jpg&quot; class=&quot;img-responsive center-block&quot; alt=&quot;ai doggo&quot; /&gt;&lt;/p&gt;

&lt;p&gt;I would be remiss if I didn’t mention the presence of AI at KubeCon. There were plenty of talks and demonstrations with people showing off
how they are using AI to build value and also integrate AI into their products. Additionally there was no shortage of talks about how AI
is affecting, and will continue to influence, the software development process. I found the discussion good and the amount of AI related talks
only a little oppressive. One thing I noted that is increasing from the hallway track perspective is that many people are getting tired of
seeing AI related talks and want to see more technical Kubernetes information. I saw several non-AI sessions that were packed to capacity,
and my informal discussions with people seemed to indicate a weariness of AI and also a desire to find spaces where AI was not being
discussed. Hopefully this is a sign that the community is large enough that we need more equal representation of ideas at KubeCon. AI has
made its mark and will continue to evolve, but it is not the Ur-solution for all problems.&lt;/p&gt;

&lt;h2 id=&quot;thoughts-and-takeaways&quot;&gt;Thoughts and takeaways&lt;/h2&gt;

&lt;p&gt;KubeCon Amsterdam was a tremendous success for me. I had an excellent time connecting with friends and peers from across the industry
and got to participate in some amazing discussions. I am optimistic about the future of Kubernetes and I look forward to how the
community will continue to grow and evolve.&lt;/p&gt;

&lt;p&gt;I am hesitant about the continued usage of AI in our software development processes. I can see the benefits but I also see the harm that
it can do to the people involved in the process. I think the most important thing for us to remember is that the people who make up this
community are the most important element, and we should ensure that regardless of the technological changes happening we continue to 
respect and protect the community members.&lt;/p&gt;

&lt;p&gt;In brief, here are some thoughts I had:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Scheduling continues to be a giant topic and it is only becoming more complex.&lt;/li&gt;
  &lt;li&gt;AI continues to grow and we are now dealing with how to incorporate this style of development into our open source processes.&lt;/li&gt;
  &lt;li&gt;Digital sovereignty and on-premises computing are growing in popularity once again; Kubernetes represents a game changer for people wanting open source solutions that they can own.&lt;/li&gt;
  &lt;li&gt;Even with the global economy in an uncertain state, there continues to be investment in cloud services and platform engineering.&lt;/li&gt;
  &lt;li&gt;Focus on community health and safety has never been higher, and I fully endorse this activity.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Whelp, that’s about it for my thoughts on this retrospective. I hope I can continue to attend KubeCons and be part of this community.
It brings me great joy and satisfaction to know that what we are building, in the open, can help make the world a better place. It seems
challenging to remember this given the state of the world these days, but I count myself as an eternal optimist in this respect. If you
made it this far, thank you, be safe out there, and as always happy hacking!&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/kubecon-eu-2026-outro.jpg&quot; class=&quot;img-responsive center-block&quot; alt=&quot;rainy RAI&quot; /&gt;&lt;/p&gt;

</description>
				<pubDate>Wed, 08 Apr 2026 00:00:00 +0000</pubDate>
				<link>https://notes.elmiko.dev/2026/04/08/kubecon-eu-2026-retrospective.html</link>
				<guid isPermaLink="true">https://notes.elmiko.dev/2026/04/08/kubecon-eu-2026-retrospective.html</guid>
			</item>
		
			<item>
				<title>KubeCon North America 2025 Retrospective</title>
				<description>&lt;p&gt;Well, it’s been about a year since I updated this blog, so why not get back into things by giving my
retrospective on this fall’s KubeCon North America. TL;DR fewer people, slightly less AI hype, and
loads of good conversations.&lt;/p&gt;

&lt;p&gt;KubeCon North America for 2025 was hosted in Atlanta, Georgia in the United States. The official “KubeCon”
was on the 11th through the 13th of November, with the CNCF co-located events happening on Monday the
10th, and the Kubernetes Maintainer Summit happening on Sunday the 9th. I was lucky enough to travel to
Atlanta for all the events, but the start of my journey was slightly delayed due to the government shutdown
in the United States. It’s worth mentioning the political activity in the United States, without going into
too much detail, as I believe the current political climate contributed to my perception of a smaller population
at this KubeCon.&lt;/p&gt;

&lt;p&gt;But, even though my flight was delayed by 6 hours, I was able to finally make it to Atlanta. I heard stories
from many friends and colleagues who had flights delayed, and in some cases cancelled, which led to a fun situation
where most of us were asking each other “how long was your delay?” as we met at the conference.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/kubecon-na-2025-selfie.jpg&quot; class=&quot;img-responsive center-block&quot; alt=&quot;Selfie at Red Hat booth&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;kubernetes-maintainer-summit-sunday-november-9&quot;&gt;Kubernetes Maintainer Summit, Sunday November 9&lt;/h2&gt;

&lt;p&gt;As is tradition, maintainer summit is my favorite part of the conference. This year, we started with the summit
occurring before any other event. There was a small gathering at the conference hall and we had a day of
conversations, presentations, and unconference sessions; this year including a hot meal for lunch! (thank you con team &amp;lt;3)&lt;/p&gt;

&lt;p&gt;We started with the keynotes and got warmed up for the day; it’s always nice starting out with some laughs
and looking around the room to see who you might recognize. As I noted, the attendance seemed a little lower
than previous summits and some of that was due to the travel challenges, and the rest I believe was due to folks
not wanting to engage with the United States customs and border control.&lt;/p&gt;

&lt;p&gt;I started my summit by attending the &lt;a href=&quot;https://sched.co/2B5Lm&quot;&gt;TAG Workshop&lt;/a&gt; hosted by Karena Angell, Mario Fahlandt, Brandt Keller,
and Dylan Page. TAG stands for “Technical Advisory Groups” and this was my first experience learning how these groups
operate and what function they serve in the community. I was glad to learn that they help with cross-project efforts
and in areas where greater coordination is needed for initiatives that will affect several technology groups. A large
part of the reason I wanted to learn more about the TAGs is that &lt;a href=&quot;https://www.youtube.com/watch?v=WeWQqQM6kjM&quot;&gt;the testing work I am interested in doing&lt;/a&gt;
is going to involve some level of standardizing around how Kubernetes is deployed for different cloud providers, something
we have been calling “distributions” in SIG Cloud Provider.&lt;/p&gt;

&lt;p&gt;As happens at these events, the discussion I was having about Kubernetes distributions continued past the end time for
the TAG Workshop and I ended up spending my next session in the &lt;em&gt;hallway track&lt;/em&gt; talking with folks about the idea. Basically,
it would be very convenient for cloud provider testing to have the notion of a &lt;em&gt;distribution&lt;/em&gt; of Kubernetes. This would mean
including a reference topology (e.g. 3 control plane nodes instead of 1, etc.) and also including provider-specific components
depending on the distribution. For example, on OpenStack, the Kubernetes distribution would include the cloud controller manager
for OpenStack, plus any other storage or networking components required for that platform. This would help us in arriving
at a destination where tests could more easily select the platform and also the components that need testing. In the end
we will have provider agnostic tests that can operate on any provider, but which will also exercise provider-specific behavior
through provider interfaces.&lt;/p&gt;

&lt;p&gt;After lunch, I went to some of the unconference sessions starting with
&lt;a href=&quot;https://sched.co/28aDq&quot;&gt;Better Together, Strengthening Inter-Project Collaboration &amp;amp; Developer Experience Across the CNCF Ecosystem&lt;/a&gt;
proposed by Yacine Kheddache and Colin Griffin. This was a fun session where we talked about how to better share
information in the CNCF community, especially for the purposes of helping increase cross-project collaboration and knowledge.
As with many unconference sessions, the discussion really got going once the time was almost out, but I had a revelation while
participating in the discussion. Essentially, we could use more project manager community members. This is something I have
struggled with in the past, mostly in the form of people reaching out to me for mentorship as project managers. I never
had a good answer for this style of collaboration in open source projects, but in the Kubernetes community I am now seeing
a clear place for these roles. In the future I am going to guide project managers to become more involved in the &lt;em&gt;social glue&lt;/em&gt;
that holds our community together. I think there is a strong place for contribution in the form of people who can go between
projects to help advocate for projects’ needs in differing venues. &lt;em&gt;Perhaps another blog post topic for the future ;)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The last session I attended was the one on &lt;a href=&quot;https://sched.co/28aE2&quot;&gt;Making the Kubernetes CI/Infra better&lt;/a&gt;. Now, this wasn’t technically the
last session of the day, but the discussion I got into towards the end kept going into the hallway track, and by the time we
were done it was time for the group picture. This session was great though, especially given my desire to chase the idea of Kubernetes
distributions. I learned a lot in this session and came away with a new excitement about building a Cluster API deployer for
&lt;a href=&quot;https://github.com/kubernetes-sigs/kubetest2&quot;&gt;kubetest2&lt;/a&gt;. We’ll see where it goes, but I’m optimistic about the future of Kubernetes cloud provider testing.&lt;/p&gt;

&lt;p&gt;After a day of visiting with colleagues and talking about the future of Kubernetes we adjourned for some much needed relaxation.&lt;/p&gt;

&lt;h2 id=&quot;openshift-commons-and-co-located-events-monday-november-10&quot;&gt;OpenShift Commons and co-located events, Monday November 10&lt;/h2&gt;

&lt;p&gt;It’s going to be a big week, so I need to start with a solid breakfast. We don’t have Waffle House in my state (Michigan), and
it is a national institution, so I had to visit. &lt;em&gt;(I actually went almost every day XD)&lt;/em&gt;
&lt;img src=&quot;/img/kubecon-na-2025-wh.jpg&quot; class=&quot;img-responsive center-block&quot; alt=&quot;Waffle House&quot; /&gt;&lt;/p&gt;

&lt;p&gt;On Monday, I had the pleasure of visiting &lt;a href=&quot;https://commons.openshift.org/&quot;&gt;OpenShift Commons&lt;/a&gt; as an attendee instead of a speaker or employee, which
was a change from previous years. I love the Commons and as a Red Hat event I always feel a little like a celebrity when
I visit. I was able to spend some face time with colleagues and even got to talk with a customer or two. It is very gratifying
to hear about our customers’ journeys and how our efforts become solutions for them.&lt;/p&gt;

&lt;p&gt;After spending the morning at Commons, I returned to the convention center to watch my teammate Mansi Kulkarni give
a talk on &lt;a href=&quot;https://sched.co/28D71&quot;&gt;Windows Container Monitoring Demystified: OpenTelemetry in Action&lt;/a&gt; with her peer Ritika Gupta. As
I don’t normally work on this area of Kubernetes, it was interesting to hear how Windows containers work and also
how they are monitored by users. It’s a wild world combining container technology with the Windows kernel.&lt;/p&gt;

&lt;p&gt;I didn’t have a ton going on Monday aside from catching up with folks, so I spent some time reading code and planning
for KubeCon proper.&lt;/p&gt;

&lt;h2 id=&quot;kubecon-day-1-tuesday-november-11&quot;&gt;KubeCon Day 1, Tuesday November 11&lt;/h2&gt;

&lt;p&gt;After the keynotes, I availed myself of a walk through the exhibitor hall and then found some talks to watch. I also
stopped many times in the hallway track to catch up with folks I had not seen in several months.&lt;/p&gt;

&lt;p&gt;One of the big talks on Tuesday was Corey Quinn’s &lt;a href=&quot;https://sched.co/27FVz&quot;&gt;The Myth of Portability: Why Your Cloud Native App Is Married To Your Provider&lt;/a&gt;.
I was only a little familiar with Corey’s work and I was not at all ready for his presentation style. This talk was well polished, and
well delivered. Corey has a unique style that trends more towards comedy than analysis. His message was astute though, about the
intricacies and challenges of delivering applications in hybrid environments and what it means to make an application truly portable
across clouds. I found his analysis of the problem to be spot on, though his solutions left me wanting more, and his delivery was downright
mean-spirited. While he made good points, I couldn’t help but feel insulted as he emphasized again and again how my work and that of
my peers was “shit”.&lt;/p&gt;

&lt;p&gt;Thankfully, after the mean-spirited pillory of the previous talk, I was delighted to see Taylor Dolezal and Erica Hughberg’s talk
&lt;a href=&quot;https://sched.co/27FVV&quot;&gt;The Missing Manual for Open Source Community Sustainability&lt;/a&gt;. I found this talk energizing and informative about how we
build more sustainable processes into our communities. I really enjoyed how Taylor and Erica broke down the personas within software
communities, and then provided techniques for engaging with those people. I left with a full page of notes from this session.&lt;/p&gt;

&lt;p&gt;The last session I attended on Tuesday was &lt;a href=&quot;https://sched.co/27NmS&quot;&gt;Beyond the Code: How the Kubernetes Steering Committee Tackles the Hard, Non-Technical Problems&lt;/a&gt;
hosted by Antonio Ojea, Benjamin Elder, and Maciej Szulik. I have been curious about the steering committee and how they work; this
session delivered exactly what I desired. I learned a great deal about how the committee is structured in relation to the SIGs and WGs,
and also how they operate. It was nice to hear stories from the current committee members about how they ran for election and what the
experience has been like for them. Also, very cool to hear how the kubernetes community keeps itself healthy and on-track.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/kubecon-na-2025-powertrio.jpg&quot; class=&quot;img-responsive center-block&quot; alt=&quot;Antonio, Ben, and Maciej&quot; /&gt;
Antonio, Ben, and Maciej, what a kubernetes power trio!&lt;/p&gt;

&lt;h2 id=&quot;kubecon-day-2-wednesday-november-12&quot;&gt;KubeCon Day 2, Wednesday November 12&lt;/h2&gt;

&lt;p&gt;Wednesday was a big day for me as I was part of a talk, so I spent the morning preparing. At 11:30am it was time for us to present
&lt;a href=&quot;https://sched.co/27FZc&quot;&gt;Maximizing Global Potential: Cost-Optimized, High-Availability Workloads Across Regions&lt;/a&gt;. I joined Praseeda Sathaye and
Jingkang Jiang, with shoutouts to Wei Jiang, to present this talk and we had an absolute blast. We talked about how kubernetes can
be deployed globally across regions and providers to deliver highly available infrastructure. We demonstrated how the Karmada,
Cluster API, and Karpenter projects can be utilized to build a single multi-cluster workload pipeline. We had about 60-70 people in the room and got
some good questions.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/kubecon-na-2025-ourtalk.jpg&quot; class=&quot;img-responsive center-block&quot; alt=&quot;JK, me, Praseeda&quot; /&gt;
Jingkang, myself, and Praseeda, another power trio!&lt;/p&gt;

&lt;p&gt;After our talk, I quickly moved to the &lt;a href=&quot;https://sched.co/28xeb&quot;&gt;SIG/WG Meet &amp;amp; Greet&lt;/a&gt;. The meet and greet is another of my favorite activities at
KubeCon. It’s a great chance to hang out with maintainers and learn about kubernetes and the community, and to pick people’s
brains about what is next, or how to get involved. I absolutely recommend it to anyone who wants to learn more about the
maintainer community.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/kubecon-na-2025-mng.jpg&quot; class=&quot;img-responsive center-block&quot; alt=&quot;SIG meet n greet&quot; /&gt;
It was so good that the CNCF photographers caught us! (&lt;a href=&quot;https://www.flickr.com/photos/143247548@N03/54921155691/in/album-72177720330018728&quot;&gt;source&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;I was talking with Josh Berkus at the end of the meet and greet and we both noted how the line for the puppy petting event
was very long and also right outside the meet and greet. This could perhaps be a good technique for bringing more folks to the meet
and greet next time. Just put the puppy pit at the back of the meet and greet. XD&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/kubecon-na-2025-puppy.jpg&quot; class=&quot;img-responsive center-block&quot; alt=&quot;Puppy pit at kubecon&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Next it was on to see my colleague Jose Valdes co-present with Mark Rosetti on the &lt;a href=&quot;https://sched.co/27NoC&quot;&gt;Kubernetes SIG Windows Updates&lt;/a&gt;. As I
noted earlier, I don’t normally work on Windows (although I did in a previous life), but I learned some cool stuff in this presentation
and I got to support my teammate. I find it interesting to learn how Windows is able to meet the OCI standard through its various
process APIs. I’m not sure I’m ready to return to that world (I’m not), but I enjoyed learning more about this corner of the
kubernetes ecosystem.&lt;/p&gt;

&lt;p&gt;After all that, I still had some gas left in the tank and I was eagerly waiting for Justin Santa Barbara and Ciprian Hacman’s talk
&lt;a href=&quot;https://sched.co/27Nlp&quot;&gt;The Next Decoupling: From Monolithic Cluster, To Control-Plane With Nodes&lt;/a&gt;. In this talk, Justin and Ciprian were
discussing some experimentation they would like to do in &lt;a href=&quot;https://github.com/kubernetes/kops&quot;&gt;kOps&lt;/a&gt; to add more support for Cluster API and Karpenter. I find these
ideas to be exciting and a good path for improving our testing efforts across platforms. It filled my head with more
thoughts about extending kubetest2 and building a proof of concept on top of their work. Also, they produced a logo for Karpenter
that I absolutely adore, and I want to make it the Karpenter Cluster API logo:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/kubecon-na-2025-karpcapi.jpg&quot; class=&quot;img-responsive center-block&quot; alt=&quot;Alternate karpenter logo&quot; /&gt;&lt;/p&gt;

&lt;p&gt;wow, what a Wednesday!&lt;/p&gt;

&lt;h2 id=&quot;kubecon-day-3-thursday-november-13&quot;&gt;KubeCon Day 3, Thursday November 13&lt;/h2&gt;

&lt;p&gt;By Thursday I was nearly completely wiped out, but I was part of a panel that would be the last session of the day. So, I pushed forward
and made some connections. I spent most of my day talking with various colleagues about the work we are doing and what we would like
to do in the next 6-12 months.&lt;/p&gt;

&lt;p&gt;Then, at the end of the day, I was delighted to join my fellow SIG Cloud Provider co-chair Bridget Kromhout, as well as Joel Speed, Walter
Fender, and Jesse Butler to have a panel about &lt;a href=&quot;https://sched.co/27NoX&quot;&gt;SIG Cloud Provider Deep Dive: Expanding Our Mission&lt;/a&gt;. I had a great time and I’m
fairly sure the other speakers did as well. We had a nice chance to talk about how the SIG is working to create cross-platform building
blocks for the future of kubernetes. We had good attendance for our event and a good discussion among the panelists. I’m optimistic about
the future, and it seemed like our discussion and energy permeated into the audience with several nice comments directed towards the SIG at
the end.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/kubecon-na-2025-sigcp.jpg&quot; class=&quot;img-responsive center-block&quot; alt=&quot;SIG Cloud Provider&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;thoughts-and-takeaways&quot;&gt;Thoughts and takeaways&lt;/h2&gt;

&lt;p&gt;This KubeCon seemed smaller than years past. I can only assume this was due to travel and the behavior of the government of the
United States. I was nervous to travel, especially given the shutdown and delays, but I am glad I made it. I love seeing my
friends in the community and I generally had a positive experience in Atlanta. I also walked away with many new ideas and that
familiar sense of excitement that percolates when I’ve got plans a’brewing.&lt;/p&gt;

&lt;p&gt;Some thoughts I had:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;While “AI” was still big, it seemed less big than previous KubeCons.&lt;/li&gt;
  &lt;li&gt;Many of my peers are now using LLM-based solutions for the smaller tasks. Nearly all of these efforts are being forced
by their employers and the results seem mixed at best.&lt;/li&gt;
  &lt;li&gt;Resource allocation and placement is still a &lt;em&gt;big deal&lt;/em&gt;. There continues to be more and more work done on exposing better
metadata for workload placement and optimization.&lt;/li&gt;
  &lt;li&gt;There is a wealth of younger people looking to become involved with Kubernetes. We need to hire more people!&lt;/li&gt;
  &lt;li&gt;Kubernetes is not slowing down, nor does it appear to be entering maintenance mode just yet.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s about it from me. Another KubeCon in the books. Hopefully this small report has helped give you one window into KubeCon.
I hope to make it to Amsterdam; if you see me, come say hi (and don’t be surprised if I give a quizzical look at first). I’ll try
to wear my fedora again, it seemed to be a good way to find me lol. As always, stay safe out there and happy hacking!&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/kubecon-na-2025-atlanta.jpg&quot; class=&quot;img-responsive center-block&quot; alt=&quot;Atlanta&quot; /&gt;&lt;/p&gt;

</description>
				<pubDate>Mon, 17 Nov 2025 00:00:00 +0000</pubDate>
				<link>https://notes.elmiko.dev/2025/11/17/kubecon-na-2025-retrospective.html</link>
				<guid isPermaLink="true">https://notes.elmiko.dev/2025/11/17/kubecon-na-2025-retrospective.html</guid>
			</item>
		
			<item>
				<title>KubeCon North America 2024 Retrospective</title>
				<description>&lt;p&gt;I attended KubeCon North America 2024 in Salt Lake City last week and this is my retrospective on the trip.&lt;/p&gt;

&lt;p&gt;KubeCon North America for 2024 was hosted in Salt Lake City from November 13th through the 15th, with three days of events before the conference. In addition to KubeCon, I attended Cloud Native Rejekts on Sunday the 10th, the Kubernetes Contributor Summit on Monday the 11th, and OpenShift Commons on Tuesday the 12th. The weather was mostly nice, with a little rain on Tuesday and temperatures generally cool in the mid 30s to high 40s Fahrenheit. Salt Lake City is easy to get around: it has generally large blocks and roads in the downtown area, there is also public transportation, and there is an amazing view of the mountains from just about everywhere.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/kubecon-na-2024-outfront.jpg&quot; class=&quot;img-responsive center-block&quot; alt=&quot;Selfie out front of kubecon&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;cloud-native-rejekts-sunday-november-10&quot;&gt;Cloud Native Rejekts, Sunday November 10&lt;/h2&gt;

&lt;p&gt;Cloud Native Rejekts is a conference that bills itself as “… the b-side conference giving a second chance to the many wonderful, but rejected, talks leading into KubeCon + CloudNativeCon”. It is usually a great place to see high quality talks in a smaller setting than KubeCon, and an opportunity to meet up with other people to talk about Kubernetes before the &lt;em&gt;main event&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;This year, I was presenting a talk with colleague David Morrison from the Applied Computing Research Labs titled “Karpenter and Cluster Autoscaler: A data-driven comparison”. We had about 20-25 people in the audience, and it went fairly well considering we had some issues with latency on the slide display.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://cfp.cloud-native.rejekts.io/cloud-native-rejekts-na-salt-lake-city-2024/talk/CZ9VGR/&quot;&gt;Read the abstract&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.youtube.com/live/M1R05c1pWmc?si=H5jQnHiflrMDRuaO&amp;amp;t=14609&quot;&gt;Watch the replay&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On Sunday, there were probably around 200 people in attendance at Rejekts. There were 2 presentation areas: the main theater, and the flex room. Most talks saw people filling the rooms and I noted several with crowds standing in the hall to catch the topics (the eBPF talks were popular). I met several old friends during the social time for Rejekts and it was a great “pre-game” for KubeCon. If you are ever coming to a KubeCon and have the inclination and opportunity to arrive early, I recommend checking out Cloud Native Rejekts at least once; it is a free event and the talks are focused on technical topics.&lt;/p&gt;

&lt;h2 id=&quot;kubernetes-contributor-summit-monday-november-11&quot;&gt;Kubernetes Contributor Summit, Monday November 11&lt;/h2&gt;

&lt;p&gt;The contributor summit is my favorite part of KubeCon. The amount of high-bandwidth conversations and learning that I do during the summit is unparalleled for me at any other Kubernetes event. This year was no different. I’m not sure on the total attendance numbers for the contributor summit, but I would not be surprised if there were a few hundred people there throughout the day. We had several general sessions, including an awards ceremony, and breakout rooms with pre-planned and unconference topics.&lt;/p&gt;

&lt;p&gt;I felt that one of the big topics for this summit was getting to know and understand the steering committee. We had a good panel session with the committee and it generally seemed like they wanted the contributor community to understand what the steering committee does, and how we can lift more voices to join the committee. I found this to be a good topic of self-awareness for the Kubernetes contributor community, and I continue to be impressed at the thoughtfulness of this community in terms of attempting to keep itself healthy, active, and inclusive.&lt;/p&gt;

&lt;p&gt;There were 2 talks I attended during the summit that I would like to call attention to:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://kcsna2024.sched.com/event/1nSgo/official-kubernetes-crds-where-to-from-here&quot;&gt;Official Kubernetes CRDs: Where to from here ? - Nick Young, Isovalent, Rob Scott, Google&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://kcsna2024.sched.com/event/1nSjo/unified-framework-for-unit-integration-and-e2e-testing&quot;&gt;Unified framework for unit, integration, and E2E testing - Patrick Ohly, Intel&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I found the talk on CRDs interesting because for many years now the Kubernetes community has been attempting to address several issues around CRDs (versioning and migrating to name two) and this talk was moving the ball forward on those issues. I also found the talk on testing to be interesting because the author is proposing a test framework that should be built specifically for Kubernetes to address the specific needs of test writers. There seems to be a good opportunity to help push the Kubernetes tests towards a more unified approach, and bring in more contributors who might want to join the testing efforts.&lt;/p&gt;

&lt;p&gt;Aside from the talks, a big part of attending the contributor summit is the project meetings. I am an active contributor to the Cluster API project and I attended their project update session. This year was not as well-attended as previous years with only around 10-20 people in attendance, but we had a great discussion about the future of the project. Some of the topics we covered during the discussion were the in-place upgrades proposal and status, and etcd resiliency in Cluster API clusters.&lt;/p&gt;

&lt;h2 id=&quot;openshift-commons-tuesday-november-12&quot;&gt;OpenShift Commons, Tuesday November 12&lt;/h2&gt;

&lt;p&gt;OpenShift Commons is a Red Hat sponsored event where we talk about all things Red Hat and OpenShift. I volunteered to help with operations and also to be part of the round table discussions representing the OKD community with my peers Amy Marrich and Jaime Magiera.&lt;/p&gt;

&lt;p&gt;I love attending the OpenShift Commons, not only because I work on OpenShift and it’s a great opportunity to meet with users, customers, and partners, but also because it’s a great chance for me to spend some time with other Red Hatters. It was a really nice event and it seemed like we were nearly packed to capacity. I’m not sure how many people were there but it had to be a few hundred.&lt;/p&gt;

&lt;p&gt;The round table discussions were interesting and I ended up talking with a few folks about how using OKD (the community supported version of OpenShift) could help their operations team in &lt;em&gt;sketching out&lt;/em&gt; features they would like to see in OpenShift someday. The idea being that if you would like to see a feature in OpenShift, and you have some technical understanding around OpenShift and the feature, then you could use OKD to add the feature and demonstrate a proof of concept. This seemed to resonate with the people I talked with, although we all admitted that there is a significant engineering resource demand to create these types of demonstrations.&lt;/p&gt;

&lt;p&gt;Commons also had one of the best pieces of OpenShift swag I have ever had the pleasure of snagging: OpenShift keycaps!&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/kubecon-na-2024-keycaps.jpg&quot; class=&quot;img-responsive center-block&quot; alt=&quot;openshift keycaps in a bucket&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;kubecon-day-1-wednesday-november-13&quot;&gt;KubeCon Day 1, Wednesday November 13&lt;/h2&gt;

&lt;p&gt;Wednesday brought the official start of KubeCon. I followed the herd (we were told attendance reached 9,200 people this year) to the first day keynotes and with that, KubeCon had begun. Two of the big themes for this KubeCon were AI, and security, with the first two days being dedicated to those topics respectively. The first day keynotes seemed well attended and there was a buzz in the air that was familiar. Immediately after the keynotes was the stampede to the exhibit hall.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/kubecon-na-2024-curling.jpg&quot; class=&quot;img-responsive center-block&quot; alt=&quot;inflatable curling strip&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The exhibitor hall was fun, as usual, with some special highlights being the pickleball court and the inflatable curling. It didn’t feel over-crowded or empty, and the times I visited there were plenty of people walking around. The project pavilion was also in the exhibitor hall and I ended up doing an unscheduled impromptu demonstration of the Karpenter Cluster API project, which was fun but also chaotic. This demonstration led to several &lt;em&gt;hallway track&lt;/em&gt; conversations on Wednesday about Karpenter and Cluster API. While AI and security were the conference themes, one of my themes for the week was definitely Karpenter as I got into many more conversations about it than I anticipated.&lt;/p&gt;

&lt;p&gt;I saw a bunch of talks on Wednesday, and I would like to call attention to a couple that I think are worth watching when they are posted:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://sched.co/1i7ke&quot;&gt;Architecting Tomorrow: The Heterogeneous Compute Resources for New Types of Workloads - Alexander Kanevskiy, Intel Finland&lt;/a&gt; – &lt;a href=&quot;https://www.youtube.com/watch?v=jyovyLafMOs&quot;&gt;Watch on YouTube&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://sched.co/1i7mE&quot;&gt;From Observability to Performance - Nadia Pinaeva, Red Hat &amp;amp; Antonio Ojea, Google&lt;/a&gt; – &lt;a href=&quot;https://www.youtube.com/watch?v=uYo2O3jbJLk&quot;&gt;Watch on YouTube&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first talk on heterogeneous compute resources gave a window into the future of exposing more details about processors to help make scheduling and processing more efficient. Alexander gave a nice overview of how different workloads can be adversely affected by some processor architectures. In a world where nanoseconds can make a difference, this is a great talk to understand how to identify and minimize those bottlenecks.&lt;/p&gt;

&lt;p&gt;The second talk was a deep breakdown of how networking metrics can be used to identify performance limiters and speed bumps in your infrastructure. Understanding networking failures is difficult enough without all the layers that cloud native infrastructures add. Being able to see real world uses of the metrics alongside the methodology for understanding the implications of those metrics was enlightening for me.&lt;/p&gt;

&lt;p&gt;Near the end of the day, I ended up getting into a deep discussion with some friends from SIG Storage (shoutouts to Hemant, Jan, and Michelle). They had been chatting all day about a storage issue related to CSI drivers and the Cluster Autoscaler. It was an interesting discussion and I think after about an hour or two we had made some progress on possible solutions, now if only we could find the time to work on them!&lt;/p&gt;

&lt;p&gt;Wednesday evening was also the “Booth Crawl” at the exhibitor hall, but as I had dinner plans with my colleagues from Red Hat, I skipped out for the evening.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/kubecon-na-2024-booth.jpg&quot; class=&quot;img-responsive center-block&quot; alt=&quot;red hat booth&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;kubecon-day-2-thursday-november-14&quot;&gt;KubeCon Day 2, Thursday November 14&lt;/h2&gt;

&lt;p&gt;Day 2, I’m exhausted but driven forward by excitement and the goal of delivering a talk on Friday.&lt;/p&gt;

&lt;p&gt;For the first time in my KubeCon history, I attended a workshop on the DEI track. I saw this title in the schedule and, given my interest in building healthy open source communities, I felt that I had to attend:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://sched.co/1pee5&quot;&gt;Be Part of the Solution: Cultivating Inclusion in Open Source - Allyship Workshop&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I was rewarded with an hour of discussions and group activities where I got to hear about some common issues related to inclusion, and then got to spend time talking with people about their experiences. I had a good time and learned a little more about what I can do to help build more inclusive communities, and what to look for when things aren’t quite going right.&lt;/p&gt;

&lt;p&gt;I do a lot of work with the Cluster Autoscaler and Karpenter projects, so I was keenly interested to see the SIG Autoscaling update, delivered by my friend from the Cluster API community, Jack Francis. It was a nice overview covering all the projects that the SIG sponsors, how users can get involved, and some crystal ball gazing about what is coming next for the SIG. If you are interested in node or pod autoscaling, definitely watch the recording.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://kccncna2024.sched.com/event/1howV/sig-autoscaling-projects-update-jack-francis-microsoft&quot;&gt;SIG Autoscaling Projects Update - Jack Francis, Microsoft&lt;/a&gt; – &lt;a href=&quot;https://www.youtube.com/watch?v=3fr2J3G1s1U&quot;&gt;Watch on YouTube&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Something I love being a part of at KubeCon is the SIG Meet n’ Greet. It is an occasion for the Kubernetes SIGs (Special Interest Groups) to make some space and do a little self-promotion. I represented SIG Cloud Provider and had a number of interesting discussions with people who are interested in what the SIG does, and also how they can build cloud controllers for their own infrastructure offerings. I also had a few people approach me about Karpenter related topics, which was nice.&lt;/p&gt;

&lt;p&gt;My friend and teammate Joel Speed also had a talk on Thursday about CEL validation budgets. If you are using CEL with your Kubernetes API designs, I think it’s worth watching the recording to learn the deep details about how validation budgets are calculated.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://sched.co/1i7nv&quot;&gt;Exceeded Your Validation Cost Budget? Now What? - Joel Speed, Red Hat&lt;/a&gt; – &lt;a href=&quot;https://www.youtube.com/watch?v=IfaPAqDfJHk&quot;&gt;Watch on YouTube&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I ended my day at the conference by attending a talk on the OpenCost project. I am keenly interested in exploring how the Cluster API project could expose information about instance pricing. I’m not quite sure what the best way to do this is yet, but I have been wanting to explore OpenCost to see if it might be appropriate for Cluster API. I’m still not sure, but the talk I watched did help me understand some basics about OpenCost. It wasn’t quite what I was expecting, but I found it interesting as a primer.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://sched.co/1i7oQ&quot;&gt;Measuring All the Costs with OpenCost Plugins - Alex Meijer, Stackwatch&lt;/a&gt; – &lt;a href=&quot;https://www.youtube.com/watch?v=yLAx2z4FqSk&quot;&gt;Watch on YouTube&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;kubecon-day-3-friday-november-15&quot;&gt;KubeCon Day 3, Friday November 15&lt;/h2&gt;

&lt;p&gt;Last day of KubeCon, for now.&lt;/p&gt;

&lt;p&gt;I didn’t have a lot on my agenda for Friday aside from co-presenting the SIG Cloud Provider maintainer track talk. But, I was happily surprised to see this talk about flaky tests and continuous integration. It gave me a better window into how the Kubernetes testing infrastructure is configured and deployed. I am keenly interested in this because I would like to improve the state of testing for cloud controllers. And, as luck would have it, this talk about flaky CI was happening a few slots before our talk related to testing. I highly recommend watching the recording if you have an interest in Kubernetes continuous integration testing.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://sched.co/1hoxc&quot;&gt;Achieving and Maintaining a Healthy CI with Zero Test Flakes - Antonio Ojea, Michelle Shepardson &amp;amp; Benjamin Elder, Google&lt;/a&gt; – &lt;a href=&quot;https://www.youtube.com/watch?v=hl3jjCTTL50&quot;&gt;Watch on YouTube&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Fridays at KubeCon are always tough for me as I am usually exhausted but also want to catch up with people before we all leave. This year was no different. I spent much of my time walking around and having discussions with people (shoutout to Kevin on our talks of what will disrupt Kubernetes and the state of homelab clusters). But, as the day, and con, were winding down it was time to deliver our talk for SIG Cloud Provider:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://sched.co/1hoyJ&quot;&gt;Building a More Resilient Future with Advanced Cloud Provider Testing - Michael McCune, Red Hat &amp;amp; Bridget Kromhout, Microsoft&lt;/a&gt; – &lt;a href=&quot;https://www.youtube.com/watch?v=5FKMFlooC6c&quot;&gt;Watch on YouTube&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I’m happy and proud to say that the talk went well despite the standard technical shenanigans at the beginning. I want to say a big thank you to Bridget as well. I have had the pleasure of co-presenting with Bridget a few times now and she is a talented and amazing person to be on stage with. I was humbled to hear the audience’s reactions to our presentation and I’m so happy that people enjoyed themselves and appreciated our delivery. Looking forward to having another opportunity like this in the future.&lt;/p&gt;

&lt;p&gt;And with that, I started to make my way out of the building for the last time at this KubeCon. I did see a few people on the way out, and got my last hugs and well-wishes in. It was an amazing and exhausting experience as always, and I’m hopeful I can attend in London.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/kubecon-na-2024-leaving.jpg&quot; class=&quot;img-responsive center-block&quot; alt=&quot;looking down an empty hall&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;thoughts-and-takeaways&quot;&gt;Thoughts and takeaways&lt;/h2&gt;

&lt;p&gt;KubeCon is a bustling place filled with ideas and excitement, and this edition was no different. Here are some of my unvarnished thoughts about my experiences:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;“AI” workloads are still white hot, but I do hear people wondering aloud when, and if, we will see profitable business models emerge from this trend.&lt;/li&gt;
  &lt;li&gt;There are more clouds coming. As a SIG Cloud Provider co-chair, and Red Hat engineer on our cluster infrastructure team, I have a decent vantage point for viewing cloud integrations with Kubernetes. I had several direct conversations with people who are interested in writing cloud controllers for their clouds, and talked about wanting to integrate with Cluster API. This made me feel good about the health of that ecosystem.&lt;/li&gt;
  &lt;li&gt;More people talked to me about Karpenter Cluster API than I expected. I had several in-depth conversations with people about this project and how it works. There was a genuine excitement that Karpenter could work with Cluster API to unlock its features on all the available platforms. I was happy to hear this and tried to get a sense for what people are wanting. If we can get accurate cost information available in Cluster API, I have a feeling the Karpenter provider will get more attention.&lt;/li&gt;
  &lt;li&gt;Dynamic Resource Allocation (DRA) is almost as hot as “AI”. The push for more GPU-centric applications of DRA was present all throughout the con, with several talks being dedicated to DRA and GPU workloads. I think this just speaks to the popularity of workloads that require specific resources. And while today the talk is mostly about GPUs, I look forward to the day when we are talking about DRA for everything from CPUs and memory to customized hardware accelerators.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, that’s it, another KubeCon in the books. I hope this retrospective gave you a taste of what the excitement is all about. And I sincerely hope that if you find this exciting, that you also may attend some day. I look forward to seeing you out there on the road, and as always, stay safe out there, and happy hacking =)&lt;/p&gt;

</description>
				<pubDate>Tue, 19 Nov 2024 00:00:00 +0000</pubDate>
				<link>https://notes.elmiko.dev/2024/11/19/kubecon-na-2024-retrospective.html</link>
				<guid isPermaLink="true">https://notes.elmiko.dev/2024/11/19/kubecon-na-2024-retrospective.html</guid>
			</item>
		
			<item>
				<title>Developing the Karpenter Cluster API Provider</title>
				<description>&lt;p&gt;Since October 2023 I’ve been working with the &lt;a href=&quot;https://cluster-api.sigs.k8s.io&quot;&gt;Kubernetes Cluster API&lt;/a&gt; community
to develop a native &lt;a href=&quot;https://karpenter.sh&quot;&gt;Karpenter&lt;/a&gt; provider so that we can explore the behavior
of these projects together. Karpenter is an exciting node auto-provisioner that has features
for configurable cluster consolidation and deep cloud inventory awareness, and Cluster API is
a declarative infrastructure API for Kubernetes with coverage on nearly 2 dozen providers.
If these projects can work well together, it would give the community an excellent way to run
Karpenter on many cloud providers. As of last week, we have reached the minimally
viable implementation for a proof-of-concept, and we are in the
&lt;a href=&quot;https://github.com/kubernetes/org/issues/5097&quot;&gt;process of donating the repository&lt;/a&gt; to the Kubernetes SIGs community for wider
experimentation and development.&lt;/p&gt;

&lt;p&gt;Although we have reached a nice milestone for the project, there is still much work to do
as we attempt to reach feature parity with the native Karpenter implementations. We will also
need to learn how best to integrate Karpenter with Cluster API, where the bottlenecks might be,
and how we can create the best user experience possible.&lt;/p&gt;

&lt;p&gt;With the first phase of work complete, it’s a nice time to reflect on what we have done and
how we got here. This is especially relevant given that the developer community around Karpenter
is still growing and there are many people interested in implementing providers for other
platforms. In this post, I am going to share the process we used to develop the Cluster API
provider in hopes that it will help others who are making the same journey.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/karpcapi-logo.svg&quot; class=&quot;img-responsive center-block&quot; alt=&quot;Karpenter Cluster API logo&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;planning-the-foundation&quot;&gt;Planning the foundation&lt;/h2&gt;

&lt;p&gt;To help us solve problems of architectural constraints, project goals, and user experience options,
we started a &lt;a href=&quot;https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/community/20231018-karpenter-integration.md&quot;&gt;Karpenter feature group&lt;/a&gt; in the Cluster API community where we had a regular time
and space to have and record our discussions and decisions. It took us a few months to arrive at a
design and initial plan for how we would create the proof-of-concept Karpenter provider. This process was hugely beneficial
as it gave us ample opportunity to talk through the software constraints that Cluster API
placed on this implementation. Specifically, the asynchronous nature of infrastructure creation in
Cluster API is slightly at odds with the synchronous nature of Karpenter’s cloud interface.&lt;/p&gt;

&lt;p&gt;One of the challenges in writing a Karpenter provider for Cluster API is the limitation of needing
to use the Kubernetes API for making changes to the infrastructure. This means that any time we
want to learn about the inventory or status of the infrastructure resources, we need to query the
Kubernetes API and potentially exercise reconciliation loops to ensure that asynchronous behavior
is captured accurately. This is in stark contrast to infrastructure providers where there is
direct access to a metadata service with synchronous responses.&lt;/p&gt;

&lt;p&gt;In addition to the engineering concerns around designing an integration between Cluster API and
Karpenter, there is also a necessary focus on how to expose the API features of both projects. Cluster API
and Karpenter are both provisioning tools built on top of Kubernetes, and this means that they
have some overlap in the features they expose. A point of discussion that we spent several meetings
exploring was where on the spectrum of “Cluster API to Karpenter” the community wants the
user experience to sit. Given the nature of Cluster API, I feel this specific design concern will most
likely not be an issue for other provider implementers, unless those providers have a user community
that expects a deep interaction with the platform APIs.&lt;/p&gt;

&lt;h2 id=&quot;initial-code-and-trajectory&quot;&gt;Initial code and trajectory&lt;/h2&gt;

&lt;p&gt;With the plans solidified and the community in consensus about our initial direction, I started to
build a skeleton repository for the project. I did this by copying the &lt;a href=&quot;https://github.com/kubernetes-sigs/karpenter/tree/main/kwok&quot;&gt;Kwok provider&lt;/a&gt; from
the &lt;a href=&quot;https://github.com/kubernetes-sigs/karpenter&quot;&gt;Karpenter repository&lt;/a&gt; into a new repository, and then building a simple Makefile and
the necessary Go files to build the project. At this point I had a basic buildable project and could
begin the next step of defining the boundaries for the code changes that would be needed.&lt;/p&gt;

&lt;p&gt;The Karpenter developers have created a straightforward interface that providers must implement. In
general, it defines functions that provide information about the infrastructure inventory and resources,
and functions to manage the nodes of the cluster and their resources. It is small enough to reproduce
here:&lt;/p&gt;

&lt;div class=&quot;language-go highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;type&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;CloudProvider&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;interface&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
	&lt;span class=&quot;c&quot;&gt;// Create launches a NodeClaim with the given resource requests and requirements and returns a hydrated&lt;/span&gt;
	&lt;span class=&quot;c&quot;&gt;// NodeClaim back with resolved NodeClaim labels for the launched NodeClaim&lt;/span&gt;
	&lt;span class=&quot;n&quot;&gt;Create&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;context&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Context&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;*&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;v1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;NodeClaim&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;*&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;v1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;NodeClaim&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;error&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
	&lt;span class=&quot;c&quot;&gt;// Delete removes a NodeClaim from the cloudprovider by its provider id&lt;/span&gt;
	&lt;span class=&quot;n&quot;&gt;Delete&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;context&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Context&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;*&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;v1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;NodeClaim&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;error&lt;/span&gt;
	&lt;span class=&quot;c&quot;&gt;// Get retrieves a NodeClaim from the cloudprovider by its provider id&lt;/span&gt;
	&lt;span class=&quot;n&quot;&gt;Get&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;context&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Context&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;string&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;*&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;v1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;NodeClaim&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;error&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
	&lt;span class=&quot;c&quot;&gt;// List retrieves all NodeClaims from the cloudprovider&lt;/span&gt;
	&lt;span class=&quot;n&quot;&gt;List&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;context&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Context&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;([]&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;*&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;v1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;NodeClaim&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;error&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
	&lt;span class=&quot;c&quot;&gt;// GetInstanceTypes returns instance types supported by the cloudprovider.&lt;/span&gt;
	&lt;span class=&quot;c&quot;&gt;// Availability of types or zone may vary by nodepool or over time.  Regardless of&lt;/span&gt;
	&lt;span class=&quot;c&quot;&gt;// availability, the GetInstanceTypes method should always return all instance types,&lt;/span&gt;
	&lt;span class=&quot;c&quot;&gt;// even those with no offerings available.&lt;/span&gt;
	&lt;span class=&quot;n&quot;&gt;GetInstanceTypes&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;context&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Context&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;*&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;v1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;NodePool&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;([]&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;*&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;InstanceType&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;error&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
	&lt;span class=&quot;c&quot;&gt;// DisruptionReasons is for CloudProviders to hook into the Disruption Controller.&lt;/span&gt;
	&lt;span class=&quot;c&quot;&gt;// Reasons will show up as StatusConditions on the NodeClaim.&lt;/span&gt;
	&lt;span class=&quot;n&quot;&gt;DisruptionReasons&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[]&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;v1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;DisruptionReason&lt;/span&gt;
	&lt;span class=&quot;c&quot;&gt;// IsDrifted returns whether a NodeClaim has drifted from the provisioning requirements&lt;/span&gt;
	&lt;span class=&quot;c&quot;&gt;// it is tied to.&lt;/span&gt;
	&lt;span class=&quot;n&quot;&gt;IsDrifted&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;context&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Context&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;*&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;v1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;NodeClaim&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;DriftReason&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;error&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
	&lt;span class=&quot;c&quot;&gt;// Name returns the CloudProvider implementation name.&lt;/span&gt;
	&lt;span class=&quot;n&quot;&gt;Name&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;string&lt;/span&gt;
	&lt;span class=&quot;c&quot;&gt;// GetSupportedNodeClasses returns CloudProvider NodeClass that implements status.Object&lt;/span&gt;
	&lt;span class=&quot;c&quot;&gt;// NOTE: It returns a list where the first element should be the default NodeClass&lt;/span&gt;
	&lt;span class=&quot;n&quot;&gt;GetSupportedNodeClasses&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[]&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;status&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Object&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The first thing I did after getting the project building was to stub out these functions and have them
all return the equivalent of a zero response or an error. In this way, anyone looking at the code
could easily see what was implemented and what was not. I also added a README file to the project with
a checklist showing which interfaces were implemented. I then set off on the task of implementing the
individual functions.&lt;/p&gt;

&lt;p&gt;In the case of Cluster API, one of the things I spent a significant amount of time on was figuring out
how to translate capacity, geographic, and pricing data from the instance types to the Karpenter API types.
This might not be as big a problem on platforms with direct access to a metadata service, but given the
abstracted nature of Cluster API it posed a challenge in writing some of the functions. I am sure this will
be a point of investigation and perhaps expansion in the Cluster API project as we learn more about how
to expose deep infrastructure metadata.&lt;/p&gt;

&lt;p&gt;One of the biggest challenges for Cluster API is the implementation of the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Create&lt;/code&gt; method. In Cluster API
the creation of new Machines (the API type associated with a Node’s instance) is controlled by a scale
subresource on another API type. This relationship is similar to that of a Pod to a ReplicationController or
Deployment in that an increase in a replica count will trigger the creation of new resources. This means
that we must increase the replica count, and then wait for the Machine resource to be created before we
know the identifying information about the infrastructure resource. Karpenter, in contrast, would like to
know that identifying information when &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Create&lt;/code&gt; returns, and this causes the synchronicity issue between the two.&lt;/p&gt;

&lt;p&gt;When learning about how to implement the interface functions and what Karpenter expected from them, I spent
much time exploring the &lt;a href=&quot;https://github.com/kubernetes-sigs/karpenter&quot;&gt;core Karpenter repository&lt;/a&gt;, but I also studied the way that the
&lt;a href=&quot;https://github.com/aws/karpenter-provider-aws/blob/main/pkg/cloudprovider/cloudprovider.go&quot;&gt;AWS provider&lt;/a&gt; and &lt;a href=&quot;https://github.com/Azure/karpenter-provider-azure/blob/main/pkg/cloudprovider/cloudprovider.go&quot;&gt;Azure provider&lt;/a&gt; implemented their cloud provider interface. This study
helped me immensely to understand what values would be absolutely required and which might be more supplemental.
I also used the code of the &lt;a href=&quot;https://github.com/kubernetes-sigs/karpenter/tree/main/kwok&quot;&gt;Kwok provider&lt;/a&gt; as an example, especially early on in development, but as
I got further into testing and debugging I found the AWS and Azure providers to be more useful examples.&lt;/p&gt;

&lt;p&gt;Another very helpful resource was the Karpenter community itself. There are several ways to contact
and participate with the &lt;a href=&quot;https://karpenter.sh/docs/contributing/&quot;&gt;Karpenter working group&lt;/a&gt;, and I highly recommend reaching out if you
have questions. I was able to connect with several wonderful and helpful people in the Karpenter community
by asking questions on their Kubernetes Slack channel (#karpenter-dev) and by attending their meetings with
questions and announcements on their agenda.&lt;/p&gt;

&lt;h3 id=&quot;a-note-on-code-structure&quot;&gt;A note on code structure&lt;/h3&gt;

&lt;p&gt;Something that I found helpful when sketching out the initial structure for the code of the project was
considering how the AWS and Azure providers were constructed. Looking at their designs from the cloud provider
interface down, it was easy to see that they were both following a similar pattern. In particular, the
abstraction between the cloud provider interface functions and the various resource and data provider
functions is worthy of note for designing new implementations.&lt;/p&gt;

&lt;h2 id=&quot;handling-custom-resources&quot;&gt;Handling custom resources&lt;/h2&gt;

&lt;p&gt;Karpenter requires a few custom resource definitions in order to operate, notably the NodePool and NodeClaim
definitions, but it also needs some sort of NodeClass implementation from the provider. I found the
pattern for inclusion to be straightforward in the AWS and Azure projects, namely including the YAML manifests
in a subdirectory of the API code. Following this pattern seemed like the easiest way to keep consistency
for developers who might be inspecting the Cluster API provider in the future. To simplify generating those
manifests, I added some scripting to the Makefile to render them
directly from the vendored dependencies.&lt;/p&gt;

&lt;p&gt;Checking the custom resource definition manifests in to the code repository makes it much easier to
automate testing and to include things like Helm charts and other examples. In addition
to creating the core Karpenter manifests for NodePool and NodeClaim, the repository will also need to contain
any platform-specific manifests, such as the NodeClass implementation.&lt;/p&gt;

&lt;p&gt;A design question that came up during development was around the lifecycle of Karpenter’s NodePool and NodeClaim
resources, and how they could be used to carry provider specific information. To be clear, both the NodePool and NodeClaim objects
are created and reconciled types within the Kubernetes API, and as such you can build other controllers to
interact with them. Both the NodePool and NodeClaim are also free to carry provider specific information in the
form of annotations and labels, as you might expect from any other API object. In my experience, I did not
have to implement any specific functions for controlling the lifecycle of the Karpenter resources; they are
all handled by the core Karpenter controllers.&lt;/p&gt;

&lt;h2 id=&quot;testing&quot;&gt;Testing&lt;/h2&gt;

&lt;p&gt;For Cluster API, testing gave me tremendous confidence in the code that we were building. A big advantage
for Cluster API in this area is that we could use &lt;a href=&quot;https://book.kubebuilder.io/reference/envtest.html&quot;&gt;kubebuilder’s envtest&lt;/a&gt; package to great effect
since most of the platform interactions would happen through Kubernetes resources.&lt;/p&gt;

&lt;p&gt;Using examples from kubebuilder and some other projects, I was able to quickly configure a test suite that
could exercise all of the cloud provider interface functions. This ultimately became a foundation point for
the project as it gave me great confidence when bringing the code out of testing and on to a live cluster
environment.&lt;/p&gt;

&lt;p&gt;Testing should be a core part of whatever provider is being built, but depending on the provider, and perhaps the
maturity of tooling, it may be more difficult to mock out the infrastructure specific parts of instance
management. This is something to consider and plan for when building a new provider implementation.&lt;/p&gt;

&lt;h2 id=&quot;proof-of-concept&quot;&gt;Proof of concept&lt;/h2&gt;

&lt;p&gt;After nearly 3 months of design and coding, we achieved the initial proof-of-concept version of the Karpenter
Cluster API provider. There were a few minor bumps to smooth out as integration testing began, but thanks to
the unit testing the fixes were quick to implement and before long I was able to demonstrate the application
in action.&lt;/p&gt;

&lt;iframe class=&quot;center-block&quot; width=&quot;560&quot; height=&quot;315&quot; src=&quot;https://www.youtube.com/embed/BZz5ibGP7ZQ?si=DlwNO2O8-nuGNti7&quot; title=&quot;YouTube video player&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share&quot; referrerpolicy=&quot;strict-origin-when-cross-origin&quot; allowfullscreen=&quot;&quot;&gt;&lt;/iframe&gt;

&lt;h2 id=&quot;future-plans&quot;&gt;Future plans&lt;/h2&gt;

&lt;p&gt;This is just the beginning. The most basic functionality is working but there are still several open questions:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;will we need a more reactive, asynchronous, design for Machine creation?&lt;/li&gt;
  &lt;li&gt;how will price data be exposed on Cluster API objects?&lt;/li&gt;
  &lt;li&gt;can we implement the opt-in scale from zero capacity information for each provider?&lt;/li&gt;
  &lt;li&gt;is it possible to use efficient native interfaces like EC2Fleet and VMSS?&lt;/li&gt;
  &lt;li&gt;how will kubelet and user data configuration changes be handled?&lt;/li&gt;
  &lt;li&gt;can drift be implemented?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And those are just a few of the top issues; there will be plenty more as we start to explore the
behavior of this integration. I am quite happy and curious to see how things go from here and what the
community would like to see from this project.&lt;/p&gt;

&lt;p&gt;If you would like to get involved, please see the &lt;a href=&quot;https://cluster-api.sigs.k8s.io/introduction.html?highlight=meeting#-community-discussion-contribution-and-support&quot;&gt;Cluster API community book&lt;/a&gt; for more
information on how to contact people and where to follow. Also, please follow the
&lt;a href=&quot;https://github.com/kubernetes/org/issues/5097&quot;&gt;migration pull request for the repository&lt;/a&gt; to see when the
repository will be adopted into the Kubernetes SIGs organization. Assuming the migration goes through, we
will have the common Kubernetes contribution process set up there with plenty of issues to share.&lt;/p&gt;

&lt;p&gt;Although there are many members of the Cluster API and Karpenter communities who participated in the
design and development of this project, I would like to give a special mention to GitHub user @daimaxiaxie who
contributed some very timely patches and collaborated on some tricky code issues. Thank you, I am grateful
for your help!&lt;/p&gt;

&lt;p&gt;Hopefully this has been helpful for those of you who might be building your own Karpenter provider, or who are just
interested in open source software development. The biggest takeaway from this experience that I can share is
that reading the other providers’ code as examples helped me tremendously; it took time to study their sources, but I
feel it made my understanding of Karpenter better and gave me more confidence about what we were building.
Have fun out there and as always, happy hacking =)&lt;/p&gt;

</description>
				<pubDate>Sun, 18 Aug 2024 00:00:00 +0000</pubDate>
				<link>https://notes.elmiko.dev/2024/08/18/developing-the-karpenter-cluster-api-provider.html</link>
				<guid isPermaLink="true">https://notes.elmiko.dev/2024/08/18/developing-the-karpenter-cluster-api-provider.html</guid>
			</item>
		
			<item>
				<title>Comparing the Kubernetes Cluster Autoscaler and Karpenter</title>
				<description>&lt;p&gt;Over the last couple years, the &lt;a href=&quot;https://karpenter.sh&quot;&gt;Karpenter project&lt;/a&gt; has been gaining
popularity and momentum in the Kubernetes community. It is often spoken about
in the same breath as the &lt;a href=&quot;https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler&quot;&gt;Cluster Autoscaler&lt;/a&gt; (CAS), and is commonly
viewed as a node autoscaler. There are nuanced differences between the two
projects that make this equivalence slightly inaccurate and I’d like to
explore and highlight those differences.&lt;/p&gt;

&lt;h2 id=&quot;a-little-background-for-context&quot;&gt;A little background for context&lt;/h2&gt;

&lt;p&gt;To start with though, why am I even talking about this?&lt;/p&gt;

&lt;p&gt;As part of my &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;$JOB&lt;/code&gt; at Red Hat I spend a great deal of time working on the
CAS; mostly doing maintenance and ensuring that the &lt;a href=&quot;https://cluster-api.sigs.k8s.io&quot;&gt;Cluster API&lt;/a&gt;
provider has as few bugs and as many features as possible. I am also working
with the Cluster API community’s &lt;a href=&quot;https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/community/20231018-karpenter-integration.md&quot;&gt;Karpenter feature group&lt;/a&gt; to understand
how we can integrate these projects while preserving the core features from
both. On top of that, I have been interested in distributed systems and
cloud infrastructures for many years and find the topic of autoscaling to be
fascinating.&lt;/p&gt;

&lt;p&gt;I get asked a lot of questions about CAS and Karpenter and I thought it would
be worthwhile to write something a little more durable and public to help
share my perspectives and opinions. That said, what is written here is my
opinion based on reading documentation and source code, and by operating
the projects. I highly recommend reviewing the
&lt;a href=&quot;https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md&quot;&gt;Kubernetes Cluster Autoscaler FAQ&lt;/a&gt; and
&lt;a href=&quot;https://karpenter.sh/docs/concepts/&quot;&gt;Karpenter Concepts&lt;/a&gt; documentation pages as many details can
be gleaned from these sources. I will also note that my bias is solidly rooted
in the maintenance of the Cluster API provider for CAS and building a Cluster
API integration for Karpenter, &lt;em&gt;caveat emptor&lt;/em&gt;.&lt;/p&gt;

&lt;h2 id=&quot;whats-in-a-name&quot;&gt;What’s in a name?&lt;/h2&gt;

&lt;p&gt;To start with, let’s look at the names of the projects: Cluster Autoscaler,
and Karpenter. A Cluster Autoscaler is something that will automatically scale
your cluster. This clearly implies that I have a cluster and somehow the nodes
in it will be scaled. Meaning that I can have more, or fewer, of the nodes that
I already have based on some sort of calculation. Fairly straightforward.&lt;/p&gt;

&lt;p&gt;On the other hand, a Karpenter (carpenter) is someone who builds things out
of wood. I interpret this to mean that the project implies it will be building
things, in this case Kubernetes nodes. So, instead of viewing Karpenter as
an application that will scale the existing nodes in my cluster, it might be
more accurate to view it as a node builder that can provision new nodes in
the cluster based on what the workloads of the cluster need.&lt;/p&gt;

&lt;h2 id=&quot;scaling-and-provisioning&quot;&gt;Scaling and provisioning&lt;/h2&gt;

&lt;p&gt;What does it mean that the CAS will scale things and Karpenter will provision
things?&lt;/p&gt;

&lt;p&gt;When configuring the CAS for use, one thing the user must do is to configure
the node groups that will be available for CAS to manipulate. The
configuration is highly provider specific and is required for the CAS to
understand what types of nodes it can scale. For example, with the Cluster API
provider this process involves the user adding specific annotations to their
scalable resources (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;MachinePool&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;MachineDeployment&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;MachineSet&lt;/code&gt;) to instruct
the CAS about scaling inclusion and limits.&lt;/p&gt;
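&lt;p&gt;As a concrete illustration, the Cluster API provider looks for minimum and maximum size annotations on the scalable resource. A sketch of what that configuration looks like, where the resource name and the sizes are hypothetical:&lt;/p&gt;

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: workers   # hypothetical name
  annotations:
    # Opt this resource in to autoscaling and declare its limits.
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "0"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "10"
```

&lt;p&gt;Resources without these annotations are simply invisible to the CAS, which is part of what makes its view of the cluster so dependent on up-front configuration.&lt;/p&gt;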

&lt;p&gt;By contrast, when configuring Karpenter the user must specify what constraints
will apply when determining how pods will fit onto specific instance types
and categories. This means that Karpenter can understand what types of
instances will become nodes that a specific pod can be scheduled to, and then
it can check the infrastructure inventory to determine if an instance can be
created to contain the pod. An example of this can be seen in how the
&lt;a href=&quot;https://karpenter.sh/v0.33/faq/#how-does-karpenter-dynamically-select-instance-types&quot;&gt;Karpenter AWS provider can use EC2 Fleet&lt;/a&gt; to find many
instances which might fit the pod (or group of pods) and then choose the best
option based on user preferences.&lt;/p&gt;

&lt;p&gt;Allowing the application to understand provisioning instead of scaling also
lends itself to more dynamic discovery of instance types and categories at run
time. This means that as cloud inventory changes, Karpenter can react to those
changes automatically and make moment-to-moment decisions about market
availability and pricing. By contrast, emulating this behavior using the CAS
with Cluster API would require the user to continually update the
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;InfrastructureMachineTemplate&lt;/code&gt; resources in their cluster as well as
the associated scalable types.&lt;/p&gt;

&lt;h2 id=&quot;when-to-activate-pending-or-unscheduled&quot;&gt;When to activate, pending or unscheduled?&lt;/h2&gt;

&lt;p&gt;Another subtle point of difference that flows from the notion of scaling
versus provisioning is the conditions under which these applications will jump
into action.&lt;/p&gt;

&lt;p&gt;When there are pending pods in a cluster, a suitable node type exists that
could receive the pod, but no current node has enough allocatable capacity for
it. In these cases, when configured, the CAS and Karpenter both have the
ability to add more nodes of the type that will accept the workload.&lt;/p&gt;

&lt;p&gt;When there are unschedulable pods in a cluster, this means that there are no
nodes that can satisfy the requirements of the pod. In these cases the CAS
will only act if it has a node group which could possibly make a node to
schedule that pod. This scenario can happen when the CAS is configured to
have node groups of size zero and thus there are no nodes in the cluster which
could schedule the pod, but the CAS knows how to make that type of node.&lt;/p&gt;

&lt;p&gt;Karpenter, by contrast, is configured by the user to instruct the provisioning
of nodes based on pod constraints. In the case of an unschedulable pod,
Karpenter will refer to its provisioners, and will then request several
instance types which could satisfy the pod requirements. Karpenter can then
choose the best (by cost, resources, availability, etc.) instance type to create
as the pod is being requested.&lt;/p&gt;

&lt;p&gt;This is not to say that the CAS won’t also look at several instance types when
deciding which to make, but we need to look at the details a little closer to
understand that choice. In Karpenter, having the ability to calculate instance
types based on the workload constraints means that it can request a broad
range of instances (limited by user configuration) from the infrastructure. By
contrast, CAS can be configured with many instance types and when presented
with a decision about which type of instance to request for a specific workload
it will use the user’s preference (up to and including &lt;a href=&quot;https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/expander/grpcplugin&quot;&gt;custom code&lt;/a&gt;)
to make that choice. The CAS is limited by its node group configurations,
which are different on each platform, and may or may not support dynamic
instance discovery and full resource and cost data.&lt;/p&gt;
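
&lt;p&gt;For the CAS, this preference is expressed through the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--expander&lt;/code&gt; flag.
A sketch of what this looks like on the command line, with the provider and
strategy chosen purely for illustration:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# prefer node groups that waste the least CPU and memory
cluster-autoscaler \
  --cloud-provider=clusterapi \
  --expander=least-waste
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;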

&lt;p&gt;In both cases, CAS and Karpenter, the user has control over the options for
how nodes are chosen, with preferences and priorities across several ranges
such as pricing and resource consumption.&lt;/p&gt;

&lt;h2 id=&quot;disrupting-behavior-and-bin-packing&quot;&gt;Disrupting behavior and bin packing&lt;/h2&gt;

&lt;p&gt;Another topic that comes up frequently when talking about Karpenter is
&lt;a href=&quot;https://en.wikipedia.org/wiki/Bin_packing_problem&quot;&gt;bin packing&lt;/a&gt;. Bin packing in this context refers to the
algorithms that a program uses to fit items into a set of groups (bins)
based on arbitrary constraints. Both CAS and Karpenter use bin packing when
calculating how pods could fit into the possible node choices available. What
people are most frequently referring to when talking about bin packing and
Karpenter is its &lt;a href=&quot;https://karpenter.sh/docs/concepts/disruption/&quot;&gt;consolidation and disruption features&lt;/a&gt;, which the
CAS does not replicate.&lt;/p&gt;

&lt;p&gt;A frequent problem with distributed systems that have workloads which come and
go over time, is that the cluster can become sparsely populated. As workloads
are removed and not replaced they leave &lt;em&gt;holes&lt;/em&gt; in the nodes. This in turn
causes nodes within a cluster to become underutilized. In
these situations an activity is required to rebalance the workloads and
resources in the cluster to optimize usage. CAS and Karpenter both have
features to help address this problem, with Karpenter providing a more active
approach.&lt;/p&gt;

&lt;p&gt;When using the CAS, users have the ability to configure node resource
utilization thresholds and inactivity timers. Resource usage is calculated
based on summed pod resource requests compared to node allocatable capacities.
When utilization falls below the threshold, a node is cordoned and drained, to
allow for graceful termination before being removed. The inactivity timer
provides the mechanism for dictating how long underutilized nodes should
persist in the cluster. CAS is not doing any explicit rebalancing of workloads
during this scale down, it is only removing underutilized nodes. Any pods
disrupted during a node removal will be rescheduled by Kubernetes.&lt;/p&gt;
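
&lt;p&gt;A sketch of the relevant CAS flags, with illustrative values:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# remove nodes below 50% requested utilization that have
# been unneeded for at least 10 minutes
cluster-autoscaler \
  --scale-down-utilization-threshold=0.5 \
  --scale-down-unneeded-time=10m
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;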

&lt;p&gt;Karpenter provides users with more options than CAS for defining how the cluster
will remove and replace nodes; it refers to these events as consolidation and
disruption. In the example of sparse workloads from above, Karpenter can
consolidate the cluster on a user defined schedule, preferring to choose instances that
are cheaper or more resource efficient. This consolidation activity will repack
pods within the cluster (using the Kubernetes scheduler) to replace inefficient
node configurations. Karpenter also allows users to replace
nodes based on cluster configuration skew, age of node in the cluster, and manual
intervention. Aside from manual intervention, the CAS does not provide an interface
for configuring arbitrary node replacement based on user defined conditions.
In all cases, when Karpenter removes nodes it follows an orderly eviction process
to allow for graceful node termination.&lt;/p&gt;
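
&lt;p&gt;These behaviors are driven from the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;NodePool&lt;/code&gt; API. A minimal sketch
of a v1beta1 disruption block, with illustrative values:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  disruption:
    # actively consolidate underutilized nodes
    consolidationPolicy: WhenUnderutilized
    # replace nodes after 30 days to limit configuration skew
    expireAfter: 720h
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;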

&lt;h2 id=&quot;community-and-sig-engagement&quot;&gt;Community and SIG engagement&lt;/h2&gt;

&lt;p&gt;You might have heard around the Kubernetes community that Karpenter has joined
the SIG Autoscaling community. This is true!&lt;/p&gt;

&lt;p&gt;As of last December, the &lt;a href=&quot;https://github.com/kubernetes-sigs/karpenter&quot;&gt;Karpenter project core&lt;/a&gt; has been donated to
the Kubernetes Autoscaling SIG for maintenance and contribution. This package
is meant to be used as a library for providers to implement on their platforms.
Currently there are &lt;a href=&quot;https://github.com/aws/karpenter-provider-aws&quot;&gt;AWS&lt;/a&gt; and &lt;a href=&quot;https://github.com/Azure/karpenter&quot;&gt;Azure&lt;/a&gt; implementations.
Hopefully in the future we will have a Cluster API version as well ;)&lt;/p&gt;

&lt;p&gt;I think it’s important to examine the provider implementation details a
little more closely. I am optimistic about building a generic Cluster API
provider that would unlock Karpenter on all the Cluster API platforms, but I
also acknowledge that this might not provide the best experience and there will
be challenges to implementation. Karpenter would like to act as a provisioner
in the cluster, but Cluster API also wants to perform this role. To preserve the Cluster
API experience for users, the Karpenter and Cluster API controllers will need
to cooperate on the provisioning front. Making this interface generic might mean
losing access to some of the powerful cloud interfaces, like EC2 Fleet, which
help to make Karpenter powerful by extension. I’m sure there will be solutions
to these problems, but it’s a point of concern that I think about frequently.&lt;/p&gt;

&lt;p&gt;In addition to code, the joining of the communities has led to progress in
defining common APIs around cluster lifecycle in relation to node
provisioning and removal. See the
&lt;a href=&quot;https://docs.google.com/document/d/1rHhltfLV5V1kcnKr_mKRKDC4ZFPYGP4Tde2Zy-LE72w/edit&quot;&gt;Cluster Autoscaler/Karpenter API Alignment AEP draft&lt;/a&gt; for more
information.&lt;/p&gt;

&lt;h2 id=&quot;is-one-better-than-the-other&quot;&gt;Is one “better” than the other?&lt;/h2&gt;

&lt;p&gt;This is a question that I don’t think is quite right. Both CAS and Karpenter
are tremendous software applications with many satisfied users. They are
different in approach and features, and I don’t think it’s fair to proclaim
that one is greater or lesser than the other. I think it’s more appropriate to
ask, &lt;em&gt;which one is better for your use case?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;There is tremendous overlap between the two applications but they do have
different, opinionated, approaches to solving the problem of having
just-in-time resources in Kubernetes. In many respects, Karpenter can be
configured in a manner to perform the same task as the CAS, but the converse is
not true. In this respect Karpenter’s features might be seen as
a superset of the CAS’s features.&lt;/p&gt;

&lt;p&gt;On the other hand, in clusters where swings in cluster size might be
lower and choice of instance type is not required to be as dynamic, operating
the CAS could prove to be a less complex task. Especially if you have been
using the CAS for years already.&lt;/p&gt;

&lt;p&gt;Additionally, CAS currently runs on 28 providers listed in the repository, with
at least 2 of those (Cluster API and OpenStack) being platforms that run on a
multitude of other platforms. Karpenter currently only supports AWS and Azure.&lt;/p&gt;

&lt;h2 id=&quot;caution-strong-opinions-ahead&quot;&gt;Caution, strong opinions ahead&lt;/h2&gt;

&lt;p&gt;Karpenter seems ideal for situations where you want to manage larger
heterogeneous clusters that have high amounts of workload churn. Where resource
maximization and cost reduction are primary drivers to configuring and
optimizing the cluster. In these scenarios, Karpenter’s ability to consolidate
and choose from a wide variety of instance types will be very beneficial.&lt;/p&gt;

&lt;p&gt;The CAS seems well suited in cases where your cluster has a very reliable rate
of growth, and the instance types are more homogeneous across the cluster. This
applies well in smaller clusters, especially if the need for scaling is only a
single node or two over a short period of time (e.g. bursting for an evening).
It also applies well in clusters where the instances are treated as a large pool
and the specific types are less important, or in situations where having some
overprovisioned capacity is preferred.&lt;/p&gt;

&lt;p&gt;I find Karpenter’s configuration options to be more complicated than the CAS,
and I’m not sure how hard it is to debug issues with Karpenter as I have much
more time on the CAS. I really like that Karpenter has taken the approach to
use API resources (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;NodePool&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;NodeClaim&lt;/code&gt;, etc.) to drive its behavior as I
think it makes it easier to reason about functionality and exposes more context
to users.&lt;/p&gt;

&lt;p&gt;Given the similarities and differences, I think it’s really difficult to make
an “apples to apples” comparison between the two. Karpenter really seems to
be positioned as an application that can open the doors on an infrastructure
with a large dynamic inventory. I suppose I am not surprised that this
technology originated at AWS since they specialize in having just such an
inventory. I have a feeling the differences with CAS would be much smaller if Karpenter
were implemented on a platform with a smaller or less dynamic inventory. The
consolidation and disruption features are very nice, and big features for
Karpenter, but I believe some of these activities could be replicated by
well crafted CAS and Kubernetes configurations, use of projects like the
&lt;a href=&quot;https://github.com/kubernetes-sigs/descheduler&quot;&gt;descheduler&lt;/a&gt;, and custom automation. At this point though, these are
just my theories.&lt;/p&gt;

&lt;p&gt;I hope my impressions, interpretations, and opinions expressed here have helped
you to figure out what is most beneficial for your needs. If you’ve
made it this far, I really appreciate it, and as always happy hacking o/&lt;/p&gt;

</description>
				<pubDate>Sun, 21 Jan 2024 00:00:00 +0000</pubDate>
				<link>https://notes.elmiko.dev/2024/01/21/comparing-cas-and-karpenter.html</link>
				<guid isPermaLink="true">https://notes.elmiko.dev/2024/01/21/comparing-cas-and-karpenter.html</guid>
			</item>
		
			<item>
				<title>From Dodging to Shooting in Godot</title>
				<description>&lt;p&gt;As a child video games fascinated me, and that inspiration was a large
part of what drove me to learn about computers and programming. Over the
years I’ve spent many wonderful hours playing, designing, and implementing
games. I’ve even managed to get paid for it a few times in my life. Making
games is a hobby that I love to dabble with when the time, and for those
times I’ve found the &lt;a href=&quot;https://godotengine.org&quot;&gt;Godot game engine&lt;/a&gt; to be a powerful and license
friendly toolkit to use.&lt;/p&gt;

&lt;p&gt;Learning Godot can be frustrating at times, but it can also be very relaxing
to explore the IDE that the community has created. I’ve done several
tutorials and have even created a few simple games
(&lt;a href=&quot;https://wmd.opbstudios.com&quot;&gt;Warfare, Magic, Divinity&lt;/a&gt;, and &lt;a href=&quot;https://mom.opbstudios.com&quot;&gt;Maps of Mnemos&lt;/a&gt;), but recently
I’ve wanted to learn more about shooting style games. I thought that converting
the “Dodge the Creeps” game from the basic tutorial into a “Shoot the Creeps” game
would make for a nice exercise and I could reuse lessons learned in the original.&lt;/p&gt;

&lt;p&gt;So, here is my take on “Shoot the Creeps”. I’ve made some inherent design choices
such as fixing the player at the bottom of the screen and shooting upward, likewise
my code choices might not be the most idiomatic Godot, but I’ve tried to follow the
patterns established in the original tutorial. &lt;a href=&quot;https://gitlab.com/elmiko/shoot-the-creeps&quot;&gt;Shoot the Creeps source&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/shoot-the-creeps.gif&quot; class=&quot;img-responsive center-block&quot; alt=&quot;shoot the creeps screenshot&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;a-note-on-language&quot;&gt;A note on language&lt;/h2&gt;

&lt;p&gt;I’ve chosen to use GDScript for all my examples, apologies ahead of time to folks
who aren’t using that language. Perhaps in the future I will get into the other
language options, but given the similarity between Python and GDScript, coupled with
&lt;a href=&quot;https://notes.elmiko.dev/2022/12/18/why-i-keep-python-in-the-tool-box.html&quot;&gt;my love of python&lt;/a&gt;, it has been the path of least resistance for me.&lt;/p&gt;

&lt;h2 id=&quot;step-1-do-the-tutorial&quot;&gt;Step 1, do the tutorial!&lt;/h2&gt;

&lt;p&gt;No, seriously, that’s what I did to start this exercise =)&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://docs.godotengine.org/en/stable/getting_started/first_2d_game/index.html&quot;&gt;Your first 2D game – Godot Engine (stable) documentation in English&lt;/a&gt;&lt;/p&gt;

&lt;h2 id=&quot;move-the-player-to-the-bottom&quot;&gt;Move the player to the bottom&lt;/h2&gt;

&lt;p&gt;The first choice I have made for this shooter is that the player will be at
the bottom of the screen and only move left and right, akin to classics such
as &lt;a href=&quot;https://en.wikipedia.org/wiki/Space_Invaders&quot;&gt;Space Invaders&lt;/a&gt; and &lt;a href=&quot;https://en.wikipedia.org/wiki/Galaga&quot;&gt;Galaga&lt;/a&gt;. Because the game uses a node to
determine the starting position for the player, we can change that value directly.
In the node inspector, adjust the starting &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;y&lt;/code&gt; position for the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Main/StartPosition&lt;/code&gt;
node to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;660&lt;/code&gt;, as follows:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/stc-ss2.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;set the player y position&quot; /&gt;&lt;/p&gt;

&lt;p&gt;To restrict the player movement, we need to update the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Player.gd&lt;/code&gt; script file, mainly
by removing the options for up and down movement and also fixing up the resting
animation frame (we want the player to look upward).&lt;/p&gt;

&lt;div class=&quot;language-diff highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;gd&quot;&gt;--- a/Player.gd
&lt;/span&gt;&lt;span class=&quot;gi&quot;&gt;+++ b/Player.gd
&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;@@ -19,10 +19,6 @@&lt;/span&gt; func _process(delta):
				velocity.x += 1
		if Input.is_action_pressed(&quot;move_left&quot;):
				velocity.x -= 1
&lt;span class=&quot;gd&quot;&gt;-       if Input.is_action_pressed(&quot;move_down&quot;):
-               velocity.y += 1
-       if Input.is_action_pressed(&quot;move_up&quot;):
-               velocity.y -= 1
&lt;/span&gt;
		if velocity.length() &amp;gt; 0:
				velocity = velocity.normalized() * speed
&lt;span class=&quot;p&quot;&gt;@@ -38,10 +34,8 @@&lt;/span&gt; func _process(delta):
				$AnimatedSprite2D.flip_v = false
				# See the note below about boolean assignment.
				$AnimatedSprite2D.flip_h = velocity.x &amp;lt; 0
&lt;span class=&quot;gd&quot;&gt;-       elif velocity.y != 0:
&lt;/span&gt;&lt;span class=&quot;gi&quot;&gt;+       else:
&lt;/span&gt;				$AnimatedSprite2D.animation = &quot;up&quot;
&lt;span class=&quot;gd&quot;&gt;-               $AnimatedSprite2D.flip_v = velocity.y &amp;gt; 0
-
&lt;/span&gt;
 func _on_body_entered(body):
		hide() # Player disappears after being hit.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;h2 id=&quot;make-mobs-fly-downwards&quot;&gt;Make mobs fly downwards&lt;/h2&gt;

&lt;p&gt;Another big design choice I’ve made is to make the mobs only fly downwards. To
accomplish this, we want to restrict where they spawn to only originate from the
top border, and also make them only face downward with the same directional
velocity.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Side note: I think there might be a better way to do this than to use RigidBody2D;
the main reason I chose it is that the physics of the sprites can be affected
by other objects in the system. It seems like a good challenge for designing a
shooter would be to use non-physics based sprites.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Removing the extra spawn points is basically a reversal of the steps followed in
the &lt;a href=&quot;https://docs.godotengine.org/en/stable/getting_started/first_2d_game/05.the_main_game_scene.html#spawning-mobs&quot;&gt;Spawning mobs section of the original tutorial&lt;/a&gt;. By using the red
delete point tool on the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Main/MobPath&lt;/code&gt; node, we remove the last 3 points (top
left, bottom left, and bottom right). This restricts the spawn path to only
occur along the top border.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/stc-ss3.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;path tool&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Making the mobs face downwards and travel that direction is a short code change.
Whenever a new mob is spawned, we want to set the direction to a fixed point (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;PI / 2&lt;/code&gt;),
and then let the physics do the rest.&lt;/p&gt;

&lt;div class=&quot;language-diff highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;gd&quot;&gt;--- a/Main.gd
&lt;/span&gt;&lt;span class=&quot;gi&quot;&gt;+++ b/Main.gd
&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;@@ -31,14 +31,11 @@&lt;/span&gt; func _on_mob_timer_timeout():
		var mob_spawn_location = get_node(&quot;MobPath/MobSpawnLocation&quot;)
		mob_spawn_location.progress_ratio = randf()

-       # Set the mob&apos;s direction perpendicular to the path direction.
&lt;span class=&quot;gd&quot;&gt;-       var direction = mob_spawn_location.rotation + PI / 2
-
&lt;/span&gt;		# Set the mob&apos;s position to a random location.
		mob.position = mob_spawn_location.position

-       # Add some randomness to the direction
&lt;span class=&quot;gd&quot;&gt;-       direction += randf_range(-PI / 4, PI / 4)
&lt;/span&gt;&lt;span class=&quot;gi&quot;&gt;+       # Set the mob&apos;s rotation to face the bottom of the window
+       var direction = PI / 2
&lt;/span&gt;		mob.rotation = direction

		# Choose the velocity for the mob.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;create-a-bullet-scene&quot;&gt;Create a bullet scene&lt;/h2&gt;

&lt;p&gt;Following in the patterns from the tutorial, we will make a Bullet scene so that
we can efficiently spawn new bullets whenever the fire button is pressed. Bullet
behavior is relatively straightforward; they should appear at the same X coordinate
as the player, and then travel towards the top of the screen. They also need to
have a collision box for intersecting with mobs.&lt;/p&gt;

&lt;p&gt;Make a new scene similar to how the Player and Mob scenes were created.&lt;/p&gt;

&lt;h3 id=&quot;node-setup&quot;&gt;Node setup&lt;/h3&gt;

&lt;p&gt;Click Scene -&amp;gt; New Scene from the top menu of the IDE and add the following nodes:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://docs.godotengine.org/en/stable/classes/class_area2d.html&quot;&gt;Area2D&lt;/a&gt; (named &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Bullet&lt;/code&gt;)
    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;https://docs.godotengine.org/en/stable/classes/class_colorrect.html&quot;&gt;ColorRect&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;https://docs.godotengine.org/en/stable/classes/class_collisionshape2d.html&quot;&gt;CollisionShape2D&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;https://docs.godotengine.org/en/stable/classes/class_visibleonscreennotifier2d.html&quot;&gt;VisibleOnScreenNotifier2D&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Set the children so they can’t be selected, similar to the Player and Mob scenes.&lt;/p&gt;

&lt;p&gt;In the &lt;a href=&quot;https://docs.godotengine.org/en/stable/classes/class_area2d.html&quot;&gt;Area2D&lt;/a&gt; properties, under the &lt;a href=&quot;https://docs.godotengine.org/en/stable/classes/class_collisionobject2d.html#class-collisionobject2d&quot;&gt;CollisionObject2D&lt;/a&gt;
section, uncheck the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;1&lt;/code&gt; inside the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Layer&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Mask&lt;/code&gt; properties, and check the
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;2&lt;/code&gt; inside both. This will make it so that we can have bullets collide with mobs
but not with the player.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/stc-ss4.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;collisionobject2d properties&quot; /&gt;&lt;/p&gt;

&lt;p&gt;For the bullet shape, a simple square will suffice for now; a future task would be to
make a bullet sprite or animation instead. Set the &lt;a href=&quot;https://docs.godotengine.org/en/stable/classes/class_colorrect.html&quot;&gt;ColorRect&lt;/a&gt; &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Color&lt;/code&gt; property
to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;00ffff&lt;/code&gt; (or any other preferable color). Then set the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;x&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;y&lt;/code&gt; properties to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;20&lt;/code&gt; under
the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Size&lt;/code&gt; sub-section of the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Transform&lt;/code&gt; section. Lastly, set the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;x&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;y&lt;/code&gt; of the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Position&lt;/code&gt;
property to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;-10&lt;/code&gt;. This will make the bullet track from the middle of its area. The end
result should look like this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/stc-ss5.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;colorrect properties&quot; /&gt;&lt;/p&gt;

&lt;p&gt;To properly detect when bullets collide with mobs we need to set the
&lt;a href=&quot;https://docs.godotengine.org/en/stable/classes/class_collisionshape2d.html&quot;&gt;CollisionShape2D&lt;/a&gt; &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Shape&lt;/code&gt; property to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;RectangleShape2D&lt;/code&gt;,
and then expand it to the size of the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ColorRect&lt;/code&gt;. When everything is put
together it should look like this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/stc-ss6.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;bullet properties&quot; /&gt;&lt;/p&gt;

&lt;h3 id=&quot;bullet-script&quot;&gt;Bullet script&lt;/h3&gt;

&lt;p&gt;Add a script to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Bullet&lt;/code&gt; scene as follows.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;extends Area2D
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Whenever a bullet is on the screen we want it to travel towards the top, and register
a hit if it collides with a mob. To begin we want to make the bullet travel.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;const velocity = 200.0

func _process(delta):
	position.y -= velocity * delta
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;_process&lt;/code&gt; function (see the &lt;a href=&quot;https://docs.godotengine.org/en/stable/tutorials/scripting/idle_and_physics_processing.html#doc-idle-and-physics-processing&quot;&gt;Idle and Physics Processing docs&lt;/a&gt;) is called
with a frame rate dependent frequency. The code simply uses a constant velocity
multiplied by the time delta between invocations to move the bullet.&lt;/p&gt;

&lt;p&gt;To register when a bullet collides with a mob, we will have the bullet emit a signal, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;bullet_hit&lt;/code&gt;,
with an argument of the mob object. This will allow the receiver to update the score and remove mobs.
We also &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;hide()&lt;/code&gt; the bullet once it has collided with another entity, before finally emitting
the signal.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;signal bullet_hit(mob)

func _on_body_entered(body):
	hide()
	bullet_hit.emit(body)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;To make this work we must connect the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;body_entered(body: Node2D)&lt;/code&gt; signal from the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Area2D&lt;/code&gt; object of
the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Bullet&lt;/code&gt; scene to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;_on_body_entered&lt;/code&gt; function.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/stc-ss13.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;bullet signals&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Lastly, we want to ensure that any bullet that reaches the boundary of the screen is culled.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;func _on_visible_on_screen_notifier_2d_screen_exited():
	queue_free()
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This method needs to be connected to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;screen_exited&lt;/code&gt; signal of the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;VisibleOnScreenNotifier2D&lt;/code&gt;
node of the bullet, similar to what is described in the &lt;a href=&quot;https://docs.godotengine.org/en/stable/getting_started/first_2d_game/04.creating_the_enemy.html#enemy-script&quot;&gt;“Enemy script” section of the main tutorial&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;create-a-fire-bullet-action&quot;&gt;Create a fire bullet action&lt;/h2&gt;

&lt;p&gt;We want the player to trigger bullet firing when they press the space bar. To do this
we create an event in a similar manner as the &lt;a href=&quot;https://docs.godotengine.org/en/stable/getting_started/first_2d_game/03.coding_the_player.html&quot;&gt;“Coding the player” section from the main tutorial&lt;/a&gt;.
Click on Project -&amp;gt; Project Settings to open the project settings window and then
click on the Input Map tab at the top. Add a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fire_bullet&lt;/code&gt; action, like this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/stc-ss7.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;event settings&quot; /&gt;&lt;/p&gt;

&lt;h3 id=&quot;player-script&quot;&gt;Player script&lt;/h3&gt;

&lt;p&gt;Since the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Player&lt;/code&gt; object currently contains logic for handling pressed buttons, we will
add code to the GDScript file for the player to detect when the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fire_bullet&lt;/code&gt; action
is pressed.&lt;/p&gt;

&lt;p&gt;Update the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Player.gd&lt;/code&gt; file to look like this:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;signal fire

func _process(delta):
	var velocity = Vector2.ZERO # The player&apos;s movement vector.
	if Input.is_action_pressed(&quot;move_right&quot;):
		velocity.x += 1
	if Input.is_action_pressed(&quot;move_left&quot;):
		velocity.x -= 1

	if Input.is_action_pressed(&quot;fire_bullet&quot;):
		fire.emit()
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This will emit the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fire&lt;/code&gt; signal from the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Player&lt;/code&gt; whenever the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fire_bullet&lt;/code&gt; action is
pressed.&lt;/p&gt;

&lt;h3 id=&quot;connect-the-fire-signal-to-the-main-logic&quot;&gt;Connect the fire signal to the main logic&lt;/h3&gt;

&lt;p&gt;In the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Main&lt;/code&gt; scene we will now connect the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fire&lt;/code&gt; signal from the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Player&lt;/code&gt; to a
function in the GDScript for the main game logic. Update the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Player&lt;/code&gt; child node of the
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Main&lt;/code&gt; scene to connect the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fire()&lt;/code&gt; signal to a function in the main GDScript named
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;_on_player_fire()&lt;/code&gt;, like this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/stc-ss8.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;player signal&quot; /&gt;&lt;/p&gt;
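&lt;p&gt;As an aside, if you prefer wiring signals in code rather than through the editor, the same connection could be made from the main script’s &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;_ready()&lt;/code&gt; function; here is a sketch (the editor approach shown above is what this tutorial uses):&lt;/p&gt;

```gdscript
func _ready():
	# Connect the Player's fire signal to our handler in code,
	# equivalent to making the connection in the editor's Node dock.
	$Player.fire.connect(_on_player_fire)
```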

&lt;p&gt;Create the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;_on_player_fire()&lt;/code&gt; function with an empty body for now.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;func _on_player_fire():
    pass
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;create-bullets-and-collisions&quot;&gt;Create bullets and collisions&lt;/h2&gt;

&lt;p&gt;With the bullet scene and logic for firing in place, we will now add the last pieces
to create bullets on the screen and detect when they hit mobs.&lt;/p&gt;

&lt;h3 id=&quot;main-script&quot;&gt;Main script&lt;/h3&gt;

&lt;p&gt;There are several changes which must be made to the main GDScript file. We will go
through them in small pieces.&lt;/p&gt;

&lt;p&gt;We will need the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Bullet&lt;/code&gt; scene imported so that we can spawn new bullets when the
player presses the fire action.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;@export var bullet_scene: PackedScene
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Another piece we will want before creating bullets is a way to add a cooldown period
to the fire action so that the player does not create a continuous stream of bullets (although you
could change this if you want a stream of bullets!). We need a variable to gate when
we are on cooldown; this variable will be used by our fire function and by a timer
to be added later.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;var bullet_cooldown = false
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Next we add the fire function. It contains several commands and will also rely on
the creation of a timer and a follow-up function. Update the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;_on_player_fire()&lt;/code&gt;
function as follows:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;func _on_player_fire():
	if bullet_cooldown:
		return
	bullet_cooldown = true
	$BulletTimer.start()
	var bullet = bullet_scene.instantiate()
	bullet.position = $Player.position
	bullet.position.y -= 20
	bullet.bullet_hit.connect(_on_bullet_hit)
	add_child(bullet)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;If the player is on a bullet cooldown then this function will return immediately.
If not on cooldown, it will set the cooldown to true, start the bullet cooldown
timer, instantiate a new &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Bullet&lt;/code&gt; scene, set its position to the same as the player &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;x&lt;/code&gt;
with a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;y&lt;/code&gt; value slightly above the player, connect the new object’s
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;bullet_hit&lt;/code&gt; signal (which we added previously in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Bullet&lt;/code&gt; scene) to a function
named &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;_on_bullet_hit&lt;/code&gt; (which will be added next), and lastly we add the new object
as a child node to the main scene.&lt;/p&gt;

&lt;p&gt;When we created the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Bullet&lt;/code&gt; scene we described a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;bullet_hit&lt;/code&gt; signal to emit when
the bullet collides with another object. In the previous step we created a connection
between the newly created bullet and a function named &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;_on_bullet_hit&lt;/code&gt;. We now add
that function with a single argument of the object that is hit.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;func _on_bullet_hit(mob):
	mob.hide()
	score += 1
	$HUD.update_score(score)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;When this function receives the signal, it first hides the mob from view, then adds
one to the score, and lastly updates the score display on the HUD.&lt;/p&gt;

&lt;p&gt;The last thing we need to do is create a handler for the bullet cooldown timer that
we will add next. It simply resets the cooldown variable, as follows:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;func _on_bullet_timer_timeout():
	bullet_cooldown = false
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;bullet-cooldown-timer&quot;&gt;Bullet cooldown timer&lt;/h3&gt;

&lt;p&gt;Because the player object examines the fire event input at the same frequency as the
framerate, we need to restrict the number of bullets that are created. One way to do
that is to create a cooldown gate in the main fire logic, which we did in the previous
step. The final step to make that logic work is to add a &lt;a href=&quot;https://docs.godotengine.org/en/stable/classes/class_timer.html&quot;&gt;Timer&lt;/a&gt; that will gate
bullet creation.&lt;/p&gt;
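&lt;p&gt;If you would rather set the timer up in code instead of the editor, an equivalent sketch might look like this (the editor steps below are what this tutorial assumes):&lt;/p&gt;

```gdscript
func _ready():
	# In-code equivalent of adding and configuring BulletTimer
	# in the editor: a quarter-second, one-shot cooldown timer.
	var timer = Timer.new()
	timer.name = "BulletTimer"
	timer.wait_time = 0.25
	timer.one_shot = true
	timer.timeout.connect(_on_bullet_timer_timeout)
	add_child(timer)
```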

&lt;p&gt;Add a &lt;a href=&quot;https://docs.godotengine.org/en/stable/classes/class_timer.html&quot;&gt;Timer&lt;/a&gt; child node named &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;BulletTimer&lt;/code&gt; to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Main&lt;/code&gt; node. Set its &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Wait Time&lt;/code&gt;
property to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;0.25&lt;/code&gt;, and enable the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;One Shot&lt;/code&gt; checkbox.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/stc-ss9.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;bullettimer properties&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Now connect the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;timeout()&lt;/code&gt; signal from the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;BulletTimer&lt;/code&gt; to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;_on_bullet_timer_timeout()&lt;/code&gt;
function in the main GDScript.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/stc-ss10.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;bullettimer signals&quot; /&gt;&lt;/p&gt;

&lt;h3 id=&quot;update-mob-collision-mask&quot;&gt;Update Mob collision mask&lt;/h3&gt;

&lt;p&gt;To finalize the collision mechanics between the bullet scene and the mob scene, we want
to put the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Mob&lt;/code&gt; scene into the layer &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;2&lt;/code&gt; collision mask.&lt;/p&gt;

&lt;p&gt;In the &lt;a href=&quot;https://docs.godotengine.org/en/stable/classes/class_rigidbody2d.html&quot;&gt;RigidBody2D&lt;/a&gt; properties, under the &lt;a href=&quot;https://docs.godotengine.org/en/stable/classes/class_collisionobject2d.html#class-collisionobject2d&quot;&gt;CollisionObject2D&lt;/a&gt;
section, check the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;2&lt;/code&gt; box inside the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Layer&lt;/code&gt; property. This puts mobs in the same layer
that bullets mask when processing collisions.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/stc-ss12.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;mob collision&quot; /&gt;&lt;/p&gt;

&lt;p&gt;We keep the mob in layer &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;1&lt;/code&gt; as well so that collision with the player will cause the
game to end.&lt;/p&gt;
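&lt;p&gt;Collision layers and masks are bit flags under the hood, so checking boxes &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;1&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;2&lt;/code&gt; corresponds to setting the low two bits. If it helps to see it in code, an equivalent sketch for the mob (an illustration only, not a step in this tutorial):&lt;/p&gt;

```gdscript
func _ready():
	# The mob occupies layers 1 and 2: layer 1 so it can collide
	# with the player, layer 2 so bullets (which mask layer 2)
	# can detect it.
	collision_layer = 0b11
```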

&lt;h2 id=&quot;final-clean-up&quot;&gt;Final clean up&lt;/h2&gt;

&lt;p&gt;Let’s go back and clean up a few details. To start we can remove the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ScoreTimer&lt;/code&gt; node as
it will no longer be needed.&lt;/p&gt;

&lt;div class=&quot;language-diff highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;gd&quot;&gt;--- a/Main.gd
&lt;/span&gt;&lt;span class=&quot;gi&quot;&gt;+++ b/Main.gd
&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;@@ -9,7 +9,6 @@&lt;/span&gt; var bullet_cooldown = false
 func game_over():
 	$Music.stop()
 	$DeathSound.play()
&lt;span class=&quot;gd&quot;&gt;-	$ScoreTimer.stop()
&lt;/span&gt; 	$MobTimer.stop()
 	$HUD.show_game_over()
 	
&lt;span class=&quot;p&quot;&gt;@@ -47,13 +46,8 @@&lt;/span&gt; func _on_mob_timer_timeout():
 	# Spawn the mob by adding it to the Main scene.
 	add_child(mob)
 
&lt;span class=&quot;gd&quot;&gt;-func _on_score_timer_timeout():
-	score += 1
-	$HUD.update_score(score)
-
&lt;/span&gt; func _on_start_timer_timeout():
 	$MobTimer.start()
&lt;span class=&quot;gd&quot;&gt;-	$ScoreTimer.start()
&lt;/span&gt; 
 func _on_player_fire():
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;As part of the final polish we will also change the heads up display message to read
“Shoot the Creeps” instead of “Dodge the Creeps”. In the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;HUD&lt;/code&gt; scene, change the
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Text&lt;/code&gt; property of the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Label&lt;/code&gt; node to read “Shoot the Creeps!”.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/stc-ss11.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;hud label&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Then change the HUD GDScript to update the message properly.&lt;/p&gt;

&lt;div class=&quot;language-diff highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;gd&quot;&gt;--- a/HUD.gd
&lt;/span&gt;&lt;span class=&quot;gi&quot;&gt;+++ b/HUD.gd
&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;@@ -12,7 +12,7 @@&lt;/span&gt; func show_game_over():
 	# Wait until the MessageTimer has counted down.
 	await $MessageTimer.timeout
 	
&lt;span class=&quot;gd&quot;&gt;-	$Message.text = &quot;Dodge the\nCreeps!&quot;
&lt;/span&gt;&lt;span class=&quot;gi&quot;&gt;+	$Message.text = &quot;Shoot the\nCreeps!&quot;
&lt;/span&gt; 	$Message.show()
 	# Make a one-shot timer and wait for it to finish.
 	await get_tree().create_timer(1.0).timeout
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;trying-it-out&quot;&gt;Trying it out!&lt;/h2&gt;

&lt;p&gt;Hopefully, if you’ve made it this far, things are still working for you. If you are
having trouble getting your version to run properly, you can find a reference
that was the inspiration for this tutorial at &lt;a href=&quot;https://gitlab.com/elmiko/shoot-the-creeps&quot;&gt;gitlab.com/elmiko/shoot-the-creeps&lt;/a&gt;.
If you find errors in this tutorial or the associated source code, please do not
hesitate to &lt;a href=&quot;https://gitlab.com/elmiko/shoot-the-creeps/-/issues&quot;&gt;open an issue in the repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The game is fairly simple with the defaults that I chose; you can have some fun by changing
the bullet timer, velocity, and sizes of the mobs, player, and bullets. I wrote this
modification of the main tutorial to learn more about how to build a shooter style game. I
may not have chosen the most idiomatic methods for this implementation but I learned a lot
and have been inspired to try out my next project.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://godotengine.org&quot;&gt;Godot&lt;/a&gt; really is a tremendous platform for experimenting and developing games. I’ve
found it to inspire and empower my own personal hobby pursuit of making computer games.
I hope this tutorial has helped on your journey to building your passion games. Stay safe
out there, and as always, happy hacking!&lt;/p&gt;

</description>
				<pubDate>Sat, 26 Aug 2023 00:00:00 +0000</pubDate>
				<link>https://notes.elmiko.dev/2023/08/26/from-dodging-to-shooting.html</link>
				<guid isPermaLink="true">https://notes.elmiko.dev/2023/08/26/from-dodging-to-shooting.html</guid>
			</item>
		
			<item>
				<title>Exploring OpenShift Must Gather Data</title>
<description>&lt;p&gt;One of the aspects of working for Red Hat, and on the OpenShift product, that I
get tremendous joy from is the focus we place on empowering
associates to spend time contributing to open source projects and our communities.
Many companies have these types of agreements with their employees, Wikipedia
refers to it as &lt;a href=&quot;https://en.wikipedia.org/wiki/Side_project_time&quot;&gt;Side project time&lt;/a&gt;, most famously popularized by Google’s
&lt;a href=&quot;https://builtin.com/software-engineering-perspectives/20-percent-time&quot;&gt;“20 percent time”&lt;/a&gt;. At Red Hat we call these times “Hack n’ Hustle” or
“Shift week” depending on when they occur, but the primary goal is for us to
have time to work on passion projects, contribute to upstreams, or even
spend time in our physical communities giving back to those around
us.&lt;/p&gt;

&lt;p&gt;For a while now, since at least April 2021 by GitHub’s record, I have been working
on a project that I call &lt;a href=&quot;https://github.com/elmiko/camgi.rs&quot;&gt;“camgi.rs”&lt;/a&gt; (originally &lt;a href=&quot;https://github.com/elmiko/okd-camgi&quot;&gt;“okd-camgi”&lt;/a&gt;), which
stood for &lt;strong&gt;C&lt;/strong&gt;luster &lt;strong&gt;A&lt;/strong&gt;utoscaler &lt;strong&gt;M&lt;/strong&gt;ust &lt;strong&gt;G&lt;/strong&gt;ather &lt;strong&gt;I&lt;/strong&gt;nvestigator
(I guess it still does stand for that, we just don’t talk about it that way anymore). It is a
tool that I started developing to help with the arduous process of understanding why the
&lt;a href=&quot;https://github.com/kubernetes/autoscaler&quot;&gt;cluster autoscaler&lt;/a&gt; had failed in a given scenario on OpenShift.&lt;/p&gt;

&lt;p&gt;Now, to set the stage a little more, there is a debugging tool that we use heavily
in OpenShift to help diagnose failures. That tool is called &lt;a href=&quot;https://github.com/openshift/must-gather&quot;&gt;“must-gather”&lt;/a&gt;, and
it produces a tarball full of all sorts of Kubernetes goodness; including log files,
manifests, and even audit logs from a subset of the cluster in question. Must gather
is very flexible and can be extended in many ways to add all sorts of custom
information, but I will save that for another post. The main point here is that
I wanted a visual way to quickly diagnose what was happening without having to open
a dozen YAML and log files. So, camgi was born.&lt;/p&gt;

&lt;h2 id=&quot;looking-at-a-must-gather-archive&quot;&gt;Looking at a must gather archive&lt;/h2&gt;

&lt;p&gt;At a very high level, the must gather archive contains a bunch of directories that
all have various bits of information about the cluster and it looks something like this:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;$ ls -l
total 6676
drwxr-xr-x. 1 mike mike     874 May 30 16:30 cluster-scoped-resources
drwxr-xr-x. 1 mike mike     142 May 30 16:30 etcd_info
-rw-r--r--. 1 mike mike 6824435 May 30 16:30 event-filter.html
drwxr-xr-x. 1 mike mike      14 May 30 16:30 host_service_logs
drwxr-xr-x. 1 mike mike      14 May 30 16:30 ingress_controllers
drwxr-xr-x. 1 mike mike      34 May 30 16:30 insights-data
drwxr-xr-x. 1 mike mike      44 May 30 16:30 monitoring
drwxr-xr-x. 1 mike mike    3332 May 30 16:30 namespaces
drwxr-xr-x. 1 mike mike     188 May 30 16:30 network_logs
drwxr-xr-x. 1 mike mike      66 May 30 16:30 pod_network_connectivity_check
drwxr-xr-x. 1 mike mike      28 May 30 16:30 static-pods
-rw-r--r--. 1 mike mike     550 May 30 16:30 timestamp
-rw-r--r--. 1 mike mike      78 May 30 16:28 version
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;That’s just the top directory. I ran a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;tree&lt;/code&gt; command on the archive and it says:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;2745 directories, 4381 files
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Whoa! That is way too much to reproduce in this post, but I guarantee it’s got a
ton of good stuff in there.&lt;/p&gt;
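&lt;p&gt;If you are curious how a summary like the one &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;tree&lt;/code&gt; prints could be computed, here is a small Python sketch (a hypothetical helper for exploring an archive, not part of camgi or must-gather):&lt;/p&gt;

```python
import os

def count_entries(root):
    """Count directories and files under root, like tree's summary line."""
    dirs = files = 0
    for _, dirnames, filenames in os.walk(root):
        dirs += len(dirnames)
        files += len(filenames)
    return dirs, files

dirs, files = count_entries(".")
print(f"{dirs} directories, {files} files")
```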

&lt;p&gt;To make things much simpler, I wanted a web page that I could use to browse around and
get a “bird’s eye view” of what is happening. So, that’s what I did. Today the camgi
output looks like this on the summary page:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/camgi-1.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;summary page&quot; /&gt;&lt;/p&gt;

&lt;p&gt;You can immediately see some information about the cluster state when the must gather
was created. In this case (from a CI run) we can see that 1 Machine is not in a running
state, and 2 ClusterOperators are having issues (one is not upgradeable, and another is
degraded or not available). This tool started to change the way I could debug things and
made it much quicker to find problems. It was also starting to have an effect on my
colleagues as they started to ask for more features and custom resources to be added.&lt;/p&gt;

&lt;p&gt;Diving a little deeper, we can see how this tool can be used to explore resource and log
data.&lt;/p&gt;

&lt;div style=&quot;padding:56.34% 0 0 0;position:relative;&quot;&gt;&lt;iframe src=&quot;https://player.vimeo.com/video/836441744?badge=0&amp;amp;autopause=0&amp;amp;player_id=0&amp;amp;app_id=58479&quot; frameborder=&quot;0&quot; allow=&quot;autoplay; fullscreen; picture-in-picture&quot; allowfullscreen=&quot;&quot; style=&quot;position:absolute;top:0;left:0;width:100%;height:100%;&quot; title=&quot;camgi 0.9.0 demo&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;
&lt;script src=&quot;https://player.vimeo.com/api/player.js&quot;&gt;&lt;/script&gt;

&lt;h2 id=&quot;project-history-and-design-goals&quot;&gt;Project history and design goals&lt;/h2&gt;

&lt;p&gt;When I first started planning this project I was inspired by the work of a colleague
who had started an HTML-based display (&lt;em&gt;shoutout to Mike G!&lt;/em&gt;), and also by the inclusion of
a different HTML file in the must gather output. You might have noticed in the directory listing
above a file named &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;event-filter.html&lt;/code&gt;. This file is a static HTML file with all the data included
within the page. You can use it to search and filter the events which were emitted from the cluster
during the capture period. I thought this was really cool, although I also acknowledge it’s
not the most frugal way to create an HTML page (more on this later).&lt;/p&gt;

&lt;p&gt;So I went to my &lt;a href=&quot;https://notes.elmiko.dev/2022/12/18/why-i-keep-python-in-the-tool-box.html&quot;&gt;old favorite tool Python&lt;/a&gt; to begin hacking up a static page to contain
all the data I wanted to highlight.  This allowed me to rapidly prototype as I was able to use
modules like Jinja, PyYaml, and the standard library to quickly manipulate the text data. But as
I got requests to include the output into our build systems for continuous integration it
became apparent that including all the necessary Python modules was going to be very
difficult. It was at this point that I decided to re-write the project in Rust so that it could
be built as a binary for distribution. I chose Rust because I wanted to learn more about the language
and this seemed like a perfect opportunity.&lt;/p&gt;

&lt;p&gt;After several months of development, I was able to release the new tool and get it included in our
CI infrastructure as I could now have it downloaded during runtime. This process was difficult
as I had to ensure that my builds would be usable within the containers that our
CI uses to generate must gather artifacts. This was a trip down memory lane as I was fighting with
glibc incompatibilities that really brought me back to my early C days. But finally, it was done
and I was able to have it included in the output, which you can see today if you find the Prow output
from a CI run on our many repositories.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/camgi-2.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;camgi in prow&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Having a single output file from the tool makes it very simple to include the artifact in whatever
format we choose. Although it would be more efficient to have some sort of HTTP server hosting the
files from the must gather, this adds a lot of overhead for how it is used and constrains the ways
it can be included in other places. It does produce quite large files sometimes, especially when
investigating clusters with many nodes that have been active for a long time. But usually the files are only
being generated locally in those cases, so we aren’t passing around 500 MB HTML files, &lt;em&gt;usually&lt;/em&gt;… XD&lt;/p&gt;

&lt;h2 id=&quot;operating-camgi&quot;&gt;Operating camgi&lt;/h2&gt;

&lt;p&gt;Camgi itself is quite easy to install and operate. You can either get a binary release for Linux x86_64
targets from the &lt;a href=&quot;https://github.com/elmiko/camgi.rs/releases&quot;&gt;releases page on GitHub&lt;/a&gt;, install it directly from &lt;a href=&quot;https://github.com/elmiko/camgi.rs&quot;&gt;source&lt;/a&gt; by cloning
the repo and running &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cargo build&lt;/code&gt;, or by installing from &lt;a href=&quot;https://crates.io/crates/camgi&quot;&gt;crates.io&lt;/a&gt; by running &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cargo install camgi&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Once installed simply run the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;camgi&lt;/code&gt; command with your must gather archive as a target, such as:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;[mike@ultra] ~/Downloads/my-must-gather
$ camgi must-gather.local/ &amp;gt; camgi.html
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Then open the resulting file in the browser of your choice.&lt;/p&gt;

&lt;h2 id=&quot;release-090-and-the-future&quot;&gt;Release 0.9.0 and the future…&lt;/h2&gt;

&lt;p&gt;We are currently winding down the latest Shift week at Red Hat, and as part of my activities I
have added some new features and created the &lt;a href=&quot;https://github.com/elmiko/camgi.rs/releases/tag/v0.9.0&quot;&gt;0.9.0 release of camgi&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As part of my development process I have been &lt;a href=&quot;https://github.com/elmiko/camgi.rs/issues&quot;&gt;opening issues&lt;/a&gt; and &lt;a href=&quot;https://github.com/elmiko/camgi.rs/graphs/contributors&quot;&gt;inviting my colleagues&lt;/a&gt;
to help in the construction of camgi. Even though I don’t spend every week working on camgi, creating
issues and reaching out to my peers for advice and guidance has been a tremendous help. When people
ask for new features, find bugs, or identify areas of improvement I quickly open an issue to remember
what has been asked. In this manner I help myself out for the future and maintain a nice queue of things
to hack on; it’s been a tremendous experience for me.&lt;/p&gt;

&lt;p&gt;I mentioned earlier that creating a giant static HTML file is not the most frugal way to handle this
activity. One thing that I would really like to solve for the future is reducing the size of the log
files that are included as I notice that sometimes the browser really has to crunch to make things work.
This is one of my top goals for the future, but I still have some learning to do so that I can achieve
it in a way that is convenient for people to access the full log files. We’ll see how it goes.&lt;/p&gt;

&lt;p&gt;If you’ve made it this far, I hope this tale has at least been interesting and perhaps even inspired you
to build your own projects or get involved with other collaborators. For me, I will be coding away on the
&lt;em&gt;way too many&lt;/em&gt; side projects I have and looking for ways to contribute back and become more involved with
the open source community at large. And so, as always, stay safe out there and happy hacking =)&lt;/p&gt;

</description>
				<pubDate>Thu, 15 Jun 2023 00:00:00 +0000</pubDate>
				<link>https://notes.elmiko.dev/2023/06/15/exploring-openshift-must-gather-data.html</link>
				<guid isPermaLink="true">https://notes.elmiko.dev/2023/06/15/exploring-openshift-must-gather-data.html</guid>
			</item>
		
			<item>
				<title>Diving Deeper into Cluster API Testing</title>
				<description>&lt;p&gt;Recently I had the opportunity to spend some time reviewing and deep diving
into the &lt;a href=&quot;https://cluster-api.sigs.k8s.io&quot;&gt;Cluster API&lt;/a&gt; end-to-end test suite with the guidance of
&lt;a href=&quot;https://github.com/fabriziopandini&quot;&gt;Fabrizio Pandini&lt;/a&gt;. He has been crafting a change to the
&lt;a href=&quot;https://github.com/kubernetes-sigs/cluster-api-provider-kubemark&quot;&gt;Kubemark provider&lt;/a&gt; that will integrate the
&lt;a href=&quot;https://cluster-api.sigs.k8s.io/developer/e2e.html&quot;&gt;Cluster API E2E test framework&lt;/a&gt; so that we can more easily
develop tests that utilize the Kind + Cluster API + Kubemark configurations
that I have &lt;a href=&quot;https://notes.elmiko.dev/2021/10/11/setup-dev-capi-kubemark.html&quot;&gt;mentioned&lt;/a&gt; a &lt;a href=&quot;https://notes.elmiko.dev/2023/01/21/automating-my-hollow-kubernetes-test-rig.html&quot;&gt;few times&lt;/a&gt; in the past. We paired
up so that I could better understand the test framework and to talk about
debugging the pull request.&lt;/p&gt;

&lt;p&gt;The integration patch can be seen here: &lt;a href=&quot;https://github.com/kubernetes-sigs/cluster-api-provider-kubemark/pull/69&quot;&gt;kubernetes-sigs/cluster-api-provider-kubemark#69&lt;/a&gt;,
and Fabrizio was also kind enough to let me record our deep dive so that we could share it
with the wider community:&lt;/p&gt;

&lt;div style=&quot;margin: 1em;&quot;&gt;
&lt;iframe class=&quot;center-block&quot; width=&quot;640&quot; height=&quot;480&quot; src=&quot;https://www.youtube-nocookie.com/embed/KU7i4TfD1tg&quot; title=&quot;YouTube video player&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share&quot; allowfullscreen=&quot;&quot;&gt;&lt;/iframe&gt;
&lt;/div&gt;

&lt;p&gt;All the code that was shown in the video is available in the pull request linked above,
the &lt;a href=&quot;https://github.com/kubernetes-sigs/cluster-api-provider-kubemark&quot;&gt;cluster-api-provider-kubemark&lt;/a&gt; repository, and the &lt;a href=&quot;https://github.com/kubernetes-sigs/cluster-api&quot;&gt;cluster-api&lt;/a&gt;
repository.&lt;/p&gt;

&lt;p&gt;If you are curious about the Tilt configuration we talk about, please see the
&lt;a href=&quot;https://cluster-api.sigs.k8s.io/developer/tilt.html&quot;&gt;Developing Cluster API with Tilt&lt;/a&gt; page of the documentation. And if you have
been following my &lt;a href=&quot;https://github.com/elmiko/cluster-api-kubemark-ansible&quot;&gt;Cluster API Kubemark Ansible&lt;/a&gt; efforts that I mentioned in the
&lt;a href=&quot;https://notes.elmiko.dev/2023/01/21/automating-my-hollow-kubernetes-test-rig.html&quot;&gt;previous post&lt;/a&gt;, I have also added a new playbook for installing the Tilt server
as well.&lt;/p&gt;

&lt;p&gt;One of the things I love about open source software and the culture that has evolved with it
are the people and the communities behind the monitors. I want to give my special thanks and
gratitude to Fabrizio for being a great mentor and collaborator, and to the rest of the Cluster
API community for being awesome in general and for creating a warm and welcoming place to
share a passion for technology.&lt;/p&gt;

&lt;p&gt;as always, happy hacking =)&lt;/p&gt;

</description>
				<pubDate>Tue, 28 Feb 2023 00:00:00 +0000</pubDate>
				<link>https://notes.elmiko.dev/2023/02/28/diving-deeper-into-cluster-api-testing.html</link>
				<guid isPermaLink="true">https://notes.elmiko.dev/2023/02/28/diving-deeper-into-cluster-api-testing.html</guid>
			</item>
		
			<item>
				<title>Automating My Hollow Kubernetes Test Rig</title>
				<description>&lt;p&gt;&lt;em&gt;special thanks to José Castillo Lema for helping me to improve these scripts&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Way back in October of 2021, I wrote a post about
&lt;a href=&quot;https://notes.elmiko.dev/2021/10/11/setup-dev-capi-kubemark.html&quot;&gt;Setting Up a Development Environment for the Cluster API Kubemark Provider&lt;/a&gt;.
In that piece I explained how I’m configuring &lt;a href=&quot;https://kind.sigs.k8s.io&quot;&gt;Kind&lt;/a&gt; with &lt;a href=&quot;https://cluster-api.sigs.k8s.io&quot;&gt;Cluster API&lt;/a&gt; and
the &lt;a href=&quot;https://github.com/kubernetes-sigs/cluster-api-provider-kubemark&quot;&gt;Kubemark provider&lt;/a&gt; to create “hollow” &lt;a href=&quot;https://kubernetes.io&quot;&gt;Kubernetes&lt;/a&gt; clusters. In the time
since then, I’ve converted those instructions into a set of &lt;a href=&quot;https://www.ansible.com/&quot;&gt;Ansible&lt;/a&gt;
playbooks and helper scripts which make the automation of this process very easy.
So, without further ado, let’s look at how to deploy a virtual server for running
hollow Kubernetes scale tests.&lt;/p&gt;

&lt;h2 id=&quot;versions&quot;&gt;Versions&lt;/h2&gt;

&lt;p&gt;Before we get rolling though, these are the versions I am using at the time of
writing. I cannot guarantee that these will work in the future, but for as long as I
continue to maintain these repositories they should be updated over time. Reader be aware.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Kubernetes 1.25.3&lt;/li&gt;
  &lt;li&gt;Cluster API 1.3.1&lt;/li&gt;
  &lt;li&gt;Cluster API Kubemark Provider 0.5.0&lt;/li&gt;
  &lt;li&gt;Ubuntu 22.04 Server&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;process&quot;&gt;Process&lt;/h2&gt;

&lt;p&gt;I’m going to walk through this from the ground up. I will start by creating a
virtual machine, then use Ansible to update it and build the Kubernetes bits, and
finally show how to use my helper scripts to create clusters.&lt;/p&gt;

&lt;h3 id=&quot;creating-a-virtual-machine&quot;&gt;Creating a Virtual Machine&lt;/h3&gt;

&lt;p&gt;I’m using &lt;a href=&quot;https://fedoraproject.org&quot;&gt;Fedora&lt;/a&gt; as my host operating system with the default Gnome
desktop environment installed. Gnome comes with &lt;a href=&quot;https://help.gnome.org/users/gnome-boxes/stable/&quot;&gt;Boxes&lt;/a&gt; as the main
graphical application for managing virtual machines. Although my host is Fedora,
I like to use Ubuntu for the virtual machine because the Docker integration is
a little easier for me. Kind does work with Podman but I have had
issues getting the Cluster API Docker provider to work with it. An improvement I would like
to make to this process is to automate the virtual machine creation process by using a
script that talks directly to the host hypervisor, or perhaps using a cloud image.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/boxes-1.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;creating a vm with boxes&quot; /&gt;&lt;/p&gt;

&lt;p&gt;I usually create a virtual machine with 16 GB of RAM and 64 GB of hard drive space;
this is enough for me to test small clusters with up to a few dozen nodes.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/boxes-2.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;selecting vm size&quot; /&gt;&lt;/p&gt;

&lt;p&gt;One thing I find really convenient about the Ubuntu installer is the ability
to pull my SSH keys from GitHub.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/ubuntu-install-1.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;ubuntu ssh key install&quot; /&gt;&lt;/p&gt;

&lt;p&gt;At this point I like to stop the virtual machine and make a snapshot. This allows
me to quickly reset the instance back to a semi-pristine state if I feel like
installing different versions of the tooling, or just want a &lt;em&gt;blank slate&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/boxes-3.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;making a snapshot&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;installing-the-tooling&quot;&gt;Installing the Tooling&lt;/h2&gt;

&lt;p&gt;Once the virtual machine is created and rebooted, and I have confirmed that I can
log in with SSH, I clone my &lt;a href=&quot;https://github.com/elmiko/cluster-api-kubemark-ansible&quot;&gt;Cluster API Kubemark Ansible&lt;/a&gt; repository
to my Fedora host. This repository contains a couple of playbooks: one for installing
the toolchain and another for building the Cluster API binaries. The first thing
I do is copy the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;inventory&lt;/code&gt; directory to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;inventory.local&lt;/code&gt; and then edit the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;hosts&lt;/code&gt;
file to look like this:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;all:
  hosts:
    192.168.122.165
  vars:
    devel_user: mike
    cluster_api_repo: https://github.com/kubernetes-sigs/cluster-api.git
    cluster_api_version: v1.3.1
    provider_kubemark_repo: https://github.com/kubernetes-sigs/cluster-api-provider-kubemark.git
    provider_kubemark_version: v0.5.0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;hosts&lt;/code&gt; file shows that my virtual machine is at &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;192.168.122.165&lt;/code&gt;, I will login
as &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;mike&lt;/code&gt;, and the playbooks will install Cluster API version &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;v1.3.1&lt;/code&gt; and Kubemark
provider version &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;v0.5.0&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;After updating the inventory, I run the command to execute the setup playbook. Keep
in mind that the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;-K&lt;/code&gt; command line flag will ask for a password to become root. The
command looks like this:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;$ ansible-playbook -K -i inventory.local setup_devel_environment.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This will run for 10-15 minutes depending on connection speed and local resources,
but when it finishes it should look like this (yes, I need to investigate that
deprecation warning):&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/ansible-1.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;ansible setup results&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Something to note about this step of the process is that a container will be started
on the virtual machine to host a Docker registry. This container is used by Kind
so that local images can be quickly pushed into the running Kubernetes clusters.
To access it you need to tag container images as belonging to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;localhost:5000/&lt;/code&gt;.&lt;/p&gt;
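
&lt;p&gt;As a minimal sketch of what that tagging convention looks like (the image name
here is hypothetical; the registry address assumes the default &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;localhost:5000&lt;/code&gt; mentioned above):&lt;/p&gt;

```shell
# Derive the registry-qualified tag that the Kind-connected registry expects.
# "my-image:dev" is a hypothetical local image name.
IMAGE=my-image:dev
REGISTRY=localhost:5000
TAGGED=${REGISTRY}/${IMAGE}
echo "${TAGGED}"
# With a real image built locally, it would then be pushed with:
#   docker tag "${IMAGE}" "${TAGGED}"
#   docker push "${TAGGED}"
```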

&lt;p&gt;&lt;img src=&quot;/img/docker-1.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;docker container running&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;building-cluster-api&quot;&gt;Building Cluster API&lt;/h2&gt;

&lt;p&gt;Now that the toolchain is set up to build Go code and container images, I want to
install the Cluster API project and the Cluster API Kubemark provider, and then build
everything. To start the process I use this Ansible command; note that it does
not need root access:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;$ ansible-playbook -i inventory.local build_clusterctl_and_images.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Like the previous playbook, this could also take 10-15 minutes depending on resources.
When finished it should look something like this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/ansible-2.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;ansible build results&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;preparing-for-launch&quot;&gt;Preparing for Launch&lt;/h2&gt;

&lt;p&gt;The last step in my process is to install my &lt;a href=&quot;https://github.com/elmiko/capi-hacks&quot;&gt;CAPI Hacks&lt;/a&gt; repository on the virtual
machine. It contains a set of convenience scripts and Kubernetes manifests that I
use regularly to make the process of starting new clusters easier. Let’s look at
the files I use most frequently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;01-kind-mgmt-config.yaml&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the configuration file for the Kind management cluster. It sets up a couple
of things, including the Kubernetes version, the local Docker socket location, and a
patch for the local registry. Usually the only reason to change this file is when updating
the Kubernetes version.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Cluster&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;kind.x-k8s.io/v1alpha4&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;mgmt&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;networking&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;apiServerAddress&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;127.0.0.1&quot;&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;nodes&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
&lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;role&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;control-plane&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;docker.io/kindest/node:v1.25.3&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;extraMounts&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;hostPath&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/var/run/docker.sock&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;containerPath&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/var/run/docker.sock&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;containerdConfigPatches&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
&lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;|-&lt;/span&gt;
  &lt;span class=&quot;s&quot;&gt;[plugins.&quot;io.containerd.grpc.v1.cri&quot;.registry.mirrors.&quot;localhost:5000&quot;]&lt;/span&gt;
    &lt;span class=&quot;s&quot;&gt;endpoint = [&quot;http://kind-registry:5000&quot;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;01-start-mgmt-cluster.sh&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A wrapper to start the Kind cluster used by Cluster API as the management cluster.
It will be named &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;mgmt&lt;/code&gt; in Kind. Running this command should look like:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/capi-hacks-1.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;start the mgmt cluster&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;02-apply-localregistryhosting-configmap.sh&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Add the local registry to the management cluster. This could probably be rolled
into the previous script, but it is kept separate in case you don’t want the local
registry. Running this command is relatively uninteresting, but it should look
like this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/capi-hacks-2.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;setup the registry&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;03-clusterctl-init.sh&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Install Cluster API into the management cluster using the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;clusterctl&lt;/code&gt; command line
tool. This file also contains the version information for the local Cluster API and
Kubemark provider information. If it runs successfully it should look similar to
this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/capi-hacks-3.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;installing capi&quot; /&gt;&lt;/p&gt;

&lt;p&gt;I usually confirm that things are working by checking all the pods on the management
cluster.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/capi-hacks-4.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;checking pods&quot; /&gt;&lt;/p&gt;

&lt;p&gt;If things don’t go well, you might see an error like this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/capi-hacks-5.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;failed capi install&quot; /&gt;&lt;/p&gt;

&lt;p&gt;This error shows us that there is a mismatch between the expected and found
versions of Cluster API. In cases like this, either the version should be changed
in the script or the configuration in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;$HOME/.cluster-api&lt;/code&gt; directory should
be checked.&lt;/p&gt;

&lt;h2 id=&quot;launching-kubemark-clusters&quot;&gt;Launching Kubemark Clusters&lt;/h2&gt;

&lt;p&gt;Finally, the moment has arrived. We are ready to start deploying clusters. The
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubemark&lt;/code&gt; directory of the &lt;a href=&quot;https://github.com/elmiko/capi-hacks&quot;&gt;CAPI Hacks&lt;/a&gt; contains some pre-formatted
manifests for deploying clusters. I start by creating the objects in the
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubemark-workload-control-plane.yaml&lt;/code&gt; manifest file; this will create a new cluster
with a single Docker Machine to host the control plane. I am using a Docker Machine
here so that the control plane pods will actually run. After running
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl create -f kubemark-workload-control-plane.yaml&lt;/code&gt;, I watch the Machine objects
until I see the control plane is &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Running&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/capi-hacks-6.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;watching control plane machines&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Next I apply a Container Network Interface (CNI) provider to the new workload cluster
to ensure that the nodes of the cluster can become fully ready. I use the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;deploy-cni.sh&lt;/code&gt;
script to add Calico as the CNI provider (there is also a script to deploy OVN Kubernetes).
I also use the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;get-kubeconfig.sh&lt;/code&gt; script to make managing the kubeconfig files a little
easier. When successful it looks something like this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/capi-hacks-7.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;deploying CNI&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Lastly, I create the workload cluster compute nodes by running &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl create -f kubemark-workload-md0.yaml&lt;/code&gt;.
This manifest contains the Cluster API objects for the MachineDeployment and related
infrastructure resources to add Kubemark Machines to our workload cluster.
Kubemark is so fast to load that within 5-10 seconds I have all the new machines
in a running state:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/capi-hacks-8.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;examine kubemark&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Inspecting the pods on the workload cluster, you might note that the Calico containers
assigned to Kubemark nodes are stuck at the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Init:0/3&lt;/code&gt; status. I’m not quite sure
why this happens, but I suspect it is an artifact of Kubemark. I’d like to investigate
further, but for the time being it does not seem to cause problems with testing.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/capi-hacks-9.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;examine pods&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;to-the-moon&quot;&gt;To the Moon!&lt;/h2&gt;

&lt;p&gt;The workflow is now nearly completely automated, or at least reduced to a much simpler
series of commands. I have also found some allies along the way as people have shared
suggestions, bug fixes, and improvements with me through my
&lt;a href=&quot;https://github.com/elmiko/cluster-api-kubemark-ansible&quot;&gt;Cluster API Kubemark Ansible playbook&lt;/a&gt;
and &lt;a href=&quot;https://github.com/elmiko/capi-hacks&quot;&gt;CAPI Hacks&lt;/a&gt; repositories.&lt;/p&gt;

&lt;p&gt;There are more scripts and helpers inside &lt;a href=&quot;https://github.com/elmiko/capi-hacks&quot;&gt;CAPI Hacks&lt;/a&gt;; after setting up
clusters I tend to use the Cluster Autoscaler scripts to test the scaling mechanisms
of that code. I am also learning about others using a similar workflow to test the
inner workings of Cluster API.&lt;/p&gt;

&lt;p&gt;If you’ve made it this far, thank you. I hope you’ve learned a little more about how
to set up virtualized testing environments for Kubernetes, and maybe even tried it out
for yourself. If you have ideas or suggestions, or just want to chat about how to
get Cluster API and Kubemark working better, open an issue on one of those repositories
or come find me on the &lt;a href=&quot;https://kubernetes.slack.com&quot;&gt;Kubernetes Slack&lt;/a&gt; instance as &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;@elmiko&lt;/code&gt;, and until next time happy hacking =)&lt;/p&gt;

</description>
				<pubDate>Sat, 21 Jan 2023 00:00:00 +0000</pubDate>
				<link>https://notes.elmiko.dev/2023/01/21/automating-my-hollow-kubernetes-test-rig.html</link>
				<guid isPermaLink="true">https://notes.elmiko.dev/2023/01/21/automating-my-hollow-kubernetes-test-rig.html</guid>
			</item>
		
			<item>
				<title>Why I Keep Python In the Tool Box</title>
				<description>&lt;p&gt;I started learning the &lt;a href=&quot;https://python.org&quot;&gt;Python language&lt;/a&gt; back in the late
2000s while I was working at a company writing global positioning software for
in-car navigation. It was a fun job and a great team to work with, and I had
rejoined them after a year hiatus to lead an effort of converting the software
stack from Windows CE to Linux. We were also building a new hardware platform,
but I was always on the software side and left the physical bits to my capable
colleagues. I was able to use the language to great effect when creating a
D-Bus interface library that our application developers could use to communicate
between host apps, and again when creating a cross-compiling harness to build
entire ARM root filesystems (this was in the days when Yocto and Buildroot were
young).&lt;/p&gt;

&lt;p&gt;Over the years my love of Python continued and deepened as I learned to use it
for writing all sorts of applications, eventually working on the OpenStack project
(which was pretty much all server-side Python). I’d venture that, next
to nearly two decades of C experience, Python is the second-best language
I know. Which brings me to today’s example.&lt;/p&gt;

&lt;h2 id=&quot;making-a-molehill-out-of-a-mountain&quot;&gt;Making a molehill out of a mountain&lt;/h2&gt;

&lt;p&gt;&lt;img src=&quot;/img/molehill.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;molehill&quot; /&gt;
&lt;span class=&quot;pull-right&quot;&gt;&lt;em&gt;&lt;a href=&quot;https://www.google.com/profiles/dieder.plu&quot;&gt;Dieder Plu&lt;/a&gt;, &lt;a href=&quot;https://creativecommons.org/licenses/by-sa/3.0/deed.en&quot;&gt;CC-BY-SA 3.0&lt;/a&gt;&lt;/em&gt;&lt;/span&gt;&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;One of the areas of Kubernetes that the team I’m on at Red Hat maintains for
OpenShift is the &lt;a href=&quot;https://kubernetes.io/docs/concepts/architecture/cloud-controller/&quot;&gt;Cloud Controller Managers&lt;/a&gt;.
These are a set of Kubernetes controllers that run in-cluster to help make
integrating with the underlying infrastructure smoother. As you might imagine
each one of these controllers is written specifically for a single infrastructure
provider. In the past this code had all been integrated into the main Kubernetes
code repository, but as maintaining these bits in a common place does not scale
well with the addition of ever more providers, there has been an effort to
&lt;a href=&quot;https://github.com/kubernetes/enhancements/tree/master/keps/sig-cloud-provider/2395-removing-in-tree-cloud-providers&quot;&gt;remove them from the main source repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As part of my work at Red Hat, and with the Kubernetes community, I have been
investigating ways that we can grow the testing coverage for these new external
cloud controller managers. One of the things I would like to do, if possible, is
create a way for each provider to write their own interface implementation which
would allow utilizing a central set of tests for all providers, current and future.
To that end I have been browsing the upstream end-to-end tests which
&lt;a href=&quot;https://github.com/kubernetes/kubernetes/tree/master/test/e2e&quot;&gt;exist in the Kubernetes repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;One of the core pieces of functionality for cloud controller managers is watching
&lt;a href=&quot;https://kubernetes.io/docs/concepts/services-networking/service/&quot;&gt;Kubernetes Services&lt;/a&gt;
and ensuring that they are backed by a load balancer (where applicable). These
tests are scattered throughout the Kubernetes end-to-end tests and I wanted to
find a convenient way to locate them all. A quick suggestion from &lt;a href=&quot;https://github.com/andrewsykim&quot;&gt;Andrew&lt;/a&gt;,
one of the SIG Cloud Provider chairs, was to use the &lt;a href=&quot;https://github.com/onsi/ginkgo&quot;&gt;Ginkgo&lt;/a&gt;
binary tool with a regular expression to find the tests. This turned out to be a
great suggestion, because with a quick command line I was able to parse all the
descriptions into a formatted JSON file. The command looked like this:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;ginkgo &lt;span class=&quot;nt&quot;&gt;-r&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--dry-run&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-v&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--focus&lt;/span&gt; .&lt;span class=&quot;k&quot;&gt;*&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;sS]ervice.&lt;span class=&quot;k&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--json-report&lt;/span&gt; ./service-tests.json &lt;span class=&quot;nt&quot;&gt;--keep-going&lt;/span&gt; kubernetes/test/e2e/...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I ran this command from the parent directory of the Kubernetes repository on
my local host. It does a “dry run” of all the tests, recursing
through directories, focusing only on tests with the regular expression
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.*[sS]ervice.*&lt;/code&gt; in their hierarchy text (the stuff in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;It(&quot;does stuff&quot;)&lt;/code&gt; clauses and whatnot),
and then writes the output to a file named &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;service-tests.json&lt;/code&gt;. All while
continuing past any failures.&lt;/p&gt;
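
&lt;p&gt;To make the focus expression concrete, here is a small Python sketch that applies
the same pattern to a few test description strings (the descriptions are made up for
illustration, not taken from the Kubernetes repository):&lt;/p&gt;

```python
import re

# The same focus pattern passed to ginkgo above.
focus = re.compile(r".*[sS]ervice.*")

# Hypothetical test description strings.
descriptions = [
    "should be able to create a LoadBalancer Service",
    "Services should serve endpoints from pods",
    "should scale a replica set",
]

matched = [d for d in descriptions if focus.search(d)]
print(matched)  # the first two descriptions match, the third does not
```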

&lt;p&gt;After running this command, I end up with a huge JSON file:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;-rw-r--r--. 1 mike mike 9.1M Dec  8 14:57 service-tests.json
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;and looking inside it doesn’t get much better:&lt;/p&gt;
&lt;div class=&quot;language-json highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;ContainerHierarchyTexts&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;kc&quot;&gt;null&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;ContainerHierarchyLocations&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;kc&quot;&gt;null&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;ContainerHierarchyLabels&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;kc&quot;&gt;null&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;LeafNodeType&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;SynchronizedBeforeSuite&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;LeafNodeLocation&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;FileName&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;/home/mike/dev/kubernetes/test/e2e/e2e.go&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;LineNumber&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;77&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;},&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;LeafNodeLabels&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;kc&quot;&gt;null&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;LeafNodeText&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;State&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;passed&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;StartTime&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;0001-01-01T00:00:00Z&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;EndTime&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;0001-01-01T00:00:00Z&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;RunTime&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;ParallelProcess&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;NumAttempts&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;MaxFlakeAttempts&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;MaxMustPassRepeatedly&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;It’s some 278,000-odd lines of those entries. This is gonna take a while…&lt;/p&gt;

&lt;h2 id=&quot;the-serpent-lurking-in-the-jungle&quot;&gt;The serpent lurking in the jungle&lt;/h2&gt;

&lt;p&gt;As I was staring at these entries, starting to get a little cross-eyed, I wondered
if I might use a script or something to pull all the files and line numbers out,
maybe associated with their titles. Just something to pare down the raw data
in the file. Then inspiration struck: I could write a small Python application
which could create an HTML page with links to all the test files. I could then
use my browser to at least parse things in a more convenient manner.&lt;/p&gt;

&lt;p&gt;The architecture I was imagining looked something like this:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;   tests.json
       |                       +-----------------------------+
       v                   +-&amp;gt; | http://localhost/index.html |
  +--------------------+   |   +-----------------------------+
  | Python http.server | --+
  +--------------------+   |   +--------------------------------------+
       ^                   +-&amp;gt; | kubernetes.git/test/e2e/framework.go |
       |                       +--------------------------------------+
  kubernetes.git/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;where the Python HTTP server is running from my local Kubernetes directory,
taking a Ginkgo output JSON file as input, and serving up an index page and
source files. I knew Python had all the necessary building blocks in the standard
library: JSON processors, HTTP servers, and plenty of string formatting options.&lt;/p&gt;
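
&lt;p&gt;The JSON-processing half of that idea can be sketched in a few lines. This is not
the actual biloba.py code, just a simplified illustration using the field names from
the report entry shown earlier, fed with a made-up sample entry:&lt;/p&gt;

```python
import json

# Pull the test text, file name, and line number out of Ginkgo report
# entries. The real report is much larger; the structure here is simplified.
def extract_locations(entries):
    results = []
    for entry in entries:
        loc = entry.get("LeafNodeLocation", {})
        text = entry.get("LeafNodeText", "")
        if loc.get("FileName"):
            results.append((text, loc["FileName"], loc["LineNumber"]))
    return results

# A made-up entry in the shape of the report fields shown above.
sample = json.loads("""
[{"LeafNodeText": "should serve endpoints",
  "LeafNodeLocation": {"FileName": "test/e2e/network/service.go",
                       "LineNumber": 42}}]
""")
print(extract_locations(sample))
```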

&lt;p&gt;I didn’t want to get too complicated, as I realized two things: I didn’t want to
spend more than a couple of hours putting it together, and I didn’t want any
dependencies beyond the Python standard library. My reasoning for the first was
that any extra time spent hacking on this tool added to the total time for the
investigation and I was very sensitive about not getting lost in a tool sharpening
exercise. The second reason was that I didn’t want to contend with any sort of
virtual environments or other packaging tools. I knew that all the building
blocks I needed were in the standard library, if I was shrewd I could do this
without installing extra helpers (no matter how nice they are!).&lt;/p&gt;

&lt;p&gt;What I ended up with is something I call &lt;a href=&quot;https://gitlab.com/elmiko/biloba.py&quot;&gt;biloba.py&lt;/a&gt;,
named after the &lt;a href=&quot;https://en.wikipedia.org/wiki/Ginkgo_biloba&quot;&gt;humble tree&lt;/a&gt;. At 170-ish lines
of Python it is one of the more compact but useful applications I’ve written, and it met all my goals:
it uses only the standard library and serves a web page built from the entries
in the source JSON. The main index has links that open in separate tabs
and take you directly to the source for each test. It’s fairly minimal, but it allowed
me to take that list of tests and look through all of them within the span of
about a week. The output looks like this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/biloba-py-index.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;index page from biloba.py&quot; /&gt;&lt;/p&gt;

&lt;p&gt;and the links to the code open in new tabs that are fairly plain:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/img/biloba-py-code.png&quot; class=&quot;img-responsive center-block&quot; alt=&quot;code page from biloba.py&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Simple, but effective.&lt;/p&gt;

&lt;h2 id=&quot;what-does-it-do&quot;&gt;What does it do?&lt;/h2&gt;

&lt;p&gt;Since it’s so small, let’s take a look at some of the choices I made, and perhaps
I can give some of my reasoning. Before we get started, though, I’d like to
acknowledge that Python is a dynamically typed language (although it does
have options for static typing), and as such I tend to use it as a way to &lt;em&gt;sketch out&lt;/em&gt;
applications quickly. I like its pseudo-code style, and the dynamic typing allows
me to run quickly with scissors; this might not be to every person’s liking, and I
acknowledge that bias at the outset.&lt;/p&gt;

&lt;h3 id=&quot;html&quot;&gt;HTML&lt;/h3&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;n&quot;&gt;html_template&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&apos;&apos;&apos;
&amp;lt;html&amp;gt;
&amp;lt;head&amp;gt;
&amp;lt;title&amp;gt;Biloba&amp;lt;/title&amp;gt;
&amp;lt;style&amp;gt;
span.highlight {{
    background: #bababa;
    display: block;
}}
&amp;lt;/style&amp;gt;
&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
{body}
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&apos;&apos;&apos;&lt;/span&gt;

&lt;span class=&quot;n&quot;&gt;index&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;html_template&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;format&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;body&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;Not generated&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The first part here establishes a template that I use to create the wrapper
page that holds all the other pages; I can reuse it for the index and for the
code pages. It is also marked up for Python’s
&lt;a href=&quot;https://docs.python.org/3/library/string.html#formatstrings&quot;&gt;format string syntax&lt;/a&gt;,
which makes that reuse convenient. I also declare a global variable for
the index page so that I have a known value if things do not load properly,
and so that the handler and main functions can share it.&lt;/p&gt;
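&lt;p&gt;One wrinkle worth calling out: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;str.format&lt;/code&gt; treats every brace as the start of a replacement field, so literal braces in the template text, such as CSS rules, have to be written doubled. A quick sketch of the rule:&lt;/p&gt;

```python
# str.format interprets '{' and '}' as replacement-field delimiters, so
# literal braces in the template must be written as '{{' and '}}'.
template = '''
span.highlight {{
    background: #bababa;
}}
{body}
'''

page = template.format(body='Not generated')
print(page)
```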

&lt;h3 id=&quot;data-helpers&quot;&gt;Data Helpers&lt;/h3&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;Suite&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;k&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;__init__&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;suite&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;):&lt;/span&gt;
        &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;reports&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[]&lt;/span&gt;
        &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;description&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;suite&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;get&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;SuiteDescription&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;description&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;is&lt;/span&gt; &lt;span class=&quot;bp&quot;&gt;None&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;or&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;len&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;description&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;description&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&apos;No suite description set&apos;&lt;/span&gt;

        &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;path&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;suite&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;get&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;SuitePath&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;path&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;is&lt;/span&gt; &lt;span class=&quot;bp&quot;&gt;None&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;or&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;len&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;path&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;path&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&apos;No suite path set&apos;&lt;/span&gt;

        &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;i&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;report&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;enumerate&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;suite&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;get&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;SpecReports&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[])):&lt;/span&gt;
            &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;report&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;get&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;State&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&apos;passed&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
                &lt;span class=&quot;k&quot;&gt;continue&lt;/span&gt;

            &lt;span class=&quot;k&quot;&gt;try&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
                &lt;span class=&quot;n&quot;&gt;newreport&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Report&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;report&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
                &lt;span class=&quot;n&quot;&gt;logging&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;info&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;sa&quot;&gt;f&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;processed report for &lt;/span&gt;&lt;span class=&quot;si&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;newreport&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;filename&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;@&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;newreport&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;linenumber&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
                &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;append_report&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;newreport&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
            &lt;span class=&quot;k&quot;&gt;except&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;Exception&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;as&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ex&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
                &lt;span class=&quot;n&quot;&gt;nl&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;report&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;get&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;LeafNodeLocation&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{})&lt;/span&gt;
                &lt;span class=&quot;n&quot;&gt;fn&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;nl&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;get&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;FileName&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
                &lt;span class=&quot;n&quot;&gt;ln&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;nl&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;get&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;LineNumber&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
                &lt;span class=&quot;n&quot;&gt;logging&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;error&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;sa&quot;&gt;f&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;error processing report for &lt;/span&gt;&lt;span class=&quot;si&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;fn&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;@&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;ln&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;

    &lt;span class=&quot;k&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;append_report&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;report&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;):&lt;/span&gt;
        &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;reports&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;append&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;report&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
        &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;reports&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;sorted&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;reports&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;key&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;lambda&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;r&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;r&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;hierarchy&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;


&lt;span class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;Report&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;k&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;__init__&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;report&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;):&lt;/span&gt;
        &lt;span class=&quot;n&quot;&gt;hierarchy&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;report&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;get&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;ContainerHierarchyTexts&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;hierarchy&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;is&lt;/span&gt; &lt;span class=&quot;bp&quot;&gt;None&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;or&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;len&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;hierarchy&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;n&quot;&gt;hierarchy&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;No hierarchy defined&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;
        &lt;span class=&quot;n&quot;&gt;hierarchy&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&apos; / &apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;join&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;hierarchy&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;len&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;hierarchy&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;n&quot;&gt;hierarchy&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&apos;No hierarchy text set&apos;&lt;/span&gt;
        &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;hierarchy&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;hierarchy&lt;/span&gt;
        &lt;span class=&quot;n&quot;&gt;logging&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;debug&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;hierarchy&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;

        &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;text&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;report&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;get&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;LeafNodeText&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;text&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;is&lt;/span&gt; &lt;span class=&quot;bp&quot;&gt;None&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;text&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&apos;No leaf node text set&apos;&lt;/span&gt;

        &lt;span class=&quot;n&quot;&gt;leafnodeloc&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;report&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;get&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;LeafNodeLocation&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{})&lt;/span&gt;
        &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;filename&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;leafnodeloc&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;get&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;FileName&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
        &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;linenumber&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;leafnodeloc&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;get&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;LineNumber&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;

        &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;nodetype&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;report&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;get&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;LeafNodeType&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&apos;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;These next two classes, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Suite&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Report&lt;/code&gt;, are convenience wrappers that
transform the JSON data into an API I can use when
generating HTML pages. Where possible they fail gracefully, with
default messages that are easy to spot in the generated HTML content. I also
combine the hierarchy text into a more readable format and save the paths to
the individual source files along with their line numbers.&lt;/p&gt;
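&lt;p&gt;The fail-gracefully pattern that repeats through both classes can be distilled into a small helper; the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;field&lt;/code&gt; function here is only an illustration, not part of biloba.py:&lt;/p&gt;

```python
# Illustrative helper, not part of biloba.py: dict.get never raises on a
# missing key, and an empty or missing value is swapped for a fallback
# message that is easy to spot in the rendered HTML.
def field(data, key, fallback):
    value = data.get(key)
    if value is None or len(value) == 0:
        return fallback
    return value

suite = {'SuiteDescription': '', 'SuitePath': 'test/e2e'}
print(field(suite, 'SuiteDescription', 'No suite description set'))
print(field(suite, 'SuitePath', 'No suite path set'))
```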

&lt;h3 id=&quot;http&quot;&gt;HTTP&lt;/h3&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;BilobaHttpRequestHandler&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;http&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;server&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;SimpleHTTPRequestHandler&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;):&lt;/span&gt;
    &lt;span class=&quot;k&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;do_GET&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;):&lt;/span&gt;
        &lt;span class=&quot;n&quot;&gt;logging&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;info&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;path&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;path&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&apos;/&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;n&quot;&gt;content&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;index&lt;/span&gt;
            &lt;span class=&quot;n&quot;&gt;body&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;content&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;encode&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;UTF-8&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&apos;replace&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
            &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;send_response&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;HTTPStatus&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;OK&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
            &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;send_header&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;Content-Type&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&apos;text/html&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
            &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;send_header&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;Content-Length&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;str&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;len&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;body&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)))&lt;/span&gt;
            &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;end_headers&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()&lt;/span&gt;
            &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;wfile&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;write&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;body&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;elif&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&apos;favicon&apos;&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;path&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;nb&quot;&gt;super&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;().&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;do_GET&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;else&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;c1&quot;&gt;# if not the index, then try to load the file and inject in an html wrapper
&lt;/span&gt;            &lt;span class=&quot;k&quot;&gt;try&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
                &lt;span class=&quot;n&quot;&gt;path&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;param&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;path&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;split&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;?&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;maxsplit&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
                &lt;span class=&quot;n&quot;&gt;linenumber&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;int&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;param&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;split&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;=&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;maxsplit&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)[&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;])&lt;/span&gt;
                &lt;span class=&quot;n&quot;&gt;logging&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;debug&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;sa&quot;&gt;f&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;attempting to load &lt;/span&gt;&lt;span class=&quot;si&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;path&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;, highlighting linenumber &lt;/span&gt;&lt;span class=&quot;si&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;linenumber&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
                &lt;span class=&quot;n&quot;&gt;content&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&apos;&amp;lt;pre&amp;gt;&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\n&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;&lt;/span&gt;
                &lt;span class=&quot;k&quot;&gt;with&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;open&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;path&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;as&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;fp&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
                    &lt;span class=&quot;n&quot;&gt;lines&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;fp&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;read&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;().&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;splitlines&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()&lt;/span&gt;
                    &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;i&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;line&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;enumerate&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;lines&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;):&lt;/span&gt;
                        &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;i&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;+&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;linenumber&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
                            &lt;span class=&quot;n&quot;&gt;content&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+=&lt;/span&gt; &lt;span class=&quot;sa&quot;&gt;f&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;&amp;lt;span id=&quot;highlighted-test&quot; class=&quot;highlight&quot;&amp;gt;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;line&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;lt;/span&amp;gt;&apos;&lt;/span&gt;
                        &lt;span class=&quot;k&quot;&gt;else&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
                            &lt;span class=&quot;n&quot;&gt;content&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;line&lt;/span&gt;
                        &lt;span class=&quot;n&quot;&gt;content&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+=&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\n&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;&lt;/span&gt;
                &lt;span class=&quot;n&quot;&gt;content&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+=&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&apos;&amp;lt;/pre&amp;gt;&apos;&lt;/span&gt;
                &lt;span class=&quot;n&quot;&gt;content&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;html_template&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;format&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;body&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;content&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
                &lt;span class=&quot;n&quot;&gt;body&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;content&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;encode&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;UTF-8&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&apos;replace&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
                &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;send_response&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;HTTPStatus&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;OK&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
                &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;send_header&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;Content-Type&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&apos;text/html&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
                &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;send_header&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;Content-Length&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;str&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;len&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;body&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)))&lt;/span&gt;
                &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;end_headers&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()&lt;/span&gt;
                &lt;span class=&quot;bp&quot;&gt;self&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;wfile&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;write&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;body&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
            &lt;span class=&quot;k&quot;&gt;except&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;Exception&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;as&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ex&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
                &lt;span class=&quot;n&quot;&gt;logging&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;debug&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;ex&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
                &lt;span class=&quot;nb&quot;&gt;super&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;().&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;do_GET&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This next part is where things get a little tricky. This class builds on Python’s
standard library &lt;a href=&quot;https://docs.python.org/3/library/http.server.html#http.server.SimpleHTTPRequestHandler&quot;&gt;http.server.SimpleHTTPRequestHandler&lt;/a&gt; to
create a richer interface. By default, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;SimpleHTTPRequestHandler&lt;/code&gt; creates
the directory browser view that is familiar to anyone who has tried running
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;python -m http.server&lt;/code&gt; in their terminal (go try it now if you haven’t XD). But in
biloba.py I’d like to override that behavior when I see a request for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/&lt;/code&gt;
or any other URL that looks like a directory, so I override the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;do_GET&lt;/code&gt; method
of the base class to inspect every HTTP GET request that is received.&lt;/p&gt;
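&lt;p&gt;To make the override pattern concrete, here is a minimal, self-contained sketch (not biloba.py’s actual handler, just the same technique): subclass &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;SimpleHTTPRequestHandler&lt;/code&gt;, intercept the root path, and defer everything else to the parent.&lt;/p&gt;

```python
import threading
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler

class IndexHandler(SimpleHTTPRequestHandler):
    """Serve a custom page for '/', fall back to default file serving otherwise."""

    def do_GET(self):
        if self.path == '/':
            # the custom branch: answer with our own body instead of a directory listing
            body = b'biloba index page'
            self.send_response(200)
            self.send_header('Content-Type', 'text/plain')
            self.send_header('Content-Length', str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            # anything else gets the stock behavior from the base class
            super().do_GET()

# bind to port 0 so the OS picks a free port for this quick demonstration
httpd = HTTPServer(('127.0.0.1', 0), IndexHandler)
threading.Thread(target=httpd.serve_forever, daemon=True).start()
page = urllib.request.urlopen(f'http://127.0.0.1:{httpd.server_address[1]}/').read()
httpd.shutdown()
```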

&lt;p&gt;The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;if / elif / else&lt;/code&gt; clause is where we choose what to do: send back the index page
if the request is for the root, ignore requests for a favicon, and lastly
try to open the URL path as a file. There is also some logic to pull out the line
number parameter, if it exists, and add the highlighted line to the rendered
code file template. If all else fails, or an exception is raised, this function
hands control over to the parent’s implementation because it has much better support
for errors and erroneous input.&lt;/p&gt;
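&lt;p&gt;The line number extraction relies on Python’s standard &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;urllib.parse&lt;/code&gt; module. As a rough sketch of just that step (the function name here is hypothetical; biloba.py does this inline in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;do_GET&lt;/code&gt;):&lt;/p&gt;

```python
from urllib.parse import parse_qs, urlparse

def extract_linenumber(path):
    # hypothetical helper: pull the 'linenumber' query parameter from a request path
    query = parse_qs(urlparse(path).query)
    values = query.get('linenumber')
    return int(values[0]) if values else None
```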

&lt;h3 id=&quot;main&quot;&gt;Main&lt;/h3&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;main&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;filename&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;):&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;fp&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;open&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;filename&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;report&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;json&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;load&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;fp&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;

    &lt;span class=&quot;n&quot;&gt;suites&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[]&lt;/span&gt;
    &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;i&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;suite&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;enumerate&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;report&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;):&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;try&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;n&quot;&gt;newsuite&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Suite&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;suite&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
            &lt;span class=&quot;n&quot;&gt;logging&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;info&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;sa&quot;&gt;f&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;created suite for &lt;/span&gt;&lt;span class=&quot;si&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;newsuite&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;path&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
            &lt;span class=&quot;n&quot;&gt;suites&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;append&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;newsuite&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;except&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;Exception&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;n&quot;&gt;logging&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;error&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;sa&quot;&gt;f&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;error processing suite at index &lt;/span&gt;&lt;span class=&quot;si&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;i&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
            &lt;span class=&quot;k&quot;&gt;continue&lt;/span&gt;

    &lt;span class=&quot;n&quot;&gt;body&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&apos;&apos;&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;suites&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;sorted&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;suites&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;key&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;lambda&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;s&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;s&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;description&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;suite&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;suites&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;len&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;suite&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;reports&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;k&quot;&gt;continue&lt;/span&gt;

        &lt;span class=&quot;n&quot;&gt;body&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+=&lt;/span&gt; &lt;span class=&quot;sa&quot;&gt;f&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;&amp;lt;h1&amp;gt;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;suite&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;description&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;lt;/h1&amp;gt;&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\n&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;&lt;/span&gt;

        &lt;span class=&quot;n&quot;&gt;body&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+=&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&apos;&amp;lt;ul&amp;gt;&apos;&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;report&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;suite&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;reports&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;n&quot;&gt;body&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+=&lt;/span&gt; &lt;span class=&quot;sa&quot;&gt;f&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;&amp;lt;li&amp;gt;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;report&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;hierarchy&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;lt;ul&amp;gt;&apos;&lt;/span&gt;
            &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;len&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;report&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;nodetype&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
                &lt;span class=&quot;n&quot;&gt;body&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+=&lt;/span&gt; &lt;span class=&quot;sa&quot;&gt;f&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;&amp;lt;li&amp;gt;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;report&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;nodetype&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s&quot;&gt; &lt;/span&gt;&lt;span class=&quot;si&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;report&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;text&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;lt;/li&amp;gt;&apos;&lt;/span&gt;
            &lt;span class=&quot;k&quot;&gt;else&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
                &lt;span class=&quot;n&quot;&gt;body&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+=&lt;/span&gt; &lt;span class=&quot;sa&quot;&gt;f&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;&amp;lt;li&amp;gt;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;report&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;text&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;lt;/li&amp;gt;&apos;&lt;/span&gt;
            &lt;span class=&quot;n&quot;&gt;body&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+=&lt;/span&gt; &lt;span class=&quot;sa&quot;&gt;f&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;&amp;lt;li&amp;gt;&amp;lt;a href=&quot;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;report&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;filename&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;?linenumber=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;report&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;linenumber&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;#highlighted-test&quot; target=&quot;_blank&quot;&amp;gt;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;report&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;filename&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;@&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;report&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;linenumber&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;lt;/a&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&amp;lt;/li&amp;gt;&apos;&lt;/span&gt;
        &lt;span class=&quot;n&quot;&gt;body&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+=&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&apos;&amp;lt;/ul&amp;gt;&apos;&lt;/span&gt;

    &lt;span class=&quot;k&quot;&gt;global&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;index&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;index&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;html_template&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;format&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;body&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;body&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;


    &lt;span class=&quot;n&quot;&gt;server_address&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;127.0.0.1&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;8080&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;httpd&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;http&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;server&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;HTTPServer&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;server_address&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;BilobaHttpRequestHandler&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;k&quot;&gt;try&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;print&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;serving at http://127.0.0.1:8080/&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
        &lt;span class=&quot;n&quot;&gt;httpd&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;serve_forever&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()&lt;/span&gt;
    &lt;span class=&quot;k&quot;&gt;except&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;KeyboardInterrupt&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;n&quot;&gt;logging&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;warning&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\n&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;Keyboard interrupt received, exiting...&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
        &lt;span class=&quot;n&quot;&gt;sys&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;exit&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I like to build my Python applications with a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;main&lt;/code&gt; function, mainly to help
me remember where the whole thing starts. In this case, main does a few
things. First, it loads the file name it is given as a JSON file and
creates Suite objects for the suites and reports that exist within it,
sorting them by description.&lt;/p&gt;
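&lt;p&gt;That parsing loop could be sketched roughly like this (a simplified stand-in using plain dictionaries instead of the Suite class): each entry is parsed inside a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;try&lt;/code&gt; so one malformed suite doesn’t stop the whole report from loading.&lt;/p&gt;

```python
import json
import logging

def load_suites(text):
    # simplified stand-in for main's parsing loop; the real code builds Suite objects
    suites = []
    for i, raw in enumerate(json.loads(text)):
        try:
            suites.append({'description': raw['description']})
        except Exception:
            # skip malformed entries rather than failing the whole report
            logging.error(f'error processing suite at index {i}')
    return sorted(suites, key=lambda s: s['description'])
```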

&lt;p&gt;Next, it creates the HTML for the index page, with its list of test names and
links to the individual code files.&lt;/p&gt;

&lt;p&gt;Lastly, it starts Python’s standard HTTP server using the custom handler class
as the default handler. This server is not meant for production use cases, but
running it locally in development is perfectly fine for my needs. I also
wrap this call with an exception handler to make the eventual “control-c” quit
a little tidier.&lt;/p&gt;

&lt;h3 id=&quot;boilerplate&quot;&gt;Boilerplate&lt;/h3&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;__name__&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&apos;__main__&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;parser&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;argparse&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;ArgumentParser&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;description&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&quot;Reformat info from a Ginkgo test report JSON file&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;parser&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;add_argument&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;filename&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;help&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;the json file to process&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;parser&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;add_argument&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;--debug&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;action&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;store_true&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;help&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&apos;turn on debug logging&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;args&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;parser&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;parse_args&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()&lt;/span&gt;
    &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;args&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;debug&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;n&quot;&gt;logging&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;basicConfig&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;level&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;logging&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;DEBUG&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;main&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;args&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;filename&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Finally comes the Python boilerplate we are all familiar with for starting
an application. I tend to put my argument parsing in this wrapper as well,
and then pass the arguments I need to the main function.&lt;/p&gt;

&lt;h2 id=&quot;whats-next-for-the-tiny-bilobapy&quot;&gt;What’s next for the tiny biloba.py&lt;/h2&gt;

&lt;p&gt;I fixed one minor thing along the way, which turned out to be a two-line change,
but otherwise I’m actively resisting putting more time in on it. I’m not sure
I will need it again, although if I do then I will probably add a little
styling to the code pages in the form of line numbers and maybe some background color.&lt;/p&gt;

&lt;p&gt;Another thought I had was to add the ability for biloba.py to run the ginkgo
command and harvest the output to a temporary file. I’m not quite sure if that
would be useful, but I think if I start to do more of these “grep” style runs
then I might add that.&lt;/p&gt;

&lt;p&gt;I was super stoked about building this little application; it turned what
looked like several mountains of work into something very manageable.
The power of modern tools like Python, and the many other languages I could have
used, has really amazed me over the years. I encourage everyone out there to grow
their tool box, whether with Python, another language, or an entirely different
piece of software altogether. If you &lt;strong&gt;are&lt;/strong&gt; looking for some place to learn Python, check out
&lt;a href=&quot;https://ocw.mit.edu/courses/6-0001-introduction-to-computer-science-and-programming-in-python-fall-2016/&quot;&gt;MIT’s Introduction to Computer Science and Programming in Python&lt;/a&gt;.
It’s a free course with videos and assignment material; it uses an older
version of Python, but all the core principles are still useful. I hope you get
out there and have some fun building out your tool box, and as always
happy hacking =)&lt;/p&gt;
</description>
				<pubDate>Sun, 18 Dec 2022 00:00:00 +0000</pubDate>
				<link>https://notes.elmiko.dev/2022/12/18/why-i-keep-python-in-the-tool-box.html</link>
				<guid isPermaLink="true">https://notes.elmiko.dev/2022/12/18/why-i-keep-python-in-the-tool-box.html</guid>
			</item>
		
	</channel>
</rss>
