
Fun with Containers

Containers – What are they?

First, let’s begin with what containers are. There are a number of mature container technologies in use today, but Docker has long been the leader. To many, the name “Docker” is synonymous with “container,” so their explanation is a good place to start: “A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.”

Even among IT professionals, there seems to be some confusion among people who haven’t actually used this technology. Most of us have heard of containers, and may even have them powering applications in the workplace, yet still don’t really understand them. People who have mostly wrapped their heads around what virtual machines are might still have trouble understanding container technology.

A Portable Application

Virtual machines run on virtualized hardware, which can cause a significant performance hit on RAM, CPU, disk, and other virtualized resources. An operating system also needs to be installed on each virtual machine, consuming a large amount of disk space and system resources. This adds up quickly, especially with a lot of virtual machines. Operating systems also need to boot up before they can run applications, which can introduce quite a bit of delay. Containers, simply put, run applications directly on the host’s operating system kernel, isolated from one another. They don’t need to boot, and can be spun up in seconds.

This may still be about as clear as mud for some, so I’ll break it down to a very basic level: a container is just a portable application. That might be an oversimplified explanation, but it is still pretty accurate in my opinion.
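If you have Docker installed, the startup-time difference is easy to see for yourself. This quick test (using the small public alpine image purely as an example) starts a fresh container, runs one command, and removes it, typically in a second or two:

```shell
# Start a throwaway container, run a single command, and clean it up.
# Requires a working Docker install; 'alpine' is just a tiny example image.
time docker run --rm alpine echo "hello from a container"
```

Compare that to booting a full virtual machine to run the same one-line command.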
A different use case for containers

Designing microservices that run on multiple clouds, at scale, orchestrated by something like Kubernetes is outside the scope of what I’d like to cover in a blog post, and there is already a lot of material out there on that topic. Instead, I’ll propose a more obscure use case: running legacy software.

It isn’t pretty, but the reality is that legacy systems are sometimes still a thing. Easy examples to pick on are legacy PBX (phone system) applications or building control applications. Those don’t normally generate revenue, but losing control of your phone system or HVAC system could definitely be an issue. A more extreme example would be revenue-generating, line-of-business applications that just have no suitable replacement. Maybe there is a suitable replacement, but the cost is prohibitive when your old system still works great. Perhaps licensing costs have changed, or buying large quantities of new IP phones to run on a modern PBX just isn’t in the budget. Some of these applications simply cannot run on anything newer than end-of-life Windows versions such as Windows 2003, 2000, or even NT (YIKES!!!). The software vendor that designed them might have gone out of business, and these systems never got around to being replaced because they just work and generate revenue.

WINE in a Linux container

Among many other problems, running end-of-life software on end-of-life operating systems is a HUGE security issue. It is difficult, if not impossible, to prevent an attack using known exploits that are simply unpatchable. What is a better way? You can run Windows applications using WINE (originally a backronym for “Wine Is Not an Emulator”) in a Linux container! This solves a lot of security issues related to the operating system. The “What happens if my 15-year-old server dies?” problem is also solved. If you really wanted to, you could even put that antiquated application in the cloud.
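To make this a little more concrete, here is a rough sketch of what packaging a legacy Windows application with WINE could look like. Everything here is illustrative, not taken from the post: the base image, the WINE package names (which vary by distribution and WINE version), and the application path `legacyapp.exe` are all assumptions.

```shell
# Illustrative only: image name, package names, and application
# paths are assumptions; exact WINE packages vary by distro.

# A minimal Dockerfile that runs a 32-bit Windows app under WINE
cat > Dockerfile <<'EOF'
FROM debian:bookworm-slim
RUN dpkg --add-architecture i386 && \
    apt-get update && \
    apt-get install -y --no-install-recommends wine wine32 && \
    rm -rf /var/lib/apt/lists/*
COPY legacyapp/ /opt/legacyapp/
CMD ["wine", "/opt/legacyapp/legacyapp.exe"]
EOF

# Build the image, then run it, sharing the host's X11 socket so the
# Windows GUI can draw on the Linux desktop
docker build -t legacy-app .
docker run --rm -e DISPLAY="$DISPLAY" \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    legacy-app

# Portability: save the image to a tarball, copy it to another Linux
# host (any distribution), and load it there
docker save legacy-app | gzip > legacy-app.tar.gz
# (after copying the tarball to the new host:)
gunzip -c legacy-app.tar.gz | docker load
```

In practice, GUI containers usually also need the host to permit X11 connections (for example via xhost or a per-user Xauthority file), and some applications need additional WINE configuration before they behave.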
Configuring WINE is also out of scope for what I’d like to cover in this post, but there is plenty of information out there. Rather than pick on out-of-business software vendors, I “containerized” a few freely available Windows applications that I have actually encountered in business use. For demo purposes, we can pretend that they are no longer supported and have no suitable replacement. These applications are:

PingInfoView (a ping monitoring tool)
notepad++ (a text editor used by some systems/network/software engineers)

To tie it all together visually, here is a screenshot demonstrating these running on my CentOS Linux laptop:

To demonstrate that these really are running as containers, take notice of the container IDs listed at the top of each application window (8b5f36fff598 and 5cd664be8a7b). These are listed in the output of “docker ps” and shown in the filtered output of the Linux process monitor “top.” These containers could easily be moved to another machine running a different distribution of Linux, or even onto a server at your favorite cloud host.

I hope you enjoyed this example of a fun, not-so-common container use case. Need help designing your infrastructure to power your applications? The friendly engineers at ProCern have the expertise to help. Contact us today!


The Hidden Costs of Virtualization: How to Optimize Performance Without Overspending

Virtualization promised to make IT simpler and more flexible—and in many ways, it has. But while it can streamline operations, it can also quietly drain your budget if you’re not watching closely. Most organizations don’t realize how many hidden costs are baked into their virtual environments until they’re deep in technical debt. Let’s take a closer look at where those costs come from—and how to course-correct.

The Hidden Costs of Virtualization

- Unmanaged storage growth: VMs generate logs, snapshots, backups, and disk images that are often stored indefinitely. Storage fills up with redundant or outdated data, requiring more physical drives or cloud storage capacity, which tends to go unnoticed until costs spike.
- Inefficient licensing: Many software licenses are based on the number of VMs or CPU cores. Spinning up unnecessary VMs or assigning more cores than needed can bump you into higher pricing tiers or require more licenses than necessary.
- Unoptimized networking configurations: Virtual switches, firewalls, and routing rules are easier to misconfigure in software-defined environments. Poorly configured virtual networks can cause inefficient traffic flow or duplicate services—leading to performance issues, troubleshooting costs, and wasted bandwidth.
- Resource-hungry AI workloads: AI models demand significant compute and memory—often running inside VMs or containers. If they aren’t isolated or right-sized, they use more resources than necessary, leaving less for other VMs. That can make it seem like you need more infrastructure than you actually do—driving up costs.
- Resource waste: Overprovisioning VMs with extra resources or letting them run idle after use results in unnecessary spending on compute, memory, and storage. These wasted resources also cause delays and inefficiencies that drive up operational costs.
- VM sprawl and lack of governance: It’s easy to create a new VM. Too easy, in fact. Without naming conventions, lifecycle policies, or usage tracking, environments can quickly become chaotic. That chaos creates management overhead, slows down troubleshooting, and makes it harder to pinpoint where waste is happening.
- Edge deployments without visibility: Virtualization has made it easier to deploy workloads at remote or edge locations—but managing them centrally isn’t always part of the setup. Without consistent policies and visibility across environments, you run the risk of redundant infrastructure, inconsistent licensing, and excess resource consumption.

How to optimize without overspending

Keeping virtualization costs in check doesn’t mean scaling back. It means managing smarter. Here are a few ways to do that:

- Audit regularly: Routinely check for idle or unnecessary virtual machines and shut them down to reclaim resources.
- Right-size your VMs: Match the size of your VMs to actual workloads to avoid over-provisioning.
- Automate where it counts: Use automation tools to manage VM lifecycles and optimize resource allocation.
- Simplify management: Streamline your virtualization management with unified tools that reduce complexity and save time.
- Consolidate workloads: Combine smaller workloads onto fewer VMs to maximize efficiency and reduce overhead.
- Monitor performance: Regularly assess VM performance metrics to ensure resources are allocated optimally and adjust as needed.

Smarter virtualization starts with the right tools

Virtualization can help your business run leaner and more efficiently—but only if you’ve got the right infrastructure and visibility to manage it. With Hewlett Packard Enterprise Virtualization Solutions, ProCern can help you streamline operations, optimize resources, and ensure your virtual environment is both cost-effective and high-performing.