A Docker sandbox gives you a safe, disposable environment to experiment, build, or let automated tools run without risking your real system. It’s becoming an essential part of modern development workflows, especially as coding agents and cloud‑based tooling evolve.
What a Docker sandbox actually is
A Docker sandbox is an isolated execution environment that behaves like a lightweight, temporary machine. It lets you run containers, install packages, modify configurations, and test ideas freely—while keeping your host system untouched. Modern implementations often use microVMs to provide stronger isolation than traditional containers, giving you the flexibility of a full system with the safety of a sealed box.
Key characteristics include:
Isolation — Your experiments can’t affect your host OS.
Disposability — You can reset or destroy the environment instantly.
Reproducibility — Every sandbox starts from a known, clean state.
Autonomy — Tools and agents can run unattended without permission prompts.
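Even without a dedicated sandbox product, you can approximate these properties with plain Docker. A minimal sketch (the image, resource limits, and flags are illustrative choices, not a required recipe):

```shell
# Disposable sandbox sketch: --rm destroys the container on exit,
# so every run starts from the image's known, clean state.
docker run --rm -it \
  --memory 2g --cpus 2 \   # cap resources for the experiment
  --tmpfs /tmp \           # scratch space that vanishes on exit
  --read-only \            # keep the root filesystem immutable
  ubuntu:24.04 bash
```

Anything you install or break inside that shell disappears the moment you exit, which is exactly the disposability property described above.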
Why Docker sandboxes matter now
The rise of coding agents and automated development tools has created new demands. These agents need to run commands, install dependencies, and even use Docker themselves. Traditional approaches—like OS‑level sandboxing or full virtual machines—either interrupt workflows or are too heavy. Docker sandboxes solve this by offering:
A real system for agents to work in
The ability to run Docker inside the sandbox
A consistent environment across platforms
Fast resets for iterative development
This makes them ideal for AI‑assisted coding, CI/CD experimentation, and secure testing.
Where you can use Docker sandboxes today
Several platforms now offer browser‑based or cloud‑hosted Docker sandboxes, making it easy to experiment without installing anything locally.
Docker Sandboxes (Docker Inc.) — Purpose‑built for coding agents, using microVM isolation.
CodeSandbox Docker environments — Interactive online playgrounds where you can fork, edit, and run Docker‑based projects directly in the browser.
LabEx Online Docker Playground — A full Docker terminal running on Ubuntu 22.04, ideal for learning and hands‑on practice, especially as Play with Docker winds down.
These platforms remove setup friction and let you focus on learning, testing, or building.
How developers typically use Docker sandboxes
A Docker sandbox fits naturally into several workflows:
Learning Docker — Practice commands, build images, and explore networking without installing anything.
Testing risky changes — Try new packages, configs, or scripts without fear of breaking your machine.
Running coding agents — Give AI tools a safe environment to operate autonomously.
Prototyping microservices — Spin up isolated services quickly and tear them down just as fast.
Teaching and workshops — Provide a consistent environment for all participants.
A non‑obvious advantage
Docker sandboxes aren’t just about safety—they’re about speed of iteration. Because they reset instantly and start from a known state, they eliminate the “works on my machine” problem and make experimentation frictionless. This is especially powerful when combined with automated tools or when onboarding new team members.
Closing thought
Docker sandboxes are becoming a foundational tool for modern development—combining safety, speed, and autonomy in a way that traditional containers or VMs alone can’t match. They’re especially valuable if you’re experimenting with AI‑driven coding tools or want a clean, reproducible environment for testing.
The Rise of Free Hardened Docker Images: A New Security Baseline for Developers and DevOps
Containerization has become the backbone of modern software delivery. But as adoption has exploded, so has the attack surface. Vulnerable base images, outdated dependencies, and misconfigured runtimes have quietly become some of the most common entry points for supply‑chain attacks.
The industry has been asking for a better baseline—something secure by default, continuously maintained, and frictionless for teams to adopt. And now we’re finally seeing it: free hardened Docker images becoming widely available from major vendors and open‑source security communities.
This shift isn’t just a convenience upgrade. It’s a fundamental change in how we think about container security.
Why Hardened Images Matter More Than Ever
A “hardened” image isn’t just a slimmer version of a base OS. It’s a container that has been:
Stripped of unnecessary packages
Fewer binaries = fewer vulnerabilities.
Built with secure defaults
Non‑root users, locked‑down permissions, and minimized attack surface.
Continuously scanned and patched
Automated pipelines ensure CVEs are fixed quickly.
Cryptographically signed
So you can verify provenance and integrity before deployment.
Aligned with compliance frameworks
CIS Benchmarks, NIST 800‑190, and other standards are increasingly baked in.
For developers, this means fewer surprises during security reviews. For DevOps teams, it means fewer late‑night patch cycles and fewer emergency rebuilds.
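You don’t have to wait for a vendor image to apply some of these ideas to your own builds. Below is a minimal sketch of a Dockerfile that bakes in a few of the hardening defaults described above; the base image, user name, and paths are assumptions for illustration, not a vendor recipe:

```shell
# Illustrative "hardened-style" Dockerfile; the base image, user name,
# and paths are assumptions for this sketch, not a vendor recipe.
cat > Dockerfile.hardened <<'EOF'
# Small base image: fewer packages means fewer vulnerabilities
FROM alpine:3.20

# Create and switch to a dedicated non-root user
RUN addgroup -S app && adduser -S -G app app
USER app
WORKDIR /home/app

# Copy only the application binary, owned by the non-root user
COPY --chown=app:app ./app /home/app/app

# At runtime, pair this with a read-only root filesystem:
#   docker run --read-only --tmpfs /tmp <image>
ENTRYPOINT ["/home/app/app"]
EOF

cat Dockerfile.hardened
```

The point of the free hardened images is that you get these defaults (and the patching pipeline behind them) without maintaining a file like this yourself.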
What’s New About the Latest Generation of Free Hardened Images
The newest wave of hardened images goes far beyond the “minimal OS” approach of the past. Here’s what’s changing:
Hardened Language Runtimes
We’re seeing secure-by-default images for:
Python
Node.js
Go
Java
.NET
Rust
These images often include:
Preconfigured non‑root users
Read‑only root filesystems
Mandatory access control profiles
Reduced dependency trees
Automated SBOMs (Software Bills of Materials)
Every image now ships with a machine‑readable SBOM.
This gives you:
Full visibility into dependencies
Faster vulnerability triage
Easier compliance reporting
SBOMs are no longer optional—they’re becoming a standard part of secure supply chains.
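Generating and using an SBOM is a one-liner with common open-source tooling. A sketch, assuming the Syft and Grype scanners are installed (the image name is an example):

```shell
# Produce a machine-readable SPDX SBOM for an image
syft alpine:3.20 -o spdx-json > sbom.spdx.json

# Triage: scan the SBOM itself for known CVEs
grype sbom:sbom.spdx.json
```

Hardened image providers typically ship this SBOM alongside the image, so the first step is already done for you.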
Built‑in Image Signing and Verification
Tools like Sigstore Cosign, Notary v2, and Docker Content Trust are now integrated directly into image pipelines.
This means you can enforce:
“Only signed images may run” policies
Zero‑trust container admission
Immutable deployment guarantees
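With Sigstore Cosign, for example, signing and verification are each a single command. A sketch assuming Cosign is installed and a local key pair is used (the registry and image names are placeholders):

```shell
cosign generate-key-pair                              # creates cosign.key / cosign.pub
cosign sign --key cosign.key registry.example.com/app:1.0
cosign verify --key cosign.pub registry.example.com/app:1.0
```

An admission controller in your cluster can then run the same verification before any image is allowed to start.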
Continuous Hardening Pipelines
Instead of waiting for monthly rebuilds, hardened images are now updated:
Daily
Automatically
With CVE‑aware rebuild triggers
This dramatically reduces the window of exposure for newly discovered vulnerabilities.
Introduction
Docker Desktop 4.39.0 is here, bringing a host of new features designed to enhance developer productivity, streamline workflows, and improve security. This release continues Docker’s commitment to providing efficient, secure, and reliable tools for building, sharing, and running applications.
Key Features in Docker Desktop 4.39.0
Docker AI Agent with Model Context Protocol (MCP) and Kubernetes Support
The Docker AI Agent, introduced in previous versions, has been upgraded to support MCP and Kubernetes. MCP enables AI-powered applications to access external data sources, perform operations with third-party services, and interact with local filesystems. Kubernetes support allows the AI Agent to manage namespaces, deploy services, and analyze pod logs.
General Availability of Docker Desktop CLI
The Docker Desktop CLI is now officially available, offering developers a powerful command-line interface for managing containers, images, and volumes. The new docker desktop logs command simplifies log management.
Platform Flag for Multi-Platform Image Management
Docker Desktop now supports the --platform flag on docker load and docker save commands, enabling seamless import and export of multi-platform images.
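For example, to export only the arm64 variant of a multi-platform image and re-import it elsewhere (the image and file names are illustrative):

```shell
docker pull --platform linux/arm64 alpine:3.20
docker save --platform linux/arm64 -o alpine-arm64.tar alpine:3.20
docker load --platform linux/arm64 -i alpine-arm64.tar
```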
Enhanced Containerization Across Programming Languages
The Docker AI Agent can now containerize applications written in JavaScript, Python, Go, C#, and more. It analyzes projects to identify services, programming languages, and package managers, making containerization effortless.
Security Improvements
Docker Desktop 4.39.0 addresses critical vulnerabilities, such as CVE-2025-1696, ensuring proxy authentication credentials are no longer exposed in plaintext.
Developer Productivity: The upgraded Docker AI Agent simplifies container management and troubleshooting, saving developers time and effort.
Multi-Platform Flexibility: The --platform flag ensures compatibility across diverse environments, making Docker Desktop a versatile tool for modern development.
Enhanced Security: By addressing vulnerabilities, Docker Desktop 4.39.0 reinforces its position as a secure platform for application development.
Conclusion
Docker Desktop 4.39.0 is a significant step forward, offering smarter tools, improved security, and greater flexibility for developers. Whether you’re managing Kubernetes clusters or containerizing applications, this release has something for everyone.
DevOps brings together people, process, and technology to continually provide value to customers.
While adopting DevOps practices automates and optimizes processes through technology, it all starts with the culture inside the organization—and the people who play a part in it. The challenge of cultivating a DevOps culture requires deep changes in the way people work and collaborate. But when organizations commit to a DevOps culture, they can create the environment for high-performing teams to develop.
My name is James van den Berg and I’m an MVP in Cloud and Datacenter Management, on my DevOps journey as an IT infrastructure guy managing datacenters on-premises and in the Microsoft Azure cloud. Today it’s not only a virtual machine or a website you deploy for your customers; it’s much more than that, such as:
Time to market: deploy your solution fast, without waiting on dependencies, because you have automated your process with a CI/CD pipeline.
Security and monitoring to keep you in control.
Working together with different teams, each responsible for a part of the solution.
Start creating an Azure Kubernetes cluster for your containers.
Managed Azure Kubernetes Service (AKS) makes deploying and managing containerized applications easy. It offers serverless Kubernetes, an integrated continuous integration and continuous delivery (CI/CD) experience, and enterprise-grade security and governance. As a hosted Kubernetes service, Azure handles critical tasks like health monitoring and maintenance for you. The Kubernetes masters are managed by Azure; you only manage and maintain the agent nodes. As a managed Kubernetes service, AKS is free: you only pay for the agent nodes within your clusters, not for the masters.
In the following steps you can see the different ways of creating an Azure Kubernetes cluster: via the Azure portal, via Azure Cloud Shell, or via an Azure Resource Manager template. When the Microsoft Azure Kubernetes cluster is running, I will explain the different ways of deploying container workloads on AKS. When your workload is running on Azure Kubernetes Service, you also have to monitor your container workloads with Azure Monitor Container Insights to stay in control. Let’s start with installing Azure Kubernetes Service (AKS).
Installing Azure Kubernetes Cluster via the Portal.
To begin you need, of course, a Microsoft Azure subscription; you can start for free here.
Basics information of the Azure Kubernetes Cluster
To Create the Azure Kubernetes Cluster, you have to follow these steps and type the right information in the Portal:
Basics
Scale
Authentication
Networking
Monitoring
Tags
Review + Create
At the basics screen you select the right Azure Subscription and the Resource Group. You can create a New Resource Group or one you already made.
At Cluster details, you give your Cluster a name and select the Kubernetes version.
Here you select the Kubernetes Node size for your Container workload and the number of nodes.
You can start a cluster with just one node, but choose the right size for your workloads from the start.
When you click on Change size, you can choose your nodes to do the job. 😉
Select the right Size node
Then we go to step 2: Scale.
2. Scale options in Azure Kubernetes Cluster
Here you have two options:
Virtual Nodes
VM Scale sets (Preview)
To quickly deploy workloads in an Azure Kubernetes Service (AKS) cluster, you can use virtual nodes. With virtual nodes, you have fast provisioning of pods, and only pay per second for their execution time. In a scaling scenario, you don’t need to wait for the Kubernetes cluster autoscaler to deploy VM compute nodes to run the additional pods. Virtual nodes are only supported with Linux pods and nodes. More information here about Virtual Nodes
To create an AKS cluster that can use multiple node pools, first enable two feature flags on your subscription. Multi-node pool clusters use a virtual machine scale set (VMSS) to manage the deployment and configuration of the Kubernetes nodes. With this Preview feature you can run Linux Containers and Windows Containers on the same Cluster. More information here about VM Scale sets (Preview)
3. Authentication
To interact with Azure APIs, an AKS cluster requires an Azure Active Directory (AD) service principal. The service principal is needed to dynamically create and manage other Azure resources, such as an Azure load balancer or an Azure Container Registry (ACR). More information about the service principal can be found here.
Azure Kubernetes Service (AKS) can be configured to use Azure Active Directory (Azure AD) for user authentication. In this configuration, you can sign in to an AKS cluster by using your Azure AD authentication token.
Cluster administrators can configure Kubernetes role-based access control (RBAC) based on a user’s identity or directory group membership. More information about RBAC for AKS
4. Networking
Configuring the virtual Networks for your Azure Kubernetes Cluster is important for the right IP range but later on also for the Network Security Groups (NSG).
Here you see an example of the Kubernetes NSG, which is connected to the Internet by default after installation. You can deep-dive into security, but be careful which settings you change here, because Microsoft resources must have access to service the Azure Kubernetes cluster.
NSG created after installation is finished
NSG Rule set Inbound and outbound
In a container-based microservices approach to application development, application components must work together to process their tasks. Kubernetes provides various resources that enable this application communication. You can connect to and expose applications internally or externally. To build highly available applications, you can load balance your applications. More complex applications may require configuration of ingress traffic for SSL/TLS termination or routing of multiple components. For security reasons, you may also need to restrict the flow of network traffic into or between pods and nodes.
Best practices for network connectivity and security in Azure Kubernetes Service (AKS):
5. Monitoring
Keep Azure monitoring enabled and connect to your Log Analytics workspace, or create a new workspace for container monitoring of your Azure Kubernetes cluster.
Azure Monitor for containers is a feature designed to monitor the performance of container workloads deployed to either Azure Container Instances or managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS). Monitoring your containers is critical, especially when you’re running a production cluster, at scale, with multiple applications.
Azure Monitor for containers gives you performance visibility by collecting memory and processor metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. Container logs are also collected. After you enable monitoring from Kubernetes clusters, metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. Metrics are written to the metrics store and log data is written to the logs store associated with your Log Analytics workspace.
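If you skipped monitoring at creation time, the add-on can also be enabled later from the CLI. A sketch, with placeholder resource names and workspace ID:

```shell
# Enable Azure Monitor for containers on an existing AKS cluster
# (resource names and the workspace resource ID are placeholders)
az aks enable-addons \
  --resource-group myAKS-rg \
  --name myAKSCluster \
  --addons monitoring \
  --workspace-resource-id "/subscriptions/<sub-id>/resourcegroups/<rg>/providers/microsoft.operationalinsights/workspaces/<workspace>"
```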
6. Tags
When you build more Azure Kubernetes clusters for different departments or teams, you can tag your clusters, for example to organize billing and security. Here you find more information about tagging.
After this you click on the last step, Review + Create. The Azure portal will validate your Azure Kubernetes cluster settings, and when it’s validated you hit Create. But when you want more automation, you can download the JSON ARM template first and use that.
Installing Azure Kubernetes Cluster via Cloud Shell
Azure Cloud Shell AKS CLI
Azure hosts Azure Cloud Shell, an interactive shell environment that you can use through your browser. Cloud Shell lets you use either bash or PowerShell to work with Azure services. You can use the Cloud Shell pre-installed commands to run the code in this article without having to install anything on your local environment.
Here you see an example of the AKS CLI with the autoscaler and a maximum node count 😉
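A sketch of that CLI call; the resource names are placeholders and the autoscaler bounds are example values:

```shell
# Create a resource group, then an AKS cluster with the cluster
# autoscaler enabled (names and bounds are illustrative)
az group create --name myAKS-rg --location westeurope

az aks create \
  --resource-group myAKS-rg \
  --name myAKSCluster \
  --node-count 1 \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5 \
  --generate-ssh-keys

# Fetch kubeconfig credentials so kubectl can talk to the cluster
az aks get-credentials --resource-group myAKS-rg --name myAKSCluster
```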
Installing Azure Kubernetes Cluster via Template
Create Azure Kubernetes Cluster via Template in the Portal
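Once you have downloaded the ARM template from the portal, the same cluster can be deployed repeatably from the CLI. A sketch, assuming the exported template and parameter files (file and group names are placeholders):

```shell
# Deploy the exported ARM template into an existing resource group
az deployment group create \
  --resource-group myAKS-rg \
  --template-file template.json \
  --parameters parameters.json
```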
Now that you have your Microsoft Azure Kubernetes Service (AKS) cluster running in the cloud, you want to deploy your container workloads on it. The following steps show different deployments.
Deploy Container workload with Azure DevOps Project
Deployment Center
First you select the repository that contains the source code of your workload.
Set the information right and click Next.
Simple example Click Next
Create a Container Registry.
Building Pipeline with Azure DevOps.
Here you see the Building in Microsoft Azure DevOps.
Build, test, and deploy in any language, to any cloud—or on-premises. Run in parallel on Linux, macOS, and Windows, and deploy containers to individual hosts or Kubernetes.
Here you find all the information about Microsoft Azure DevOps for your workloads, code, and deployments.
Deploying Container workload completed with Azure DevOps.
Deploy Container Workloads via Visual Studio Code
When you download and install Visual Studio Code on your computer, you can install the Azure Kubernetes extension for VSCode.
Here you see Microsoft Visual Studio Code connected with my Azure subscription where my Azure Kubernetes Cluster is running. With the standard Helm Repository packages for deployment to your AKS Cluster. Here you see a WordPress yaml file which I deployed to the Kubernetes Cluster on Azure.
Just Select your Package and Install on Azure Kubernetes.
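Under the hood the extension runs ordinary Helm commands, so the same deployment can be scripted. A sketch, assuming the Bitnami chart repository (the release and chart names are examples):

```shell
# Add the chart repository and deploy WordPress to the current cluster
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-wordpress bitnami/wordpress

# Watch the WordPress pods come up
kubectl get pods
```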
From here you can go into the container and read the logs.
I’m using Visual Studio Code a lot for Azure Kubernetes but also for Docker Containers and images.
Making Azure ARM JSON templates is also great for Infrastructure as Code.
Azure Monitoring with Container Insights
In One Dashboard you can see the Status of all your Clusters
Azure Monitor Container Insights Live View
Because we installed Azure Monitor for containers on the Microsoft Azure Kubernetes cluster, we can see live what is happening inside the Kubernetes cluster with the containers. This is a great feature when you have an issue with a container: you can troubleshoot fast and see what is happening.
Conclusion
A Microsoft Azure Kubernetes cluster is fast and easy to manage. You can upgrade your cluster without downtime for your container workload. With Azure Monitor for containers you can see what’s happening inside the container, and you can set alerts when something goes wrong. This keeps you in control of the solution. With Deployment Center (also known as Azure DevOps Projects) you can deploy your workload via an Azure DevOps pipeline, and work with a team on versioning, test plans, Azure DevOps Repos, and the following releases. Working with Azure Kubernetes multi-node pools, with Linux and Windows on the same cluster, is possible. Try it yourself and start with a proof of concept for your business.
Configure continuous integration (CI) and continuous delivery (CD) for your IoT Edge application with DevOps Projects. DevOps Projects simplifies the initial configuration of a build and release pipeline in Azure Pipelines.
In the following steps you can see how easy it is to build your continuous integration and continuous deployment to Azure IoT Edge with DevOps Projects:
Select Simple IoT
Click on Next.
From here you connect your Azure DevOps organization to your Azure IoT Hub. Click on additional settings.
In additional settings you can set:
Azure Resource Group
Location (region)
Container Registry
Container Registry name
Container registry SKU
Container Location
IoT Hub of Edge Devices
IoT Hub Location
Select Container Registry Plan
Azure Container Registry allows you to store images for all types of container deployments including DC/OS, Docker Swarm, Kubernetes, and Azure services such as App Service, Batch, Service Fabric, and others. Your DevOps team can manage the configuration of apps isolated from the configuration of the hosting environment.
More information about Azure Container Registry and pricing
Azure DevOps Project will do the rest of the deployment.
Of course, Infrastructure as Code (IaC) is possible with an ARM JSON template.
Save the template for later.
Here you get your ARM templates.
Later you will see when you complete the deployment, that your JSON ARM template is in Azure DevOps Repo.
You can connect your Azure DevOps Repo via the portal but also via Visual Studio and Visual Studio Code.
The resources are deployed into myiotpipeline-rg.
MyIOTPipeline-IoTHub is created.
MyIOTPipelineACR Container Registry is created.
MyIOTPipeline with Azure DevOps is created 🙂
Your Continuous integration and continuous deployment to Azure IoT Edge is deployed and active. Now you have your Azure Pipeline in place to continuously update your IoT Device App. From here you can go to Azure DevOps Project Homepage.
Via Agent phase you can see all the jobs of the deployment.
Azure DevOps Pipeline Release
Here we have Azure DevOps Repos.
Azure DevOps Services includes free unlimited private Git repos, so Azure Repos is easy to try out. Git is the most commonly used version control system today and is quickly becoming the standard for version control. Git is a distributed version control system, meaning that your local copy of code is a complete version control repository. These fully functional local repositories make it easy to work offline or remotely. You commit your work locally, and then sync your copy of the repository with the copy on the server.
Git in Azure Repos is standard Git. You can use the clients and tools of your choice, such as Git for Windows, Mac, partners’ Git services, and tools such as Visual Studio and Visual Studio Code.
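The local-first workflow described above can be tried entirely offline. A minimal sketch using a throwaway repository (the path and identity are placeholders):

```shell
# Everything below runs offline: the local clone IS a full repository.
set -e
rm -rf /tmp/repo-demo && mkdir -p /tmp/repo-demo && cd /tmp/repo-demo

git init -q                                  # complete local repository
git config user.email "dev@example.com"      # placeholder identity
git config user.name  "Dev"

echo "hello" > readme.md
git add readme.md
git commit -qm "first commit"                # committed locally, no server needed

git log --oneline                            # history lives entirely on disk
```

Only when you later run git push does this history sync to the copy in Azure Repos.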
All the Azure Resources for the IoT Edge Pipeline with Azure DevOps.
When you have your Azure DevOps Pipeline with IoT Edge devices running, you can monitor your pipeline with Analytics inside Azure DevOps.
Click Next.
Click on Install Analytics.
Select the right Azure DevOps Organization for your IoT Edge Pipeline.
Done!
Analytics is now active, you can make automated test plans in Azure DevOps and see the results via Analytics.
Azure DevOps Overview Dashboard.
A lot of predefined Analytics views are shared for you.
An Analytics view provides a simplified way to specify the filter criteria for a Power BI report based on the Analytics Service data store. The Analytics Service provides the reporting platform for Azure DevOps. More information about Analytics in Azure DevOps here
It’s easy to start with Power BI and the Azure DevOps connector.
Planned manual testing
Plan, execute, and track scripted tests with actionable defects and end-to-end traceability. Assess quality throughout the development lifecycle by testing your desktop or web applications.
More information about making your test plan for your IoT Edge devices pipeline in Azure DevOps.
Conclusion
When you connect the Microsoft Azure IoT Edge hub with your Internet of Things devices and combine it with Microsoft Azure DevOps to develop your Azure IoT pipeline, you are in full control of continuous integration and continuous deployment to Azure IoT Edge. From here you can bring your innovations and the Intelligent Cloud & Edge, with Artificial Intelligence and Machine Learning, to your devices. You will see that this combination will be awesome for healthcare, smart cities, smart buildings, infrastructure, and the tech industry.
In this Microsoft article, you learn how to use the built-in Azure IoT Edge tasks for Azure Pipelines to create two pipelines for your IoT Edge solution. The first takes your code and builds the solution, pushing your module images to your container registry and creating a deployment manifest. The second deploys your modules to targeted IoT Edge devices.
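As a rough sketch, the build half of such a pipeline can be expressed in YAML using the built-in AzureIoTEdge task; the file paths, platform, and trigger branch below are assumptions you would adapt to your own solution:

```shell
# Write an illustrative build-stage pipeline definition. The task inputs
# are based on the AzureIoTEdge@2 task; paths and platform are assumptions.
cat > azure-pipelines.yml <<'EOF'
trigger:
  - main
pool:
  vmImage: ubuntu-latest
steps:
  - task: AzureIoTEdge@2
    displayName: Build module images
    inputs:
      action: Build module images
      templateFilePath: deployment.template.json
      defaultPlatform: amd64
  - task: AzureIoTEdge@2
    displayName: Push module images
    inputs:
      action: Push module images
      templateFilePath: deployment.template.json
      defaultPlatform: amd64
EOF

cat azure-pipelines.yml
```

The release pipeline then uses the same task with a deploy action to push the generated manifest to your targeted IoT Edge devices.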
First of all Thank you for following me and Sharing Microsoft Cloud and Datacenter Management content on Social Media 🙂 Sharing & Learning Together is Better.
Welcome to the 577 new followers on Twitter, out of 5,904 followers 🙂
More than 2,807,000 tweet impressions in one year!
Started MVPbuzz Friday for Education, to get Azure Cloud into the classroom, working together with teachers and students in my free time.
Working with Microsoft Learn in Teams for the Students.
Meetings and Speaking for Education, all about Azure and AzureStack Technologies.
Conferences, like the Global MVP Summit 2018, DevOps Amsterdam, Community Group meetings.
Microsoft Ignite, Microsoft Build, Microsoft Connect events.
Almost every week Microsoft Product Group Intervention (PGI) sessions Online.
Sharing the News every Day via Twitter, Facebook, LinkedIn, Microsoft Tech Community, Blog
But what is coming in 2019?
Rocking with Azure in the Classroom !
I will continue every day sharing knowledge with the Community and continue my Free work on MVPbuzz Friday for Education to get Azure Cloud Technology in the Classroom for Teachers and Students.
The trend I see for 2019 is more Infrastructure and Security by Code with Microsoft Azure DevOps
and of course you have to be in Control with Microsoft Azure Monitor
I will write a blogpost in January 2019 about Microsoft Azure Hub-Spoke model by Enterprise Design 4 of 4: Optimize your Azure Workload.
More Items in 2019 to come :
Microsoft Azure Security Center for Hybrid IT
Windows Server 2019 in combination with Azure Cloud Services.
More on Containers in the Cloud
Azure Stack and ASDK
Integration with Azure Cloud.
API Management
Azure DevOps Pipelines and Collaboration
Azure IoT for Smart Cities and Buildings combined with AI Technology
2019 will be a Great year again with New Microsoft Technologies and Features for your business.
Azure DevOps Services is a cloud service for collaborating on code development. It provides an integrated set of features that you access through your web browser or IDE client, including:
Git repositories for source control of your code
Build and release services to support continuous integration and delivery of your apps
Agile tools to support planning and tracking your work, code defects, and issues using Kanban and Scrum methods
Many tools to test your apps, including manual/exploratory testing, load testing, and continuous testing
Highly customizable dashboards for sharing progress and trends
Built-in wiki for sharing information with your team
The Azure DevOps ecosystem also supports adding extensions and integrating with other popular services, such as Campfire, Slack, Trello, and UserVoice, as well as developing your own custom extensions.
Collaborative software development tools for the entire team
Previously known as Team Foundation Server (TFS), Azure DevOps Server is a set of collaborative software development tools, hosted on-premises. Azure DevOps Server integrates with your existing IDE or editor, enabling your cross-functional team to work effectively on projects of all sizes.