I'm using `elastic-package stack up -v` to boot the stack for testing purposes.
While working with integrations I noticed that the integration I've been working on was already installed across stack restarts.
My workflow is to run (in shell A):

```sh
integrations $ elastic-package stack up -v
```

Then develop in shell B, running:

```sh
integrations/package/gcp $ elastic-package build
integrations/package/gcp $ elastic-package stack up --services package-registry -d
```
To restart the stack I terminate the `up` process in shell A with Ctrl-C, then start it again.
Once I install the `gcp` package, though, it's reported as installed across restarts. This suggests the cluster state is being persisted.
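A quick way to observe this is asking Fleet which packages are installed. This is only a sketch: the Kibana list endpoint `/api/fleet/epm/packages` exists, but the URL scheme, the default `elastic:changeme` credentials, and the exact response shape are assumptions that may vary with the stack version.

```sh
# List packages known to Fleet and pick out gcp; the response shape
# (.response vs .items) differs between Kibana versions.
curl -sk -u elastic:changeme https://localhost:5601/api/fleet/epm/packages \
  | jq '.response[] | select(.name == "gcp") | {name, version, status}'
# Right after a Ctrl-C + `stack up` cycle this still reports "status": "installed".
```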
This behaviour is counterintuitive at best, and I've not been able to work out which Docker feature is producing it. The services do not have a volume attached for persistence, yet running `elastic-package stack down` destroys the state, so my hypothesis is that it happens because volumes are destroyed by that command; see `elastic-package/internal/stack/compose.go`, lines 100 to 104 in 9b5a0fa:
```go
downOptions := compose.CommandOptions{
	Env: append(appConfig.StackImageRefs(options.StackVersion).AsEnv(), options.Profile.ComposeEnvVars()...),

	// Remove associated volumes.
	ExtraArgs: []string{"--volumes"},
}
```
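One way to check the hypothesis, using plain Docker commands: after Ctrl-C in shell A the containers and their anonymous volumes should still exist, which is presumably what the next `up` reuses.

```sh
# After Ctrl-C in shell A: the containers are stopped but not removed...
docker ps -a --filter "name=elastic-package"
# ...and their volumes (e.g. the Elasticsearch data volume) still exist:
docker volume ls
# After `elastic-package stack down` (which passes --volumes), both lists
# should come back empty for the stack.
```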
Without `-d` this behaviour is more visible, because stopping the containers only takes a Ctrl-C on the `up` process.
With `-d` the easiest way to stop the stack is `elastic-package stack down`; with that workflow the behaviour is not observed.
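In plain docker-compose terms, the difference between the two workflows is roughly the following (a sketch; the `elastic-package-stack` project name is an assumption):

```sh
# Ctrl-C on a foreground `docker-compose up` behaves like:
docker-compose -p elastic-package-stack stop            # keeps containers + volumes
# while `elastic-package stack down` runs the equivalent of:
docker-compose -p elastic-package-stack down --volumes  # removes both
```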
OS: elementary OS 5.1.7 Hera (based on Ubuntu 18.04.4 LTS)
`docker version`:

```
Client: Docker Engine - Community
 Version:           20.10.8
 API version:       1.41
 Go version:        go1.16.6
 Git commit:        3967b7d
 Built:             Fri Jul 30 19:54:08 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.8
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.6
  Git commit:       75249d8
  Built:            Fri Jul 30 19:52:16 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.9
  GitCommit:        e25210fe30a0a703442421b0f60afac609f950a3
 runc:
  Version:          1.0.1
  GitCommit:        v1.0.1-0-g4144b63
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
```
`docker-compose version`:

```
docker-compose version 1.28.6, build unknown
docker-py version: 4.4.4
CPython version: 3.8.9
OpenSSL version: OpenSSL 1.1.1k  25 Mar 2021
```