feat: support flavours in services, specially in the elastic-agent (#1162)

mdelapenya merged 21 commits into elastic:master
Conversation
We will need another abstraction to represent the Docker client operations, so that it's clear what is a deployment and what is an operation on the deployment. Maybe a Client struct for each provider will help differentiate them.
Oops, I did not rebase first. Will send the merge resolving conflicts.
* master:
  feat: simplify the initialisation of versions (elastic#1159)
  chore(mergify): delete upstream branches on merge (elastic#1158)
💚 Build Succeeded
This looks awesome. I did want to point out that some of this code will conflict with the work I'm doing on getting the installers cleaned up, specifically in the areas of executing and adding files to the environment, as I pass the deployer's ExecIn and AddFiles functions into the calling Mount function when loading the installers: https://github.com/elastic/e2e-testing/compare/feat-installer-rework
```go
Short: `Deploys a ` + srv + ` service`,
Long:  `Deploys a ` + srv + ` service, adding it to a running profile, identified by its name`,
Run: func(cmd *cobra.Command, args []string) {
	serviceManager := compose.NewServiceManager()
```
Rather than calling deploy.NewServiceManager, we could initialize it like we do in the fleet tests; that way you just use deployer.Add instead and it'll do the right thing no matter the provider (docker or k8s).
If we do that, we will be introducing a bug on the CLI side, as we have coupled the Bootstrap method with the Fleet profile. Maybe we can fix that in a separate PR.
Well, indeed the bug is there if we decide to migrate the metricbeat test suite 😄
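The suggestion in this thread could be sketched roughly like the following Go snippet. Everything here is a hypothetical stand-in for the repo's deployer abstraction (the `Deployment` interface, `newDeployment`, and both provider structs are assumptions, not the actual API); it only illustrates "resolve a provider once, then call `Add`" instead of constructing a compose-specific service manager in the CLI:

```go
// Hypothetical sketch of a provider-agnostic deployer: the CLI would ask
// for a Deployment by provider name and call Add, instead of hard-wiring
// compose.NewServiceManager.
package main

import "fmt"

// Deployment is the assumed shape of the deployer abstraction.
type Deployment interface {
	Add(profile string, services []string) error
}

type dockerDeployment struct{}

func (d dockerDeployment) Add(profile string, services []string) error {
	fmt.Printf("docker-compose: adding %v to profile %s\n", services, profile)
	return nil
}

type k8sDeployment struct{}

func (d k8sDeployment) Add(profile string, services []string) error {
	fmt.Printf("kubectl: applying %v in %s\n", services, profile)
	return nil
}

// newDeployment picks a backend by name; real code would read a provider flag.
func newDeployment(provider string) Deployment {
	if provider == "kubernetes" {
		return k8sDeployment{}
	}
	return dockerDeployment{}
}

func main() {
	deployer := newDeployment("docker")
	_ = deployer.Add("fleet", []string{"elastic-agent"})
}
```

With this shape, the caller never branches on the provider itself: it just calls `deployer.Add` and the selected backend does the right thing.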
```go
// get container hostname once
hostname, err := docker.GetContainerHostname(containerName)
```
In my installer rework code, we don't need to query the hostname, as that is done by inspecting the service, further abstracting out connection information. We want to be able to just perform actions on a "service" and not have to be so explicit about how to access it, as that is handled by the deployment and installer abstractions.
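The "inspect once, then reference metadata" idea described in this comment could look roughly like the sketch below. The names (`ServiceManifest`, `inspect`) and the hostname conventions are assumptions for illustration, not the rework branch's actual code:

```go
// Hypothetical sketch: the deployment inspects a service once and caches
// its metadata, so tests reference manifest.Hostname instead of calling
// docker.GetContainerHostname directly.
package main

import "fmt"

// ServiceManifest is the per-service metadata an Inspect call would populate.
type ServiceManifest struct {
	Name     string
	Hostname string
	Platform string // "docker" or "kubernetes"
}

// inspect stands in for a provider-specific Inspect implementation.
func inspect(provider, name string) ServiceManifest {
	// docker-compose containers are reachable by their service name
	hostname := name
	if provider == "kubernetes" {
		// k8s services resolve through cluster DNS instead
		hostname = name + ".default.svc"
	}
	return ServiceManifest{Name: name, Hostname: hostname, Platform: provider}
}

func main() {
	m := inspect("docker", "fleet-server")
	fmt.Println(m.Hostname)
}
```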
```go
// we are using the Docker client instead of docker-compose because it does not support
// returning the output of a command: it simply returns error level
func (fts *FleetTestSuite) getContainerName(i installer.ElasticAgentInstaller, index int) string {
	return fmt.Sprintf("%s_%s_%s_%d", i.Profile, i.Image, common.ElasticAgentServiceName, index)
}
```
Same thing here: in the other branch we use an Inspect function to populate the service metadata, so that we can just reference that rather than trying to piece together the container name, as these would differ between docker and k8s.
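To make the concern concrete: the `fmt.Sprintf` pattern from the diff above encodes docker-compose's naming convention, which simply does not hold for kubernetes pods. A small illustrative snippet (the helper is extracted here only for demonstration):

```go
// Illustration of the review comment: composing the container name by
// hand bakes in docker-compose's profile_image_service_index convention.
package main

import "fmt"

// containerName mirrors the fmt.Sprintf pattern from the diff above.
func containerName(profile, image, service string, index int) string {
	return fmt.Sprintf("%s_%s_%s_%d", profile, image, service, index)
}

func main() {
	// Valid for compose-managed containers...
	fmt.Println(containerName("fleet", "centos", "elastic-agent", 1))
	// ...but a k8s pod gets its name from the deployment plus a generated
	// suffix, which is why the rework resolves names via Inspect instead.
}
```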
```go
imageService := deploy.NewServiceRequest(common.ElasticAgentServiceName).WithFlavour(agentInstaller.Image)
```
```go
if state == "started" {
	return installer.SystemctlRun(profile, agentInstaller.Image, serviceName, "start")
```
In my branch these are actually abstracted out to the installer service as Start and Stop, and are specific to the service we are installing (this could differ for other services that may not use systemctl).
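The Start/Stop abstraction described here could be sketched as follows. The interface and struct names are hypothetical, and `systemctlRun` is a local stand-in for `installer.SystemctlRun`; the point is only that systemd becomes one implementation among possibly several:

```go
// Hypothetical sketch: hide systemctl behind per-installer Start/Stop so
// services that don't use systemd can provide their own implementations.
package main

import "fmt"

// systemctlRun stands in for installer.SystemctlRun from the codebase.
func systemctlRun(profile, image, service, cmd string) error {
	fmt.Printf("systemctl %s %s (profile=%s image=%s)\n", cmd, service, profile, image)
	return nil
}

// ServiceOperator is the assumed shape of the abstraction in the rework branch.
type ServiceOperator interface {
	Start() error
	Stop() error
}

// systemdOperator controls services managed by systemd.
type systemdOperator struct{ profile, image, service string }

func (s systemdOperator) Start() error {
	return systemctlRun(s.profile, s.image, s.service, "start")
}

func (s systemdOperator) Stop() error {
	return systemctlRun(s.profile, s.image, s.service, "stop")
}

func main() {
	var op ServiceOperator = systemdOperator{"fleet", "centos", "elastic-agent"}
	_ = op.Start()
}
```

A service that is not systemd-managed would just provide another `ServiceOperator` implementation, and the test code calling `op.Start()` would not change.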
adam-stokes left a comment:
I think we should pull this in as-is, and I can rework my installer code to make use of the new abstractions for services. There is some code in here that will change based on my work, but I think it's mostly within the installer and interacting with the mounted services.
```yaml
test: "curl -f http://localhost:5601/login | grep kbn-injected-metadata 2>&1 >/dev/null"
retries: 600
interval: 1s
image: "docker.elastic.co/${kibanaDockerNamespace:-beats}/kibana:${kibanaVersion:-8.0.0-SNAPSHOT}"
```
I always wondered about this one :)
Wrong copy&paste on my side for sure 🤦
I think I'm gonna merge it after the CI passes
💚 Build Succeeded
* master: feat: support flavours in services, specially in the elastic-agent (elastic#1162)
feat: support flavours in services, specially in the elastic-agent (elastic#1162)

* chore: move compose to deploy package
* feat: use a ServiceRequest when adding services
* feat: add service flavour support
* chore: remove unused centos/debian services
* fixup: add service flavour
* chore: move docker client to the deploy package. We will need another abstraction to represent the Docker client operations, so that it's clear what is a deployment and what is an operation on the deployment. Maybe a Client struct for each provider will help differentiate them.
* chore: use ServiceRequest everywhere
* chore: run agent commands with a ServiceRequest
* chore: use ServiceRequest in metricbeat test suite
* chore: pass flavours to installers
* chore: add a step to install the agent for the underlying OS
* chore: always add flavour
* fix: use installer for fleet_mode when removing services at the end of the scenario
* fix: update broken references in metricbeat test suite
* fix: update broken references in helm test suite
* fix: standalone does not have an installer
* fix: use service instead of image to get a service request for the agent
* feat: support for scaling services in compose
* fix: run second agent using compose scale option
* fix: update kibana's default Docker namespace
What does this PR do?
This PR comes with a big refactor of the internals of the installer and deployer structs:
- Moves the `compose.go` file from the `compose` package to the `deploy` one.
- Each `docker-compose.yml` file will represent the service in that particular flavour.
- `request.Name` (@adam-stokes, would need help with the kubernetes implementation to support flavours in k8s descriptors)
- (`executeCompose`), and that one will be responsible for unwrapping the struct, adding a path separator between the service and its flavour, i.e. `elastic-agent/centos`
- `deployer.Client.Operation` 🤔
- `cloud` or the regular one, it will be passed to the methods.

Why is it important?
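The flavour naming convention described above (service plus a path separator plus flavour, e.g. `elastic-agent/centos`) can be sketched minimally like this. `composePath` is a hypothetical helper for illustration; in the PR the unwrapping happens inside `executeCompose`:

```go
// Minimal sketch of the ServiceRequest flavour naming: a request carries
// an optional flavour, and the compose executor joins service and flavour
// with a path separator to locate the flavour's docker-compose.yml.
package main

import (
	"fmt"
	"path"
)

type ServiceRequest struct {
	Name    string
	Flavour string
}

func NewServiceRequest(name string) ServiceRequest { return ServiceRequest{Name: name} }

// WithFlavour returns a copy of the request carrying the given flavour.
func (s ServiceRequest) WithFlavour(f string) ServiceRequest {
	s.Flavour = f
	return s
}

// composePath resolves the directory holding the service's compose file.
func (s ServiceRequest) composePath() string {
	if s.Flavour == "" {
		return s.Name
	}
	return path.Join(s.Name, s.Flavour)
}

func main() {
	srv := NewServiceRequest("elastic-agent").WithFlavour("centos")
	fmt.Println(srv.composePath()) // prints "elastic-agent/centos"
}
```

Because callers build a `ServiceRequest` instead of passing raw strings, adding a new flavour only means dropping a new `docker-compose.yml` under the service's directory.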
The most important goal is to simplify the project layout, not only in the file system but also in the structs used to represent the different installers. Once achieved, we can extend the flavours in an easy manner. Apart from that, the code is now more structured in terms of the input parameters used to create services: instead of plain strings, the user has to create a service request explicitly.
Checklist
`make notice` in the proper directory)

Author's Checklist
How to test this PR locally
Running the tests for Fleet (fleet_agent_mode and backend_processes)
Related issues
Follow-ups
There is room for improvement in the installer struct. I did not want to continue refactoring it, to avoid a long-lived branch; we can do it in follow-ups.