support reloading workflows at runtime #3156

@reubenmiller

Description

Is your feature improvement request related to a problem? Please describe.

With the upcoming feature to support mapping custom Cumulocity IoT operations to local thin-edge.io commands (see #3095), more users will be encouraged to deploy new workflows to their devices. The typical way to package a new custom operation handler would be as a Linux package, where the custom operation definition typically consists of:

  • custom cloud operation handler definition (this will be a new type), e.g. /etc/tedge/operations/c8y/c8y_Command.template
  • workflow definition for the new functionality, e.g. /etc/tedge/operations/execute_shell.toml
  • some custom script/binary which is called from the workflow, e.g. /usr/bin/execute_shell.sh

Deploying a package would typically be done at runtime using thin-edge.io's Software Management plugin (e.g. tedge-apt-plugin). However, if a new workflow is created, the tedge-agent service currently must be restarted before it detects the new workflow installed by the Linux package. Restarting tedge-agent from within the Software Management plugin by way of maintainer scripts is not feasible, as it would kill the script's parent process, which in turn would kill the maintainer script itself.

Describe the solution you'd like

tedge-agent should support a way for users to tell the service to reload the workflows from disk.

A typical Linux way to tell an application about new configuration is to reload the configuration when a SIGHUP signal is received. This way the user can control when the workflows are re-read from disk.

For example, the following command could be used to trigger a reload:

kill -s HUP <pid>
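The mechanism can be illustrated end-to-end with a self-contained sketch (all file names and the "agent" loop here are made up for the demo): a background process loads a "workflow" file at startup and re-loads it whenever it receives SIGHUP, mimicking the proposed tedge-agent behaviour.

```shell
#!/bin/sh
# Illustrative sketch of the SIGHUP reload pattern (names are hypothetical).
WORKFLOW=$(mktemp)
LOADED=$(mktemp)
echo "workflow-v1" > "$WORKFLOW"

# Background "agent": load the workflow on start, reload it on SIGHUP.
(
  trap 'cp "$WORKFLOW" "$LOADED"' HUP
  cp "$WORKFLOW" "$LOADED"
  while :; do sleep 0.2; done
) &
AGENT_PID=$!
sleep 1

# A package install drops an updated workflow on disk...
echo "workflow-v2" > "$WORKFLOW"
# ...and the installer asks the running "agent" to re-read it,
# without restarting it:
kill -s HUP "$AGENT_PID"
sleep 1

cat "$LOADED"   # the "agent" now sees workflow-v2
kill "$AGENT_PID"
```

The key property for the Software Management use case is that the "agent" process keeps running throughout, so a maintainer script could trigger the reload without killing its own parent.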

And by supporting SIGHUP, the tedge-agent systemd service definition could support the systemctl reload tedge-agent syntax by adding the following field:

[Service]
ExecReload=/usr/bin/kill -HUP $MAINPID
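As a concrete sketch, this line could also be shipped as a systemd drop-in, so the packaged unit file stays untouched (the drop-in path and unit name are assumptions):

```ini
# /etc/systemd/system/tedge-agent.service.d/reload.conf (assumed path)
[Service]
ExecReload=/usr/bin/kill -HUP $MAINPID
```

After running systemctl daemon-reload, a systemctl reload tedge-agent would then deliver SIGHUP to the agent's main process.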

Open Questions

  • How should the current workflow react if it is changed whilst processing an operation? Should an in-memory model be preserved for the "in-progress" workflow? And how would that affect the device profile? It might be useful for users to install a new workflow and then use that workflow in the same device profile operation (as long as the new workflow only affects a sub-workflow and not the device_profile itself).

Describe alternatives you've considered

Alternatively, the workflows could be loaded whenever a change is detected (e.g. via inotify). However, this presents a challenge when adding multiple workflow files that have inter-dependencies (e.g. workflow A calls workflow B): you don't want to load workflow A before workflow B exists on disk.

Additional context
