
Orchestrating AI in clinical settings with jBPM


jBPM orchestrates external AI, specifically for integrating AI-powered features into clinical settings like stroke prediction with OpenEMR.

In part one and part two of this series, we saw how jBPM can serve as a platform for orchestrating external AI-centric environments, such as Python environments used to design and run AI solutions. This article brings it all together, showing how effectively jBPM can integrate AI-powered features into a clinical setting.

jBPM, OpenEMR and Python for stroke prediction

Stroke Prediction solution deployed in jBPM is run by the users of OpenEMR

A comprehensive illustrative example of jBPM orchestrating Python is our solution for training AI models and estimating stroke risk in a set of patients. In a hospital or clinical setting, physicians use an EMR (Electronic Medical Record) system like open-source OpenEMR (or any other software allowing customization of user screens). Stroke prediction requests may originate from various sections of the EMR, typically within a patient’s file:

A standard patient file screen in OpenEMR with a Health AI Solution Catalog

The physician selects the stroke prediction AI solution and submits a prediction request to jBPM, where the AI logic is deployed:

The patient file screen displaying stroke risk prediction results
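From the EMR side, submitting a prediction request amounts to starting a process instance through jBPM's KIE Server REST API. The sketch below builds such a request; the server URL, container ID, process ID, and variable names are assumptions for illustration, not taken from the actual deployment.

```python
import json

# Hypothetical endpoint and IDs -- substitute those of your own jBPM deployment.
KIE_SERVER = "http://jbpm-host:8080/kie-server/services/rest/server"
CONTAINER_ID = "health-ai-solutions"
PROCESS_ID = "stroke-prediction"

def build_prediction_request(patient: dict) -> tuple:
    """Return the KIE Server URL and JSON body that would start one
    stroke-prediction process instance for the given patient data."""
    url = f"{KIE_SERVER}/containers/{CONTAINER_ID}/processes/{PROCESS_ID}/instances"
    body = json.dumps({
        "age": patient["age"],
        "hypertension": patient["hypertension"],
        "avg_glucose_level": patient["avg_glucose_level"],
        "bmi": patient["bmi"],
    })
    return url, body

url, body = build_prediction_request(
    {"age": 67, "hypertension": 1, "avg_glucose_level": 228.7, "bmi": 36.6}
)
# An EMR integration would POST `body` to `url` (with authentication and
# Content-Type: application/json); jBPM replies with the new process instance ID.
```

The returned instance ID is what the EMR later uses to correlate the prediction result with the originating patient screen.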

jBPM processes the request, orchestrating the AI logic and Python computations, and returns the estimated stroke risk to OpenEMR:

Process instance logging screen in jBPM displaying a prediction request (ID: 977) from a physician
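Inside the process, a service task hands the patient variables to Python for scoring. The toy scorer below stands in for the trained model that jBPM would invoke; the logistic coefficients are made up purely to keep the sketch self-contained.

```python
import math

def predict_stroke_risk(features: dict) -> float:
    """Toy logistic scorer standing in for the trained stroke model that a
    jBPM service task would load and invoke (coefficients are illustrative)."""
    z = (-7.0
         + 0.07 * features["age"]
         + 0.5  * features["hypertension"]
         + 0.01 * features["avg_glucose_level"])
    return 1.0 / (1.0 + math.exp(-z))  # probability in (0, 1)

risk = predict_stroke_risk(
    {"age": 67, "hypertension": 1, "avg_glucose_level": 228.7}
)
```

In the real solution this value is set on a process variable, which jBPM then returns to OpenEMR.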

The stroke risk is initially calculated in jBPM before being returned to OpenEMR for the physician:

The process instance record (ID: 977) in jBPM displays the computed stroke risk to be returned to OpenEMR

This prediction workflow, when visualized end-to-end, appears as follows:

End-to-end flow of stroke prediction across OpenEMR and jBPM

Meanwhile, an analyst monitoring the AI models’ performance can retrain and publish updated models:

Custom Health AI screen in OpenEMR for training AI models via jBPM

Similar to the physician’s workflow, the analyst selects the AI solution and submits a train/validate/test request to jBPM:

Custom Health AI screen displaying outcomes of train/validate/test requests
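A train/validate/test request implies partitioning the patient records into three disjoint sets before fitting and evaluating the model. A minimal sketch of such a split, assuming a 70/15/15 ratio and a fixed seed for reproducibility (both hypothetical choices, not taken from the actual solution):

```python
import random

def split_train_validate_test(records, seed=42, ratios=(0.7, 0.15, 0.15)):
    """Deterministically shuffle records and split them into three disjoint
    sets, mirroring the train/validate/test request an analyst submits."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (shuffled[:n_train],                       # training set
            shuffled[n_train:n_train + n_val],        # validation set
            shuffled[n_train + n_val:])               # held-out test set

train, val, test = split_train_validate_test(list(range(100)))
```

The accuracy metrics jBPM reports back to OpenEMR are computed on the validation and test portions only.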

jBPM logs and executes the training process:

Process instance logging screen in jBPM displaying a train/validate/test request (ID: 978) from an analyst

Results are computed and returned to OpenEMR:

Process instance record in jBPM displaying computed accuracy metrics to be returned to OpenEMR

Load testing confirms that jBPM efficiently handles simultaneous user requests for both prediction and model training, with no bottlenecks and no data leakage between concurrent requests:

OpenEMR screens displaying stroke prediction and model training results from load testing
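A load test of this kind can be approximated by firing prediction and training requests concurrently. The sketch below uses a thread pool against a stub submitter; a real test would replace the stub with the HTTP call to jBPM, and the request counts here are arbitrary.

```python
from concurrent.futures import ThreadPoolExecutor

def submit_request(kind: str, request_id: int) -> dict:
    """Stub for one POST to jBPM; a real load test would issue an
    actual HTTP request here and record its latency and status."""
    return {"kind": kind, "id": request_id, "status": "completed"}

def run_load_test(n_predictions=20, n_trainings=5, workers=8):
    """Submit prediction and training requests concurrently and
    collect their outcomes."""
    jobs = ([("predict", i) for i in range(n_predictions)]
            + [("train", i) for i in range(n_trainings)])
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda job: submit_request(*job), jobs))

results = run_load_test()
```

Checking that every submitted request completes, and that each result still carries its own request ID, is the simplest way to surface both bottlenecks and cross-request data mix-ups.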

The following video shows all of the above working live. We would be happy to help interested readers get it up and running in their own IT landscape: reach out to us for guidance and advice.

For more: C-NLTX/Open-Source

Read Part 1 and Part 2:

Author

  • Sergey Lukyanchikov

I build design-to-market development, deployment, and production accelerators for in-platform computations. I have contributed to the evolution of product offerings at well-known data platform vendors. My LinkedIn: https://www.linkedin.com/in/lukyanchikov/

