
Roadmap items #2423

@gorkem-bwl

Description


This is a WIP issue for tracking the roadmap items. Other issues will be generated/derived from this meta roadmap item.

General

  • Executive dashboard: VerifyWise has basic statistics on the dashboard but lacks executive-level visualizations. Execs need simplified views for governance reporting. The solution is an exec dashboard module. This should include a compliance posture overview showing percentage completion across all active frameworks with trend lines. A risk heat map should visualize risks by category, severity, and business unit. Framework progress cards should show completion percentage, open items, and days since last update for each framework. A key metrics summary should display total controls, evidence items, pending assessments, and overdue tasks. Trend charts should show compliance improvement over selectable time periods (30/60/90 days, YTD). The dashboard should be filterable by project, framework, or business unit. Quick export to PDF for board presentations should be available.
  • External Auditor Portal: External auditors currently need full system access to review compliance documentation, or compliance teams must manually export and share documents. There's no dedicated read-only workspace for auditors with controlled access to relevant evidence. The solution is an external auditor portal. This should be a read-only workspace for external auditors with scoped access. Auditors should be able to view specific frameworks, controls, and evidence without seeing unrelated data. Access should be time-limited with automatic expiration. Activity logging should track what auditors view and download. Comment and question capability should let auditors request additional information. Evidence download should be available with watermarking.
  • Continuous Monitoring: Continuous control monitoring should include evidence freshness tracking, with configurable validity periods per evidence type and automatic alerts when evidence approaches or exceeds expiration. Scheduled control reviews should allow recurring review assignments with escalation for overdue reviews. A compliance drift detection system should identify when control status changes and notify responsible parties. A monitoring dashboard should show real-time control health across all frameworks with drill-down capability.
  • AI compliance scorecard (to be detailed)
  • Custom table fields for entities (models/vendors/risks) (to be detailed)
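The evidence-freshness tracking described in the continuous-monitoring item above boils down to an age check against a per-type validity period, plus a warning window before expiry. A minimal sketch in TypeScript, where the type names, field names, and validity values are all illustrative assumptions rather than the VerifyWise schema:

```typescript
// Hypothetical sketch of evidence-freshness tracking: each evidence type has a
// configurable validity period, and items are flagged as they approach or pass expiry.

type FreshnessState = "fresh" | "expiring" | "expired";

interface EvidenceItem {
  id: string;
  type: string;       // e.g. "pentest-report", "dpia" (illustrative type keys)
  collectedAt: Date;
}

// Validity periods (in days) per evidence type; values are invented for illustration.
const validityDays: Record<string, number> = {
  "pentest-report": 365,
  "dpia": 180,
};

const WARN_WINDOW_DAYS = 30; // alert this many days before expiry (assumed default)

function freshness(item: EvidenceItem, now: Date = new Date()): FreshnessState {
  const limit = validityDays[item.type] ?? 365; // fall back to one year
  const ageDays = (now.getTime() - item.collectedAt.getTime()) / 86_400_000;
  if (ageDays > limit) return "expired";
  if (ageDays > limit - WARN_WINDOW_DAYS) return "expiring";
  return "fresh";
}
```

A scheduled job could run this over the evidence library and raise alerts for anything not "fresh".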

Use-case inventory specific

  • AI Use Case Intake Wizard (approval workflows): Currently, there is no standardized workflow for registering new AI systems or use cases in VerifyWise. Users must manually create entries in the model inventory without guidance on what information to provide or how to assess initial risk levels. The model inventory allows direct creation without structured intake, there is no risk pre-screening before full registration, and there are no workflow stages (draft → review → approved), which leads to inconsistent data quality across AI system records. The solution is a multi-step intake wizard that guides users through AI system registration with built-in risk pre-screening. (Use-case workflow approvals feature #2655)
  • Add vendors to the use-case from the vendors list (to be detailed)
  • Add documents/evidence to the use-case (or link from Confluence) (to be detailed)

Model inventory

  • Lifecycle documentation: Enhance the Model Inventory into a true Model Registry tracking every stage (data collection, training, validation, deployment, monitoring, decommissioning). Require Model Cards or data sheets to be attached for each entry, capturing purpose, data sources, performance metrics, known limitations, and update history.
  • Model Card Generator: The model inventory captures AI system information but there's no standardized export format. Organizations cannot generate model cards in industry-standard formats that regulators and external stakeholders expect. Model cards (Google format, IBM format) are becoming industry standard for AI transparency. The solution is a model card generator. This should auto-generate model cards from model inventory data. Multiple formats can be supported: Google Model Cards, IBM FactSheets, custom organizational templates. Export options can include PDF, Markdown, and JSON formats. Model cards should include intended use, limitations, performance metrics, fairness considerations, and training data summary. Version history should track model card changes over time. Batch generation should create model cards for multiple models simultaneously.
  • Model Lineage Visualization: VerifyWise has MLflow integration for model tracking. On top of that, we need a visual representation of model dependencies, data flows, version history, and relationships between models. Users need to see how models relate to each other and to trace data lineage. Complex AI systems have interconnected models with shared data sources and dependencies, and impact analysis would be easier with lineage visualization. This should be an interactive graph showing model relationships, data sources, and dependencies. Users should be able to trace from data source through transformations to model outputs. A version timeline should show model evolution over time. Integration with MLflow should pull lineage data automatically where available. Export of lineage diagrams for documentation purposes should be supported.
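The model card generator above is essentially a template fill from inventory data. A minimal Markdown-export sketch, where the record fields (purpose, limitations, metrics) are assumed stand-ins for whatever the actual inventory schema provides:

```typescript
// Illustrative sketch: render a Markdown model card from model inventory fields.
// The ModelRecord shape is an assumption, not the actual VerifyWise schema.

interface ModelRecord {
  name: string;
  version: string;
  purpose: string;            // intended use
  limitations: string[];      // known limitations
  metrics: Record<string, number>; // performance metrics
}

function toModelCardMarkdown(m: ModelRecord): string {
  const metricRows = Object.entries(m.metrics)
    .map(([k, v]) => `| ${k} | ${v} |`)
    .join("\n");
  return [
    `# Model Card: ${m.name} (v${m.version})`,
    ``,
    `## Intended use`,
    m.purpose,
    ``,
    `## Known limitations`,
    ...m.limitations.map((l) => `- ${l}`),
    ``,
    `## Performance metrics`,
    `| Metric | Value |`,
    `| --- | --- |`,
    metricRows,
  ].join("\n");
}
```

The same record could feed other renderers (PDF, JSON, a Google Model Cards or IBM FactSheets layout) without changing the inventory side.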

Frameworks/policies

  • Add the ability to add custom policies (to be detailed)

Vendors

  • Automated flagging & monitoring: Implement triggers so that high-risk vendors or lapses auto-create tasks or risk entries. For example, if a critical vendor’s compliance certificate expires, the system flags the vendor and opens a risk issue.

Risks

  • In the UI, link risks to the Evidence Center and Controls. For example, a privacy risk might link to EU AI Act controls and the latest DPIA documents (from the Evidence Center). Automatically prompt for documentation: if a risk is marked “In Progress,” require an evidence attachment and show which controls cover that risk.
  • Quantitative Risk Scoring Engine: Currently, VerifyWise uses qualitative risk levels (High, Medium, Low) without a standardized calculation methodology and risk assessments rely entirely on subjective judgment with no formula. Auditors and boards expect defensible, quantitative risk metrics. Without numerical scoring, organizations cannot prioritize risks objectively. The solution should include inherent risk calculation based on likelihood × impact scoring (configurable scales, e.g., 1-5 or 1-10). Residual risk should be calculated by factoring in control effectiveness ratings. The system needs a risk matrix visualization showing risk distribution across quadrants. Support should be added for both quantitative scores and qualitative labels (derived from score thresholds). Historical risk scores should be tracked to show risk trends over time.
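The scoring model above can be sketched directly: inherent risk is likelihood × impact on a configurable scale, residual risk discounts the inherent score by control effectiveness, and qualitative labels are derived from score thresholds. The threshold percentages below are illustrative, not proposed defaults:

```typescript
// Sketch of the quantitative risk scoring described above. Label thresholds
// (60% / 30% of max score) are assumptions for illustration only.

interface RiskScales {
  likelihoodMax: number; // e.g. 5 or 10
  impactMax: number;     // e.g. 5 or 10
}

function inherentRisk(likelihood: number, impact: number): number {
  return likelihood * impact;
}

function residualRisk(inherent: number, controlEffectiveness: number): number {
  // controlEffectiveness in [0, 1]: 0 = no mitigation, 1 = fully mitigated
  return inherent * (1 - controlEffectiveness);
}

function qualitativeLabel(score: number, scales: RiskScales): "Low" | "Medium" | "High" {
  const max = scales.likelihoodMax * scales.impactMax;
  if (score >= max * 0.6) return "High";
  if (score >= max * 0.3) return "Medium";
  return "Low";
}
```

Storing the numeric score alongside the derived label keeps both audiences served: auditors get a defensible number, dashboards get the familiar High/Medium/Low.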

Training

  • Automated reminders: Add automated reminders and expiry management. For any training with a refresh cycle (e.g. annual compliance training), generate calendar reminders to learners and supervisors when certifications approach expiration. Modern systems “track completions, send alerts, apply expiry rules” to stay audit-ready. For example, if an employee’s AI ethics training is two years old, the system sends a prompt. This requires adding an MJML email template to the system for the reminder sent to the person who added the training. (Training expiry reminders #2654)

Reporting

  • Advanced reports: There is existing reporting functionality, but users need to generate comprehensive compliance reports in formats suitable for board presentations, audit submissions, etc. The solution is comprehensive report export functionality. Report types should include compliance status reports, risk assessment reports, control effectiveness reports, evidence inventory reports, and audit trail reports. Export formats should include PDF (formatted for printing/presentation), Excel (with data for further analysis), and Word (editable for customization). Template customization should allow branding with organization logo and colors. Scheduled reports should auto-generate and email on configurable schedules. Report history should maintain an archive of generated reports with timestamps.
  • Compliance Maturity Scoring: Organizations want to track their compliance journey through defined maturity levels or benchmark against industry standards. They want to demonstrate compliance progression. The solution is a compliance maturity scoring system. This should implement maturity levels (Initial, Developing, Defined, Managed, Optimizing) based on control implementation quality, not just completion. Assessment criteria should be configurable per framework. Maturity dashboards should show current level and path to the next level. Benchmarking should allow comparison against industry averages (anonymized). Improvement recommendations should suggest specific actions to advance maturity level. Historical tracking should show maturity progression over time.
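The maturity-level mapping above is a threshold lookup over some aggregate implementation-quality score. A minimal sketch, where the 0-100 score and the level boundaries are invented for illustration and would be configurable per framework:

```typescript
// Hedged sketch of threshold-based maturity levels. The score range and the
// lower-bound thresholds below are assumptions, not VerifyWise defaults.

const MATURITY_LEVELS = ["Initial", "Developing", "Defined", "Managed", "Optimizing"] as const;
type MaturityLevel = (typeof MATURITY_LEVELS)[number];

// score in [0, 100], e.g. a weighted control-implementation-quality score
function maturityLevel(score: number): MaturityLevel {
  const lowerBounds = [0, 25, 50, 75, 90]; // assumed lower bound of each level
  let level: MaturityLevel = "Initial";
  for (let i = 0; i < lowerBounds.length; i++) {
    if (score >= lowerBounds[i]) level = MATURITY_LEVELS[i];
  }
  return level;
}
```

Keeping thresholds in configuration (per framework) rather than in code would make the "path to the next level" view trivial: the gap to the next lower bound is the remaining distance.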

Evidence center

  • Metadata tagging & versioning: Enhance document uploads with metadata tags (e.g. control ID, project). Allow users to version documents: each upload of a policy or report creates a new version while preserving history. Display version history in the UI with diff notes.
  • AI-Powered Evidence Matching: Currently, users manually attach evidence to controls. They must review control requirements, find relevant evidence, and make the connection themselves (time-consuming and might duplicate efforts). The solution is AI-powered evidence matching. The system should analyze control requirements and suggest relevant evidence from the evidence library. Confidence scores should indicate match quality. Bulk suggestion mode should propose evidence mappings across multiple controls simultaneously. Learning should improve over time based on user acceptance/rejection of suggestions. Duplicate detection should identify when the same evidence satisfies multiple controls. Gap analysis should highlight controls with no matching evidence available.
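The confidence-score idea above can be illustrated with the simplest possible matcher: rank evidence against a control's requirement text by token overlap (Jaccard similarity). A production version would likely use embeddings or an LLM; this sketch only shows the suggest-with-confidence shape, and the cutoff value is an assumption:

```typescript
// Minimal sketch of evidence-matching suggestion scoring via token overlap.
// Not a proposed algorithm for VerifyWise, just an illustration of ranked,
// confidence-scored suggestions.

function tokens(text: string): Set<string> {
  return new Set(text.toLowerCase().match(/[a-z0-9]+/g) ?? []);
}

function jaccard(a: Set<string>, b: Set<string>): number {
  const inter = [...a].filter((t) => b.has(t)).length;
  const union = new Set([...a, ...b]).size;
  return union === 0 ? 0 : inter / union;
}

interface Suggestion {
  evidenceId: string;
  confidence: number; // 0..1, higher = better match
}

function suggestEvidence(
  controlText: string,
  evidence: { id: string; description: string }[],
  minConfidence = 0.1, // assumed cutoff
): Suggestion[] {
  const c = tokens(controlText);
  return evidence
    .map((e) => ({ evidenceId: e.id, confidence: jaccard(c, tokens(e.description)) }))
    .filter((s) => s.confidence >= minConfidence)
    .sort((a, b) => b.confidence - a.confidence);
}
```

Recording user accept/reject decisions against these suggestions would provide the training signal for the "learning improves over time" requirement.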

Done

  • Allow control-level linkage from an uploaded evidence file to a control (Allow control-level linkage of files in Evidence Center #2653)
  • Add NIST AI RMF framework (#2669)
  • Model change history
  • Upgrade the Vendor & Risk module by adding a vendor scorecard. Build on the existing impact×likelihood matrix to include multiple dimensions (e.g. data sensitivity, business criticality, past issues). (Add Vendor Scorecard to Vendor & Risk Module #2657)
  • Ability to have model versioning where each model version will be traced, what changed, by whom, and when
  • EU AI Act classification for the use-case
  • More fields for use cases (#2424)
  • Add analytics section for Models and Risk Management (Add analytics tab for Risks and Model Inventory #2641)
  • Model evidence hub: Introduce a method for managing AI model evidence files (certificates) in VerifyWise. Evidence files can be stored in an “Evidence Hub” and also shown directly inside each model’s detail page.
  • Policy templates: Provide a library of policy templates. When creating a new AI policy, allow users to start from templates (e.g. “AI Ethical Use Policy”, “Model Risk Management” or “Data Privacy” templates) with predefined sections. This speeds authoring and ensures consistency across policies. (Policy templates library #2652)
