
From Measurement to Outcomes: Progress and the Way Forward for Affiliate Trust

A reflection on what has changed since the second edition of the Affiliate Trust & Data Index, contextualised through the aim and value of the Index.

When the first and second editions of the Affiliate Trust & Data Index were published, the goal was clarity. Not rankings for their own sake, but a shared, outcome‑based view of how affiliate platforms actually behave in practice.

This article does not introduce a third edition of the Index. Instead, it reflects on what has genuinely changed since v2, what has not, and where the direction of travel is becoming clearer. It focuses on operational reality rather than roadmap intent, and on trust as it is experienced day to day by affiliates operating at scale.

What progress actually looks like (and what it doesn’t)

The Affiliate Trust & Data Index has always drawn a clear distinction between stated capability and practical access. Trust is not built on what platforms claim to support, but on what affiliates can reliably use in their day‑to‑day operations.

Since v2, progress has been uneven. Some platforms have expanded their technical capabilities, particularly around reporting depth, APIs, and data normalisation. In a small number of cases, this has translated into faster access to performance data, reduced manual reporting, and clearer attribution.

More often, progress has been conditional. Features may exist at platform level but are selectively enabled, restricted by operator configuration, or subject to bespoke commercial arrangements. As a result, affiliates operating on the same platform frequently experience materially different levels of access and consistency.

GGR reporting illustrates this gap clearly. While many platforms technically support granular GGR, it is frequently disabled, delayed, or only partially exposed. From an affiliate perspective, a supported feature that is not enabled by default is functionally irrelevant.

The Index reflects this reality by measuring outcomes rather than intent. It assesses what affiliates can access without exception handling, bespoke intervention, or ongoing negotiation. This explains why some platforms remain positioned in Manual Maze or Easy but Opaque despite feature expansion. Optional transparency is not transparency.

The usability gap: When features exist but workflows don’t

Feature availability alone is a poor indicator of real‑world usability. Many platforms now present long feature lists, yet affiliates continue to encounter friction when attempting to operate at scale.

The issue is not whether functionality exists, but how it is exposed. UI‑led workflows, read‑only APIs, and weak or inconsistent documentation create bottlenecks as operations expand across brands, markets, and partners. Manual processes introduce delay, error, and operational dependency.

Campaign setup, reporting, validation, and reconciliation often require steps that should be programmable. Affiliates are forced to build workarounds, maintain one‑off logic per platform, or fall back to CSV exports. At scale, this becomes operational debt.
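The shape of that operational debt can be sketched concretely. The following is a minimal, illustrative example of the per-platform reconciliation logic affiliates end up maintaining when data arrives as CSV exports rather than via API; the column names and figures are assumptions for illustration, not any platform's actual format.

```python
# Illustrative sketch: reconciling a platform CSV export against an
# affiliate's own tracking. Column names and values are hypothetical.
import csv
import io

# Stand-in for a platform CSV export.
EXPORT = "campaign,clicks,ggr\nA,100,120.0\nB,50,80.0\n"

# What the affiliate's own tracking recorded for the same period.
TRACKED = {"A": 120.0, "B": 95.0}

def reconcile(export_text: str, tracked: dict) -> dict:
    """Return {campaign: (platform_ggr, tracked_ggr)} where figures differ."""
    mismatches = {}
    for row in csv.DictReader(io.StringIO(export_text)):
        platform_ggr = float(row["ggr"])
        tracked_ggr = tracked.get(row["campaign"])
        if tracked_ggr is not None and platform_ggr != tracked_ggr:
            mismatches[row["campaign"]] = (platform_ggr, tracked_ggr)
    return mismatches

print(reconcile(EXPORT, TRACKED))  # {'B': (80.0, 95.0)}
```

Every platform with a different export format requires its own copy of this logic, which is precisely the one-off maintenance burden described above.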

Automation is no longer optional. Managing accounts, campaigns, links, and sub‑affiliate structures at scale requires programmatic control. Where platforms rely on manual configuration or partial APIs, affiliates cannot build stable, repeatable workflows.

This is why usability carries significant weight in the Affiliate Trust & Data Index. A feature that cannot be automated, validated, and reproduced does not translate into operational value, regardless of technical sophistication. Checkbox features do not equal maturity.

The way forward: Shared responsibility, not platform scoring alone

One of the clearest learnings from the first two editions of the Index is that trust outcomes are shaped by more than platform capability alone. Platforms provide infrastructure, but operators determine how that infrastructure is configured and applied.

From an affiliate perspective, trust is not defined by whether a feature exists, but by whether it is accessible, enabled, and reliable by default. Operator defaults increasingly dictate what affiliates can rely on in practice, regardless of what is technically possible.

The value of the Index lies not in ranking platforms, but in establishing shared baselines and a common language. Clear baselines reduce misalignment, prevent trust being framed as a dispute, and embed transparency into everyday operations rather than individual negotiation.

Seen this way, the Affiliate Trust & Data Index functions as a moving benchmark. Its role is to surface where outcomes diverge from intent and to support better alignment between platforms, operators, and affiliates.

Direction of travel: API‑first and AI readiness

This is not a commitment to a timeline, but a clear signal of where expectations are heading.

API‑first as a baseline:

Data that cannot be accessed reliably via API cannot support modern affiliate operations.

UI‑only reporting may still be tolerable for low‑volume or manual use cases, but it does not scale. It cannot be automated, validated consistently, or integrated into broader workflows. As affiliate operations grow across brands, markets, and channels, APIs become the primary interface rather than a secondary convenience.

An API‑first expectation means:
– Reporting data is available via API by default, not on request.
– APIs behave consistently across operators on the same platform.
– Pagination, filtering, and stable schemas are standard, not optional.
– Read and write capabilities exist where operational workflows require them, not only read‑only exports.
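What cursor pagination with a stable schema buys in practice can be sketched briefly. The example below simulates a hypothetical paginated reporting endpoint in-process; the endpoint shape, field names, and figures are illustrative assumptions, not any specific platform's API.

```python
# Illustrative sketch: consuming a cursor-paginated reporting API.
# The pages below stand in for HTTP responses from a hypothetical
# endpoint such as GET /reports/ggr?cursor=...
from typing import Iterator, Optional

_PAGES = {
    None: {"rows": [{"campaign": "A", "ggr": 120.0}], "next_cursor": "p2"},
    "p2": {"rows": [{"campaign": "B", "ggr": 80.0}], "next_cursor": None},
}

def fetch_page(cursor: Optional[str]) -> dict:
    """Stand-in for one HTTP request to the reporting endpoint."""
    return _PAGES[cursor]

def iter_report_rows() -> Iterator[dict]:
    """Follow the cursor through every page until the API signals the end."""
    cursor: Optional[str] = None
    while True:
        page = fetch_page(cursor)
        yield from page["rows"]
        cursor = page["next_cursor"]
        if cursor is None:
            break

total_ggr = sum(row["ggr"] for row in iter_report_rows())
print(total_ggr)  # 200.0
```

Because pagination and the row schema are predictable, this loop works unchanged regardless of volume; a UI export offers no equivalent contract.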

Dashboards remain useful, but they are no longer the source of truth. They are a presentation layer on top of data designed to be consumed programmatically.

AI readiness in practical terms:

AI readiness is often misunderstood as the presence of AI features inside a platform. That is not the focus here.

From an Affiliate Trust & Data Index perspective, AI readiness is about whether a platform’s data foundations can safely and reliably support automation, optimisation, and decision‑making without manual intervention.

In practical terms, this means:
– Structured and well‑defined data models.
– Consistent field definitions across operators and accounts.
– Predictable data availability with clearly defined refresh behaviour.
– Stable, versioned, and well‑documented APIs suitable for automated consumption.
– Security and auditability that allow data to be used confidently in automated workflows.
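The first two of these foundations amount to something checkable in code. The sketch below shows the kind of schema validation an automated pipeline would run before any data feeds a decision; the field names are illustrative assumptions, not a real platform schema.

```python
# Illustrative sketch: validating rows against an expected schema before
# automated consumption. Field names here are hypothetical.
EXPECTED_SCHEMA = {"campaign": str, "market": str, "ggr": float}

def validate_row(row: dict) -> list:
    """Return a list of problems; an empty list means the row is usable."""
    problems = []
    for field, expected_type in EXPECTED_SCHEMA.items():
        if field not in row:
            problems.append(f"missing field: {field}")
        elif not isinstance(row[field], expected_type):
            problems.append(
                f"wrong type for {field}: {type(row[field]).__name__}"
            )
    return problems

good = validate_row({"campaign": "A", "market": "UK", "ggr": 120.0})
bad = validate_row({"campaign": "A", "ggr": "120"})
print(good)  # []
print(bad)   # ['missing field: market', 'wrong type for ggr: str']
```

When field definitions drift between operators on the same platform, checks like this fail constantly, and automation built on top of the data cannot be trusted.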

Without these foundations, AI‑driven optimisation becomes guesswork. Automation amplifies inconsistency rather than insight, increasing the risk of incorrect decisions at scale.

This is not a call for more features. It is a call for stronger foundations.

Why this still isn’t v3

This article deliberately avoids introducing new scores or re‑ranking platforms. While capability has improved in places, the structural conditions required for a meaningful third edition of the Affiliate Trust & Data Index are not yet in place.

A v3 should not be triggered by incremental feature additions or roadmap claims. It should reflect a clear shift in how platforms operate by default, not by exception. Two gating conditions have emerged as decisive.

First, API‑first must move from aspiration to norm. Core reporting and operational data should be accessible via stable, well‑documented APIs by default, not negotiated on a per‑operator basis. UI‑only access, partial APIs, or read‑only integrations remain signals of limited operational maturity. Until API access is consistent and reliable across operators on the same platform, re‑scoring risks overstating progress.

Second, AI readiness must be real, not implied. This does not mean platforms offering AI features, but platforms providing data foundations that can safely support automation and decision‑making at scale. Consistent schemas, predictable refresh behaviour, versioned APIs, and auditable data flows are prerequisites. Without them, automation amplifies inconsistency rather than value.

Until these conditions are met broadly across the ecosystem, a new edition would change little beyond optics. The more honest contribution is to describe progress accurately, clarify where the gaps remain, and make explicit what would constitute meaningful improvement.

When API‑first access and practical AI readiness become default rather than exception, a v3 will be warranted. Until then, the Affiliate Trust & Data Index remains a living benchmark, focused on outcomes rather than intent.

This article was authored by Ian Wright, Aidan Tanti, and Keith Cassar.



Keith Cassar
CTO
Published on January 18, 2026