
Update to the latest version of WP AI Client #242

Closed
dkotter wants to merge 19 commits into WordPress:develop from dkotter:update/wp-ai-client

Conversation

@dkotter
Collaborator

@dkotter dkotter commented Feb 20, 2026

What?

Closes #241

Updates the wp-ai-client from 0.2.1 to 0.3.0. This also brings in the latest version of the php-ai-client, going from 0.3.1 to 1.1.0. The biggest change required here is that AI providers are no longer bundled in php-ai-client, so we now pull those providers in via Composer.

Why?

Ensures we're running the latest version of required dependencies.

How?

  • Bumped the version number for wp-ai-client in composer.json and ran composer update wordpress/wp-ai-client
  • Needed to pull in the AI providers, so ran composer require wordpress/anthropic-ai-provider, composer require wordpress/google-ai-provider and composer require wordpress/openai-ai-provider
  • Added some code to ensure those providers were all registered prior to loading the WP AI Client, which ensures the AI Credentials settings page renders settings for those providers. Also needed to initialize the HTTP transport (WP_AI_Client_Discovery_Strategy::init()) since that normally happens in the AI Client init, but we need it before that
  • Fixed a new PHPStan error around types
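
For reference, the dependency changes listed above amount to the following Composer commands (package names as given in this PR):

```shell
# Pull in the AI providers, now separate packages since they are no
# longer bundled in php-ai-client.
composer require wordpress/anthropic-ai-provider
composer require wordpress/google-ai-provider
composer require wordpress/openai-ai-provider

# Update the client itself after bumping its constraint in composer.json.
composer update wordpress/wp-ai-client
```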

Use of AI Tools

The majority of changes were done by me. Used Cursor running Composer 1.5 to fix some linting and test failures.

Testing Instructions

Ensure the AI Credentials page still shows settings for Anthropic, Google and OpenAI.

Test all Experiments to ensure they still work as expected.

Open WordPress Playground Preview

@dkotter dkotter added this to the 0.4.0 milestone Feb 20, 2026
@dkotter dkotter self-assigned this Feb 20, 2026
@github-actions

github-actions bot commented Feb 20, 2026

The following accounts have interacted with this PR and/or linked issues. I will continue to update these lists as activity occurs. You can also manually ask me to refresh this list by adding the props-bot label.

If you're merging code through a pull request on GitHub, copy and paste the following into the bottom of the merge commit message.

Co-authored-by: dkotter <dkotter@git.wordpress.org>
Co-authored-by: jeffpaul <jeffpaul@git.wordpress.org>
Co-authored-by: Jameswlepage <isotropic@git.wordpress.org>
Co-authored-by: felixarntz <flixos90@git.wordpress.org>
Co-authored-by: soderlind <pers@git.wordpress.org>

To understand the WordPress project's expectations around crediting contributors, please review the Contributor Attribution page in the Core Handbook.

@codecov

codecov bot commented Feb 20, 2026

Codecov Report

❌ Patch coverage is 6.34921% with 59 lines in your changes missing coverage. Please review.
✅ Project coverage is 55.16%. Comparing base (4fbb197) to head (66cde4b).
⚠️ Report is 43 commits behind head on develop.

Files with missing lines Patch % Lines
...es/Abilities/Title_Generation/Title_Generation.php 0.00% 16 Missing ⚠️
includes/Abilities/Image/Generate_Image.php 0.00% 12 Missing ⚠️
includes/Provider_Loader.php 0.00% 10 Missing ⚠️
includes/Client_Loader.php 0.00% 8 Missing ⚠️
includes/bootstrap.php 0.00% 7 Missing ⚠️
...bilities/Excerpt_Generation/Excerpt_Generation.php 0.00% 1 Missing ⚠️
includes/Abilities/Image/Alt_Text_Generation.php 0.00% 1 Missing ⚠️
includes/Abilities/Image/Generate_Image_Prompt.php 0.00% 1 Missing ⚠️
includes/Abilities/Summarization/Summarization.php 0.00% 1 Missing ⚠️
includes/compat.php 66.66% 1 Missing ⚠️
... and 1 more
Additional details and impacted files
@@              Coverage Diff              @@
##             develop     #242      +/-   ##
=============================================
- Coverage      56.69%   55.16%   -1.53%     
- Complexity       505      520      +15     
=============================================
  Files             32       35       +3     
  Lines           2568     2650      +82     
=============================================
+ Hits            1456     1462       +6     
- Misses          1112     1188      +76     
Flag Coverage Δ
unit 55.16% <6.34%> (-1.53%) ⬇️



Comment on lines +267 to +280
for ( $i = 0; $i < $candidates; $i++ ) {
	$result = AI_Client::prompt_with_wp_error( '"""' . $context . '"""' )
		->using_system_instruction( $this->get_system_instruction() )
		->using_temperature( 0.5 )
		->using_model_preference( ...get_preferred_models_for_text_generation() )
		->generate_text();

	// Return immediately if we have an error.
	if ( is_wp_error( $result ) ) {
		return $result;
	}

	$results[] = $result;
}
Collaborator Author

The newest version of the PHP AI Client now uses the OpenAI Responses API instead of the Chat Completions API. That API no longer supports returning multiple results:

Additionally, Chat Completions can return multiple parallel generations as choices, using the n param. In Responses, we’ve removed this param, leaving only one generation

I've updated here to run multiple requests to get multiple results which maintains backward compatibility. We can remove this if we want to just return a single title result.

Member

What's the rough difference in token count between getting 3 results from Chat Completions versus 3 requests to Responses? If it's noticeable, then I might be OK with reducing that down to a single result, as they can always regenerate.

Collaborator Author

Running a quick test: using Chat Completions to get three results used 1067 total tokens. Using this new approach, a single request used 1039 total tokens, so 3117 total for all three requests. Not surprisingly, that's about a 3x increase in token usage, though the numbers are still so small that we're unlikely to see any impact on actual cost.

That said, this approach forces multiple requests for all provider/model combinations, and there are certainly some that would support the old approach (Google does, for instance; Anthropic does not). So I think there are three options we can discuss here:

  1. Only return a single result regardless of the model being used
  2. If multiple results are wanted, always make multiple requests regardless of what the model supports
  3. If multiple results are wanted, check what the model supports: if it supports returning multiple candidates, keep the old approach. If it doesn't support multiple (like OpenAI or Anthropic), make multiple requests to get the number of results desired
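
Option 3 could look roughly like this (model_supports_multiple_candidates() and request_titles() are hypothetical helpers for illustration, not real php-ai-client methods):

```php
<?php
// Sketch of option 3; both helper functions below are hypothetical.
function generate_titles( string $context, int $candidates ): array {
	if ( model_supports_multiple_candidates() ) {
		// One request returning all candidates (e.g. Google supports this).
		return request_titles( $context, $candidates );
	}

	// Fallback for providers without multi-candidate support (e.g. the
	// OpenAI Responses API or Anthropic): one request per desired result.
	$results = array();
	for ( $i = 0; $i < $candidates; $i++ ) {
		$results[] = request_titles( $context, 1 )[0];
	}
	return $results;
}
```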

Member

For now I'm fine with option 1, especially if that's filterable, as it keeps token usage low and someone can always mash the regenerate button for as many additional suggestions as they'd want.

Collaborator Author

Right now the title generation Ability allows you to pass in a candidates parameter (limited between 1 and 10). This is used to determine how many titles to generate. In our case, we set this to 3 when we use this Ability, though others can also use this Ability and set that to something else.

Because of this, I think the ideal here is to keep that parameter, which also means we need to keep this code change, but we can change our integration to only request 1 result to limit token usage.
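
The 1–10 bound mentioned above could be enforced with a simple clamp (illustrative only, not the plugin's actual code):

```php
<?php
// Illustrative clamp for the candidates parameter described above
// (the Ability limits it to between 1 and 10).
function clamp_candidates( int $requested ): int {
	return max( 1, min( 10, $requested ) );
}
```

The plugin's own integration would then pass 1 (or 3) here, while other consumers of the Ability choose their own value.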

Comment on lines +283 to +284
// Ensure the HTTP transporter is initialized.
WP_AI_Client_Discovery_Strategy::init();
Collaborator Author

@JasonTheAdams @felixarntz After updating to the latest version of the WP AI Client (which pulls in the latest PHP AI Client), we have to pull in each provider and initialize it, which I'm doing here.

I ran into an issue, though, where the HTTP transport wasn't set up, so requests wouldn't work. I've added this line here and it fixes it, but curious if there's something I'm missing or a better approach? I know the init method of the AI_Client runs this same code, but it seems we need to initialize providers prior to initializing the client, so I'm duplicating that here.
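
For context, the boot order being discussed looks roughly like this. Only WP_AI_Client_Discovery_Strategy::init() is a method named in this PR; the provider registration calls are placeholders, not the real wp-ai-client API:

```php
<?php
// Rough sketch of the boot order described above (placeholders, not the
// plugin's actual code).

// 1. The HTTP transport must exist before providers make requests, so
//    initialize it here even though AI_Client's init runs the same code.
WP_AI_Client_Discovery_Strategy::init();

// 2. Register each Composer-installed provider before the client loads,
//    so the AI Credentials settings page renders their settings.
// register_ai_provider( Anthropic_Provider::class ); // placeholder
// register_ai_provider( Google_Provider::class );    // placeholder
// register_ai_provider( Openai_Provider::class );    // placeholder

// 3. Only then initialize the WP AI Client itself.
```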

@dkotter
Collaborator Author

dkotter commented Feb 20, 2026

Currently getting a fatal error when run on WP Trunk:

Fatal error: Declaration of WordPress\AI_Client\HTTP\WordPress_HTTP_Client::sendRequestWithOptions(Psr\Http\Message\RequestInterface $request, WordPress\AiClient\Providers\Http\DTO\RequestOptions $options): Psr\Http\Message\ResponseInterface must be compatible with WordPress\AiClient\Providers\Http\Contracts\ClientWithOptionsInterface::sendRequestWithOptions(WordPress\AiClientDependencies\Psr\Http\Message\RequestInterface $request, WordPress\AiClient\Providers\Http\DTO\RequestOptions $options): WordPress\AiClientDependencies\Psr\Http\Message\ResponseInterface in /var/www/html/wp-content/plugins/ai/vendor/wordpress/wp-ai-client/includes/HTTP/WordPress_HTTP_Client.php on line 97

I think this is likely due to conflicts between the version of the WP AI Client in trunk compared to the version we load in this plugin. This needs to be resolved prior to 7.0 being released, but I don't think it needs to be resolved in this PR; open to opinions on that.
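
For illustration, here is a minimal, self-contained reproduction of that class of fatal, with deliberately simplified namespaces standing in for the unscoped and "AiClientDependencies"-prefixed copies of PSR-7's RequestInterface. The structurally identical interfaces are unrelated types as far as PHP is concerned, so the declaration is rejected; note this snippet fatals by design:

```php
<?php
// Two structurally identical interfaces in different namespaces.
namespace Psr\Http\Message {
	interface RequestInterface {}
}
namespace Scoped\Psr\Http\Message {
	interface RequestInterface {}
}
namespace Lib {
	// The library contract type-hints the scoped copy.
	interface ClientInterface {
		public function send( \Scoped\Psr\Http\Message\RequestInterface $request );
	}
}
namespace Plugin {
	// The implementation type-hints the unscoped copy, so PHP raises:
	// "Declaration of Plugin\Client::send(...) must be compatible with ..."
	class Client implements \Lib\ClientInterface {
		public function send( \Psr\Http\Message\RequestInterface $request ) {}
	}
}
```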

@jeffpaul
Member

I think this is likely due to conflicts between the version of the WP AI Client in trunk compared to the version we load in this plugin. This needs resolved prior to 7.0 being released but I don't think it needs to be resolved in this PR, but open to opinions on that.

Assuming that's coming from tests, we can probably ignore trunk results for now (I know that's risky in case something else is failing), but until we update to ONLY use the client in Core, we'll have issues. And we know from the 0.1.0 launch that if we set 7.0 as the min and then focus on the AI client in Core, we'll also have issues there, and I suspect keeping 6.9 as min and 7.0 as tested up to will also have things fail. Sigh.

@dkotter
Collaborator Author

dkotter commented Feb 20, 2026

Assuming that's coming from tests, we can probably ignore trunk results for now

The same fatal error happens if you try running this branch on trunk right now, so it's not just failing in tests. A fatal doesn't occur when running the latest release of this plugin on trunk, though no provider settings show up, so it can't actually be used.

I think there's a discussion to be had around whether we want to set 7.0 as our minimum and just remove all the WP AI Client code from this plugin, or whether we want to try and support 6.9 still. And then we need to switch to using the new wp_ai_client_prompt function. So I'd say we should tackle those in a follow-up PR.

@Jameswlepage
Contributor

I think there's a discussion to be had around whether we want to set 7.0 as our minimum and just remove all the WP AI Client code from this plugin, or whether we want to try and support 6.9 still.

I think that we should make 7.0 the minimum supported version for AI features in WP.

@felixarntz
Member

I think that we should make 7.0 the minimum supported version for AI features in WP.

+1

With the WP AI Client now merged into Core, the project will effectively be discontinued. Only the PHP AI Client will be maintained as an external library, but even that should not be required by WordPress plugins via Composer, because it's bundled in Core.

@Jameswlepage
Contributor

@felixarntz what are your thoughts on the transitory period between now and then?

@jeffpaul
Member

I think that we should make 7.0 the minimum supported version for AI features in WP.

I'm fine with that change closer to / just after RC1, but until then it makes the plugin unusable on current major installs, which I'm not super keen on.

@felixarntz
Member

felixarntz commented Feb 21, 2026

I think this thread has what we should do: https://wordpress.slack.com/archives/C08TJ8BPULS/p1771606845472169?thread_ts=1771606845.472169&cid=C08TJ8BPULS

Release another version of wp-ai-client that shims the wp_ai_client_prompt() function for future compat of people coding against it.

Plus a check that, if the wp_ai_client_prompt() function already exists (i.e. 7.0 branch), we skip loading pretty much everything and rely on the Core implementation.

And then once WordPress 7.0 reaches RC, the AI plugin would switch over to the Core implementation entirely. It would need to ensure that sites that update would then be prompted to (re-)install the provider plugins, which so far are being bundled.

@dkotter
Collaborator Author

dkotter commented Feb 21, 2026

Release another version of wp-ai-client that shims the wp_ai_client_prompt() function for future compat of people coding against it.

Plus a check that, if the wp_ai_client_prompt() function already exists (i.e. 7.0 branch), we skip loading pretty much everything and rely on the Core implementation.

Okay, I think I have this set up now. I've added new Client_Loader and Provider_Loader classes that will load the AI Client and our providers if the wp_ai_client_prompt function doesn't exist. This ensures things continue to work as normal on WP 6.9.

This also resolves the fatal error we were getting on 7.0-beta1, though note you still can't test the plugin on beta1 as the settings screen doesn't exist there yet (and you'll need to manually install the provider plugins).

I've also added in a compat.php file that shims the wp_ai_client_prompt function. This file only loads if the function doesn't exist (so on WP 6.9). I've then switched over all of our prompt requests to use this function instead of running AI_Client::prompt_with_wp_error.
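
The shim described above could look roughly like this (an illustrative sketch only; the actual signature of wp_ai_client_prompt() in Core, and the AI_Client namespace shown, may differ from what ships in this PR):

```php
<?php
// Illustrative sketch of the compat.php shim described above. Only
// defines the function on WP 6.9, where Core does not provide it.
if ( ! function_exists( 'wp_ai_client_prompt' ) ) {
	/**
	 * Shim for Core's wp_ai_client_prompt() on WordPress 6.9, backed by
	 * the bundled WP AI Client.
	 *
	 * @param string $prompt The prompt text.
	 * @return mixed Prompt builder, or WP_Error on failure.
	 */
	function wp_ai_client_prompt( $prompt ) {
		return \WordPress\AI_Client\AI_Client::prompt_with_wp_error( $prompt );
	}
}
```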

There is overlap between what we're doing here and what is being done in #249 so I think need to decide on which approach we want to take (happy to close this PR out and focus on #249 if we want).

@soderlind

soderlind commented Feb 28, 2026

@jeffpaul, before you merge this, please look at my findings in #261. I used AI Experiments to test my AI Provider for Azure OpenAI and had a fight.

@jeffpaul
Member

jeffpaul commented Mar 2, 2026

@dkotter not sure if we want to bump to https://github.com/WordPress/wp-ai-client/releases/tag/0.4.0 now, or wait to do that in 0.5.0 when we switch to 7.0 for minimum/tested-up-to in the Experiments plugin?

@dkotter
Collaborator Author

dkotter commented Mar 3, 2026

As mentioned here, we're going to keep dependencies where they're at for the next release. For the release after, we'll be removing these dependencies altogether and setting WP 7.0 as our minimum. As such, I'm going to close this PR out and pull the relevant bits into a new PR focused exclusively on WP 7.0 compat.

@dkotter dkotter closed this Mar 3, 2026
@dkotter dkotter removed this from the 0.4.0 milestone Mar 3, 2026


Successfully merging this pull request may close these issues.

Update to WP AI Client 0.3.0

5 participants