Update to the latest version of WP AI Client #242
dkotter wants to merge 19 commits into WordPress:develop
Conversation
…ewest PHP AI Client
…iders so they can connect properly
The following accounts have interacted with this PR and/or linked issues. I will continue to update these lists as activity occurs.

If you're merging code through a pull request on GitHub, copy and paste the following into the bottom of the merge commit message. To understand the WordPress project's expectations around crediting contributors, please review the Contributor Attribution page in the Core Handbook.
Codecov Report

Additional details and impacted files:

```
@@             Coverage Diff              @@
##             develop     #242      +/-  ##
=============================================
- Coverage      56.69%   55.16%    -1.53%
- Complexity       505      520       +15
=============================================
  Files             32       35        +3
  Lines           2568     2650       +82
=============================================
+ Hits            1456     1462        +6
- Misses          1112     1188       +76
```

☔ View full report in Codecov by Sentry.
…ponses API doesn't support candidates
```php
for ( $i = 0; $i < $candidates; $i++ ) {
	$result = AI_Client::prompt_with_wp_error( '"""' . $context . '"""' )
		->using_system_instruction( $this->get_system_instruction() )
		->using_temperature( 0.5 )
		->using_model_preference( ...get_preferred_models_for_text_generation() )
		->generate_text();

	// Return immediately if we have an error.
	if ( is_wp_error( $result ) ) {
		return $result;
	}

	$results[] = $result;
}
```
The newest version of the PHP AI Client now uses the OpenAI Responses API instead of the Chat Completions API. That API no longer supports returning multiple results:
> Additionally, Chat Completions can return multiple parallel generations as choices, using the `n` param. In Responses, we've removed this param, leaving only one generation.
I've updated here to run multiple requests to get multiple results, which maintains backward compatibility. We can remove this if we want to just return a single title result.
What's the rough difference in token count between getting 3 results from Chat Completions versus 3 requests to Responses? If it's noticeable, then I might be OK with reducing that down to a single result, as they can always regenerate.
Running a quick test, using Chat Completions to get three results used 1067 total tokens. Using this new approach, a single request used 1039 total tokens, so 3117 total for all three requests. So, not surprisingly, about a 3x increase in token usage, though the numbers are still small enough that we're unlikely to see any impact on actual cost.
That said, this approach forces this for all provider/model combinations, and there are certainly some that would support the old approach (Google does, for instance; Anthropic does not). So I think there are three options we can discuss here:

- Only return a single result regardless of the model being used
- If multiple results are wanted, always make multiple requests regardless of what the model supports
- If multiple results are wanted, check what the model supports and, if it supports returning multiple candidates, keep things with the old approach. If it doesn't support multiple (like OpenAI or Anthropic), make multiple requests to get the number of results desired
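If we went with the third option, the shape would be roughly the following. This is a sketch only: `supports_multiple_candidates()` and `using_candidate_count()` are hypothetical stand-ins for whatever capability metadata and candidate API the PHP AI Client actually exposes.

```php
// Sketch of option 3. The capability check and candidate-count setter
// below are hypothetical placeholders, not real PHP AI Client methods.
if ( $model->supports_multiple_candidates() ) {
	// Old approach: a single request returning $candidates results.
	$results = $prompt
		->using_candidate_count( $candidates )
		->generate_texts();
} else {
	// Fallback for providers like OpenAI or Anthropic: one request
	// per desired result.
	$results = array();
	for ( $i = 0; $i < $candidates; $i++ ) {
		$result = $prompt->generate_text();

		// Bail on the first error, matching the existing code.
		if ( is_wp_error( $result ) ) {
			return $result;
		}

		$results[] = $result;
	}
}
```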
For now I'm fine with option 1, especially if that's filterable, as it keeps token usage low and someone can always mash the regenerate button for as many additional suggestions as they'd want.
Right now the title generation Ability allows you to pass in a candidates parameter (limited between 1 and 10). This is used to determine how many titles to generate. In our case, we set this to 3 when we use this Ability, though others can also use this Ability and set it to something else.

Because of this, I think the ideal here is to keep that parameter, which also means we need to keep this code change, but we can change our integration to only return 1 result to limit token usage.
includes/bootstrap.php
```php
// Ensure the HTTP transporter is initialized.
WP_AI_Client_Discovery_Strategy::init();
```
@JasonTheAdams @felixarntz After updating to the latest version of the WP AI Client (which pulls in the latest PHP AI Client), we have to pull in each provider and initialize those, which I'm doing here.

I ran into an issue, though, where the HTTP transport wasn't set up, so requests wouldn't work. I've added this line here and it fixes it, but curious if there's something I'm missing or a better approach? I know the init method of the AI_Client runs this same code, but it seems we need to initialize providers prior to initializing the client, so I'm duplicating that here.
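For context, the ordering problem described here looks roughly like this in the bootstrap file. The provider registration calls are illustrative placeholders (the exact registration API comes from the individual provider packages); the key point is that the transporter has to be ready before anything issues a request:

```php
// Illustrative bootstrap ordering; the provider registration calls
// are placeholders for whatever each provider package actually exposes.

// 1. Register each provider so it can connect properly.
Anthropic_Provider::register();
Google_Provider::register();
OpenAI_Provider::register();

// 2. Ensure the HTTP transporter is initialized. AI_Client::init()
// runs this same code, but since providers must be set up before the
// client is initialized, it's duplicated here.
WP_AI_Client_Discovery_Strategy::init();
```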
Currently getting a fatal error when run on WP trunk:

I think this is likely due to conflicts between the version of the WP AI Client in trunk compared to the version we load in this plugin. This needs to be resolved prior to 7.0 being released, but I don't think it has to happen in this PR; open to opinions on that.
Assuming that's coming from tests, we can probably ignore trunk results for now (I know that's risky in case something else is failing), but until we update to ONLY use the client in Core we'll have issues. And we know from the 0.1.0 launch that if we set 7.0 as the min to then focus on the AI client in Core, we'll also have issues there, and I suspect keeping 6.9 as min and 7.0 as tested up to will also have things fail. Sigh.
The same fatal error happens if you try running this branch on 7.0-beta1. I think the discussion is around whether we want to set 7.0 as our minimum and just remove all the WP AI Client code from this plugin, or whether we want to try and support 6.9 still. And then we need to switch to using the new Core implementation.
I think that we should make 7.0 the minimum supported version for AI features in WP. |
+1 With the WP AI Client now merged into Core, the project will effectively be discontinued. Only the PHP AI Client will be maintained as an external library, but even that should not be required by WordPress plugins via Composer, because it's bundled in Core.
@felixarntz what are your thoughts on the transitory period between now and then?
I'm fine with that change closer to / just after RC1, but until then it makes the plugin unusable on current major installs, which I'm not super keen on.
I think this thread has what we should do: https://wordpress.slack.com/archives/C08TJ8BPULS/p1771606845472169?thread_ts=1771606845.472169&cid=C08TJ8BPULS

Release another version of the plugin, plus a check that, if the client is already available in Core, the bundled copy isn't loaded.

And then once WordPress 7.0 reaches RC, the AI plugin would switch over to the Core implementation entirely. It would need to ensure that sites that update are then prompted to (re-)install the provider plugins, which so far are being bundled.
…t doesn't exist, allowing us to use this function on WP 6.9. Load this compat file if we need it
Okay, I think I have this set up now. I've added a new compat file that is loaded only when needed, allowing things to work on WP 6.9.

This also resolves the fatal error we were getting on 7.0-beta1, though note you still can't test the plugin on beta1 as the settings screen doesn't exist there yet (and you'll need to manually install the provider plugins).

There is overlap between what we're doing here and what is being done in #249, so I think we need to decide which approach we want to take (happy to close this PR out and focus on #249 if we want).
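Based on the commit message above, the compat file presumably follows the standard WordPress guard pattern: define the function only when Core doesn't already provide it. The function name below is a placeholder, not the actual function being shimmed:

```php
// compat.php — loaded on WP 6.9, where the Core function doesn't
// exist yet. wp_example_function() is a placeholder name for the
// real function being shimmed.
if ( ! function_exists( 'wp_example_function' ) ) {
	function wp_example_function() {
		// Minimal fallback so the plugin can call this on 6.9;
		// on 7.0 the Core definition wins and this is never declared.
	}
}
```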
…date the AI providers to their new repo
@jeffpaul, before you merge this, please look at my findings in #261.
@dkotter not sure if we want to bump to https://github.com/WordPress/wp-ai-client/releases/tag/0.4.0 now or wait to do that in 0.5.0 when we switch to 7.0 for minimum/tested-up-to in the Experiments plugin?
As mentioned here, we're going to keep dependencies where they're at for the next release. For the release after, we'll be removing these dependencies altogether and setting WP 7.0 as our minimum. As such, I'm going to close this PR out and pull the relevant bits into a new PR focused exclusively on WP 7.0 compat.
What?
Closes #241
Updates the `wp-ai-client` from 0.2.1 to 0.3.0. This also brings in the latest version of the `php-ai-client`, going from 0.3.1 to 1.1.0. The biggest change required here is that AI providers are no longer bundled in `php-ai-client`, so we also pull in those providers via Composer.

Why?
Ensures we're running the latest version of required dependencies.
How?
- Updated `wp-ai-client` in composer.json and ran `composer update wordpress/wp-ai-client`
- Ran `composer require wordpress/anthropic-ai-provider`, `composer require wordpress/google-ai-provider` and `composer require wordpress/openai-ai-provider`
- Manually initialize the HTTP transporter (`WP_AI_Client_Discovery_Strategy::init()`) since that happens in the AI Client init but we need this before that

Use of AI Tools
The majority of changes were done by me. Used Cursor running Composer 1.5 to fix some linting and test failures.
Testing Instructions
Ensure the AI Credentials page still shows settings for Anthropic, Google and OpenAI.
Test all Experiments to ensure they still work as expected.