Limit memory used by tokeniser #16621
Closed
chrisdavenport wants to merge 1 commit into joomla:staging from
Conversation
mbabker approved these changes on Jun 30, 2017
Member

This has since become outdated. Thanks for your work; however, the tokeniser has been improved in other ways which hopefully make this PR obsolete.
Member

Closing. Thank you for the work. Please check the tokeniser in the latest nightly build.
Note: This replaces and is the same as #12649 but without the branch merge that I was unable to recover from. It includes the suggestions by @nikosdion such that the new memory cache handler is not listed in global configuration.
Summary of Changes
Running the Smart Search indexer on a medium-sized site often results in "out of memory" failures which can only be addressed by running the CLI indexer with more memory allocated, like this:
```
php -d memory_limit=256M finder_indexer.php
```
However, this option is not usually available for web-based indexing.
It turns out that the problem is largely due to the tokeniser attempting to reduce execution time by caching the tokenisation of every previously seen string shorter than 128 bytes. This PR addresses that issue by limiting the number of cached tokenisations to 100. Note: the number could be made configurable, but I am doubtful of the benefit to be gained, so that is not part of this PR.
Hat tip to @mahagr for pointing out the problem with the tokeniser.
Since this might conceivably be of use elsewhere, the fix has been implemented by adding a new storage class to the JCache library. The new JCacheStorageMemory class uses a regular PHP array to store cached items. A maxBuffers option allows you to define the maximum number of items to cache (default 100). If an attempt is made to store a new item when the cache is already full, the least recently used item will be deleted to make room.
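To illustrate the eviction policy described above, here is a minimal sketch of a fixed-capacity, least-recently-used cache. Note that JCacheStorageMemory itself is a PHP class in the JCache library; this is a hedged Python illustration of the same technique, and the `BoundedLruCache` name and its methods are hypothetical, not part of the actual PR.

```python
from collections import OrderedDict

class BoundedLruCache:
    """Illustrative sketch (not the actual PHP implementation):
    an in-memory cache holding at most `max_buffers` items.
    Storing into a full cache evicts the least recently used entry.
    """

    def __init__(self, max_buffers=100):
        self.max_buffers = max_buffers
        self._store = OrderedDict()

    def get(self, key):
        if key not in self._store:
            return None
        self._store.move_to_end(key)  # mark as most recently used
        return self._store[key]

    def store(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        elif len(self._store) >= self.max_buffers:
            self._store.popitem(last=False)  # evict least recently used
        self._store[key] = value

cache = BoundedLruCache(max_buffers=2)
cache.store("a", 1)
cache.store("b", 2)
cache.get("a")       # touch "a" so it becomes most recently used
cache.store("c", 3)  # cache full: evicts "b", the least recently used
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```

The key point, as in the PR, is that memory use is bounded by the item limit rather than by the number of distinct strings the indexer encounters.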
Testing Instructions
Find or construct a site with enough content that the CLI indexer fails with an out of memory error. As a rough guide you'll probably need over 1,000 articles to trigger the error. The size of the articles doesn't matter much because tokenisations of strings more than 128 bytes long are not cached and most articles are likely to be longer than that.
Apply the PR and run the indexer again. Hopefully you won't get the out of memory error!
Alternatively, since #12679 and #12680 have now been merged, simply look at the amount of memory used.
I am aware that unit tests need to be added for JCacheStorageMemory and I'd appreciate help in writing them.
Documentation Changes Required
None.