
Limit memory used by tokeniser#16621

Closed
chrisdavenport wants to merge 1 commit into joomla:staging from chrisdavenport:ss-tokeniser-memory-fix-2

Conversation

@chrisdavenport
Contributor

Note: This replaces #12649 and is identical to it, but without the branch merge that I was unable to recover from. It incorporates @nikosdion's suggestion that the new memory cache handler not be listed in the Global Configuration.

Summary of Changes

Running the Smart Search indexer on a medium-sized site often results in "out of memory" failures which can only be addressed by running the CLI indexer with more memory allocated, like this:

php -d memory_limit=256M finder_indexer.php

However, this option is not usually available for web-based indexing.

It turns out that the problem is largely due to the tokeniser trying to reduce execution time by caching the tokenisation of every previously processed string shorter than 128 bytes. This PR addresses the issue by limiting the number of cached tokenisations to 100. Note: the limit could be made configurable, but I am doubtful of the benefit, so that is not part of this PR.

Hat tip to @mahagr for pointing out the problem with the tokeniser.

Since this might conceivably be of use elsewhere, the fix has been implemented by adding a new storage class to the JCache library. The new JCacheStorageMemory class uses a regular PHP array to store cached items. A maxBuffers option allows you to define the maximum number of items to cache (default 100). If an attempt is made to store a new item when the cache is already full, the least recently used item will be deleted to make room.
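The eviction policy described above can be sketched as follows. This is a minimal Python illustration of the idea only; the actual JCacheStorageMemory class in this PR is implemented in PHP within the JCache library, and the class and method names below are hypothetical stand-ins:

```python
from collections import OrderedDict

class MemoryCache:
    """Bounded in-memory cache with least-recently-used eviction.

    At most max_buffers items are kept; storing into a full cache
    evicts the least recently used entry to make room.
    """

    def __init__(self, max_buffers=100):
        self.max_buffers = max_buffers
        self._items = OrderedDict()  # insertion order doubles as LRU order

    def get(self, key):
        if key not in self._items:
            return None
        # A hit makes this entry the most recently used.
        self._items.move_to_end(key)
        return self._items[key]

    def store(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        elif len(self._items) >= self.max_buffers:
            # Cache is full: evict the least recently used item.
            self._items.popitem(last=False)
        self._items[key] = value

cache = MemoryCache(max_buffers=2)
cache.store("a", 1)
cache.store("b", 2)
cache.get("a")       # "a" becomes most recently used
cache.store("c", 3)  # cache is full, so "b" (least recently used) is evicted
```

The key property for Smart Search is that memory use is bounded by maxBuffers regardless of how many strings the tokeniser processes, at the cost of occasionally re-tokenising an evicted string.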

Testing Instructions

Find or construct a site with enough content that the CLI indexer fails with an out-of-memory error. As a rough guide, you'll probably need over 1,000 articles to trigger the error. The size of the articles doesn't matter much, because tokenisations of strings longer than 128 bytes are not cached and most articles are likely to be longer than that.

Apply the PR and run the indexer again. Hopefully you won't get the out-of-memory error!

Alternatively, since #12679 and #12680 have now been merged, simply look at the amount of memory used.

I am aware that unit tests need to be added for JCacheStorageMemory and I'd appreciate help in writing them.

Documentation Changes Required

None.

@ghost ghost added the J3 Issue label Apr 5, 2019
@ghost ghost removed the J3 Issue label Apr 19, 2019
@ghost ghost changed the title [Smart Search] Limit memory used by tokeniser Limit memory used by tokeniser Apr 19, 2019
@Hackwar
Member

Hackwar commented May 16, 2021

This has since become outdated. Thanks for your work, however the tokeniser has been improved in other ways which hopefully make this obsolete.

@jwaisner
Member

Closing. Thank you for the work. Please check the tokeniser in latest nightly build.

@jwaisner jwaisner closed this May 16, 2021