• Resolved irameshdev

    (@irameshdev)


    Hello Team,

    We tried indexing the records of 4.6k+ articles, but the process failed with a 524 timeout error. How can we index that many records? Does the pro version of the plugin allow splitting the indexing into smaller batches? If so, we are willing to purchase the pro version.

    Please let us know.

    Regards

    • This topic was modified 2 years, 2 months ago by irameshdev.
Viewing 4 replies - 1 through 4 (of 4 total)
  • Plugin Contributor Michael Beckwith

    (@tw2113)

    The BenchPresser

    Hi @irameshdev,

    Our Algolia Pro extension doesn’t change anything about the actual indexing process, but it can change what gets indexed, such as WooCommerce details or whether certain content is indexed at all.

    That said, getting back on topic: timeout issues have never been easy to handle, as they largely depend on the server doing the work.

    According to https://www.lifewire.com/error-524-a-timeout-occurred-4782741, a “524 timeout” is a Cloudflare-related error. Can you confirm whether the site in question is using Cloudflare?

    Based on my past support ticket experience, reducing the number of records processed at a time hasn’t typically helped, but we can still try it if you prefer. I believe we currently batch things up in increments of 100. The filter below would reduce that to 50.

    function irameshdev_modify_wpswa_batch_size( $size ) {
        // We default to 100 records at a time. This reduces to 50.
        return 50;
    }
    add_filter( 'algolia_indexing_batch_size', 'irameshdev_modify_wpswa_batch_size' );

    I know that the UI version of batch processing relies on AJAX requests, and I believe it makes a new request for every batch rather than one long AJAX request.

    I also know that we have some WP-CLI integration if you want to try that. It relies much less on the WP admin being loaded, so it could allow for more efficient use of server resources. It hasn’t reliably avoided timeouts either, but it is definitely useful for bash scripting and for scheduling cron jobs during slower traffic periods.

    More information regarding our WP-CLI integration https://github.com/WebDevStudios/wp-search-with-algolia/wiki/WP-CLI
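
    As a rough sketch of what a CLI-based reindex could look like (the exact subcommand and flag names here are illustrative, not guaranteed — check the wiki page above or `wp help algolia` on your install for the real interface):

    # Run a full reindex from the command line, outside the WP admin.
    # Subcommand names are illustrative; verify them with `wp help algolia`.
    wp algolia reindex

    # A cron entry could schedule this during low-traffic hours, e.g. 3 AM daily:
    # 0 3 * * * cd /path/to/wordpress && wp algolia reindex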

    Plugin Contributor Michael Beckwith

    (@tw2113)

    The BenchPresser

    Hi @irameshdev

    I have been doing some digging recently, especially around the time you started this thread, and I added some documentation about our use of the cURL client and the filters available for setting cURL parameters. The filter documented on our wiki is used to set curl_setopt() values, and you can pass in any that need adjusting. The wiki shows a quick example of how to use it with CURLOPT_TIMEOUT:

    https://github.com/WebDevStudios/wp-search-with-algolia/wiki/Timeouts

    May be of use to you, if you’re still experiencing timeout issues.
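
    For anyone finding this later, the pattern looks roughly like the following. Note this is a hedged sketch: the filter name below is hypothetical, and the real filter name and signature are documented on the Timeouts wiki page linked above.

    // Hypothetical sketch — the actual filter name and arguments are on the
    // plugin's "Timeouts" wiki page. The idea is to raise the cURL execution
    // timeout used for Algolia API requests.
    add_filter( 'algolia_curl_options', function( $options ) {
        $options[ CURLOPT_TIMEOUT ] = 60; // seconds; the default is lower
        return $options;
    } );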

    Thread Starter irameshdev

    (@irameshdev)

    Hi Michael,

    Thank you for your replies. Due to a busy schedule, I couldn’t reply earlier.

    Reducing the batch size and increasing the server timeout value fixed the problem.

    Thank you for your support.

    Warm Regards

    Plugin Contributor Michael Beckwith

    (@tw2113)

    The BenchPresser

    Awesome to hear overall, as timeouts have been a frustration for me on the support front. I’m still curious whether the information in my last reply could help in your situation, or help others who come across this thread. That said, I don’t expect you to break a working setup either, especially if you’re actively processing a lot of content into your indexes.


The topic ‘Large records indexing server timeout issue’ is closed to new replies.