Fix bug that could cause a /sync to tightloop with sqlite after restart #16540
erikjohnston merged 2 commits into develop

Conversation
```python
extra_tables=[
    ("account_data", "stream_id"),
    ("room_tags_revisions", "stream_id"),
],
```
Should this include `room_account_data` too, like the postgres ID generator?
That's included above. (These are `extra_tables=` rather than the `tables=` that `MultiWriterIdGenerator` uses.)
(This feels like something a lint could check 😢 )
Fixes #15824?
Quite possibly
Didn't fix #15824. Still seeing it with the following. Note that the sync query below returns a response that does not advance the `next_batch`:

http://localhost:8008/_matrix/client/r0/sync?filter=0&timeout=30000&since=s93_7_0_1_5_1_1_11_0_1

Restarting synapse fixes the tightloop temporarily, but it returns within a few minutes.

Our reproduction steps:
- SQLite database
- docker / docker compose deployment
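The tightloop in this report can be sketched as follows. This is a hypothetical stand-in, not Synapse or real client code: `make_stuck_server` and `run_sync_loop` just illustrate why a `next_batch` that never advances turns a 30-second long-poll into back-to-back requests.

```python
# Hypothetical sketch of a /sync long-poll loop. A healthy server blocks
# up to `timeout` ms when nothing is new; the buggy one answers at once
# with the same token every time, so the client hammers it in a loop.
def make_stuck_server(token):
    """Stub server whose stream position never advances (the bug)."""
    def sync(since=None, timeout=0):
        return {"next_batch": token, "events": []}
    return sync

def run_sync_loop(sync, since, max_requests=5):
    """Issue /sync requests until next_batch advances or we give up."""
    requests = 0
    while requests < max_requests:
        resp = sync(since=since, timeout=30000)
        requests += 1
        if resp["next_batch"] != since:
            return resp["next_batch"], requests  # made progress
        # Token unchanged: the client retries immediately instead of waiting.
    return since, requests

token = "s93_7_0_1_5_1_1_11_0_1"  # the token from the report above
final, n = run_sync_loop(make_stuck_server(token), since=token)
print(n)  # all 5 requests fire back-to-back with no progress
```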
Hmm, I wonder if we have a similar problem with device lists then |
This could happen if the last rows in the account data stream were inserted into `account_data`. After a restart the max account data ID would be calculated without looking at the `account_data` table, and so have an old ID.
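A minimal sketch of that mechanism, assuming a simplified `max_stream_id` helper rather than Synapse's actual `StreamIdGenerator`: at startup the generator seeds its position with the maximum stream id found across a set of tables, so omitting `account_data` from that set yields a stale position whenever its rows are the newest.

```python
# Sketch (not Synapse code) of seeding a stream ID generator from
# several tables at startup, and what goes wrong when one is missed.
import sqlite3

def max_stream_id(conn, tables):
    """Return the highest stream id across the given (table, column) pairs."""
    current = 0
    for table, column in tables:
        row = conn.execute(f"SELECT MAX({column}) FROM {table}").fetchone()
        if row[0] is not None:
            current = max(current, row[0])
    return current

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE room_account_data (stream_id INTEGER)")
conn.execute("CREATE TABLE account_data (stream_id INTEGER)")
conn.execute("INSERT INTO room_account_data VALUES (3)")
conn.execute("INSERT INTO account_data VALUES (7)")  # the most recent write

# Bug: seeding only from room_account_data misses the newer row ...
stale = max_stream_id(conn, [("room_account_data", "stream_id")])
# ... so the generator restarts behind the stream's true position, and
# /sync's stream tokens stop advancing. The fix adds the missing tables:
fixed = max_stream_id(conn, [
    ("room_account_data", "stream_id"),
    ("account_data", "stream_id"),  # the extra_tables= from this PR
])
print(stale, fixed)  # 3 7
```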