
Improve extending #149

Merged

hjr265 merged 4 commits into go-redsync:master from palcalde:pablo/add-set-when-extending on Feb 17, 2024
Conversation

@palcalde (Contributor) commented on Feb 9, 2024

Hello!

We are heavy users of this library and it works great. However, there is a small improvement that would alleviate some pain in our case. Let me explain:

  • we have 3 Redis instances in the pool, with no persistence.
  • we extend the lock at short intervals, with the idea of rarely losing it. Given the importance of our use case, it's critical for us to keep the lock on the same instances as much as possible.

We realized we were losing the lock too often, and the reason is this:
If one of the instances gets restarted or becomes unavailable for a while (it happens), our service locks on two Redis instances instead of 3, since that is enough for quorum. That's expected, but the problem is that from then on we extend on 2 instances instead of 3 (we can't extend on the one that was restarted because it's empty; the key is no longer there). It's then only a matter of time until one of the two we extend on gets restarted as well. At that point we lose the lock entirely, because every extend operation fails: only one instance still has the key, and the other two don't.

This PR fixes that by trying to "extend or lock" instead of just "extend". This way, if one of the Redis instances restarts, the keys are quickly re-synced and all instances become usable for extending again. The only way to lose a lock now is if two instances get restarted at the same time, which is very unlikely.

Let me know if this makes sense.

Thank you!

@palcalde palcalde force-pushed the pablo/add-set-when-extending branch from 5f8a4a6 to c9af502 Compare February 9, 2024 14:01
@palcalde palcalde force-pushed the pablo/add-set-when-extending branch from c359531 to dc7ca9e Compare February 9, 2024 15:53
@hjr265 (Member) commented on Feb 12, 2024

Hello @palcalde, sorry for taking a while to get back to you.

I understand your use case and see that what you are proposing is a valid option.

However, this means that by default the lock would behave a little differently from the algorithm described here: https://redis.io/docs/manual/patterns/distributed-locks/#making-the-algorithm-more-reliable-extending-the-lock

Would you find it reasonable that this behavior is made available through an option on the NewMutex function?

rs.NewMutex("dont-spin", WithSetNXOnExtend(true))

That way Redsync will continue to behave how it does now by default and stay in line with the algorithm described on redis.io. But you can use this new option to make it work with your setup.

@palcalde (Contributor, Author) commented on Feb 13, 2024

> Would you find it reasonable that this behavior is made available through an option on the NewMutex function?
>
> rs.NewMutex("dont-spin", WithSetNXOnExtend(true))
>
> That way Redsync will continue to behave how it does now by default and stay in line with the algorithm described on redis.io. But you can use this new option to make it work with your setup.

Yes, makes sense to have it as an option. I'll make the changes required and let you know. Thank you!

@palcalde (Contributor, Author) commented

Changes done. It is now configured via the SetNXOnExtend option. I've also added a couple of tests. Let me know if this works for you.

@hjr265 hjr265 merged commit c541100 into go-redsync:master Feb 17, 2024
@hjr265 (Member) commented on Feb 17, 2024

@palcalde Thank you!

@palcalde (Contributor, Author) commented

Thank you! Do you mind doing a release? A minor version bump should be fine; there is no breaking change.

@hjr265 (Member) commented on Feb 20, 2024

@palcalde That should be done already...

https://github.com/go-redsync/redsync/tree/v4.12.0

Let me know if you are facing any issue with it.

@palcalde (Contributor, Author) commented

Thank you for the quick response. However, I had to submit a fix in #151: one of my changes caused the extend to release the lock even when there was quorum for it.
