
DefaultRedisCacheWriter potentially leaks lock in rare case? #2598

Closed
zhiyazw opened this issue Jun 8, 2023 · 4 comments
Labels
status: duplicate (a duplicate of another issue)

Comments

zhiyazw commented Jun 8, 2023

We're using @Cacheable to annotate methods, with Redis as the backing store.

One day I found that calls to one of the @Cacheable methods always blocked until they timed out.
When I checked the corresponding Redis keys, I found there was only one key left, with a "~lock" suffix.

For example, the annotation is configured like this:

@Cacheable(cacheNames = "UserCache",
        key = "T(java.lang.String).format(\"%s|%s|usercache\", #org, #division)",
        unless = "#result == null")
public UserList getUsers(String org, String division)

Normally there should be keys like UserCache_spring|framework|usercache. However, when this issue occurred, all such keys had expired (TTL = 24 hours).
There was only one key left, named UserCache~lock, and it must have existed for more than 24 hours.

The likely cause is inside https://github.com/spring-projects/spring-data-redis/blob/main/src/main/java/org/springframework/data/redis/cache/DefaultRedisCacheWriter.java#putIfAbsent: in a very rare case, a writing thread dies between the line where the lock is created and the line where the try block is entered. The release in the finally block is then never guaranteed to run, which causes a lock leak.
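To illustrate the pattern I mean, here is a minimal, self-contained sketch; the names acquireLock, writeCacheEntry and releaseLock are hypothetical stand-ins for the writer's internal steps, not the actual Spring Data Redis code:

class LockLeakSketch {

    void putIfAbsentSketch() {
        acquireLock();          // writes "<cacheName>~lock" with no TTL
        // If the thread or process dies right here, before the try block is
        // entered, the finally block never runs, the lock key is left behind
        // forever, and every later cache write blocks until its timeout.
        try {
            writeCacheEntry();  // the actual cache put
        } finally {
            releaseLock();      // deletes "<cacheName>~lock"
        }
    }

    void acquireLock()     { /* e.g. SET <cacheName>~lock "" */ }
    void writeCacheEntry() { /* e.g. SET <cacheName>::<key> <value> */ }
    void releaseLock()     { /* e.g. DEL <cacheName>~lock */ }
}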

A link to this discussion in Gitter: https://matrix.to/#/!MJtHKfEduxwvhntKyg:gitter.im/$BJCPcSX_PqDSf2wQpcz_LYNl4ixJCTNkkk2V6Un3kRg?via=gitter.im&via=matrix.org

spring-projects-issues added the status: waiting-for-triage label Jun 8, 2023
mp911de (Member) commented Jun 8, 2023

Just yesterday we merged support for a lock TTL via #2597. Care to check out the snapshots to see whether the change helps in your case?

mp911de added the status: waiting-for-feedback label Jun 8, 2023
zhiyazw (Author) commented Jun 9, 2023

A lock TTL is a good idea.
I had a look at that PR; however, I think the lock still lives forever by default, because TtlFunction.persistent() is passed in by default. Please correct me if I'm wrong.
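For illustration, this is my reading of the defaults (treat the exact TtlFunction names as my assumption based on that PR):

import java.time.Duration;

import org.springframework.data.redis.cache.RedisCacheWriter;

class LockTtlDefaults {
    // the default used for the lock key: no expiry
    RedisCacheWriter.TtlFunction defaultLockTtl = RedisCacheWriter.TtlFunction.persistent();
    // what an application would have to supply explicitly to bound the lock's lifetime
    RedisCacheWriter.TtlFunction boundedLockTtl = RedisCacheWriter.TtlFunction.just(Duration.ofMinutes(1));
}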

spring-projects-issues added the status: feedback-provided label and removed the status: waiting-for-feedback label Jun 9, 2023
mp911de (Member) commented Jun 12, 2023

As we provide framework functionality, we cannot make assumptions about an ideal time to live. Setting an appropriate time to live is a concern of your application configuration.

mp911de closed this as completed Jun 12, 2023
mp911de added the status: duplicate label and removed the status: waiting-for-triage and status: feedback-provided labels Jun 12, 2023
zhiyazw (Author) commented Jun 13, 2023

Hi Mark, the only way I can see for an application developer to specify a lock TTL is to pass a lockTtlFunction to the four-parameter variant of RedisCacheWriter.lockingRedisCacheWriter, which also requires passing the sleepTime and batchStrategy.
I would prefer a default lock TTL, since the lock itself is hidden from the application developer and I wouldn't have to care about sleepTime and batchStrategy.
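For reference, the configuration I have in mind looks roughly like this (a sketch assuming the four-argument lockingRedisCacheWriter factory and the TtlFunction API from #2597; the 50 ms sleep time and one-minute lock TTL are only illustrative values):

import java.time.Duration;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.BatchStrategies;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.cache.RedisCacheWriter;
import org.springframework.data.redis.connection.RedisConnectionFactory;

@Configuration
class CacheConfig {

    @Bean
    RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {
        // Locking cache writer with an explicit lock TTL, so a leaked
        // "~lock" key eventually expires instead of living forever.
        RedisCacheWriter cacheWriter = RedisCacheWriter.lockingRedisCacheWriter(
                connectionFactory,
                Duration.ofMillis(50),                                    // sleepTime between lock polls
                RedisCacheWriter.TtlFunction.just(Duration.ofMinutes(1)), // lock TTL
                BatchStrategies.keys());                                  // batchStrategy

        return RedisCacheManager.builder(cacheWriter).build();
    }
}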
