I am trying to avoid a race condition when using Infinispan (v 15.1.5) with the Spring Cache annotations and Spring Boot (v 3.4.3). Two threads call the @Cacheable and @CacheEvict methods at the same time. I am using an Infinispan local cache:
private void addLocalCache(EmbeddedCacheManager embeddedCacheManager) {
    var configurationBuilder = new ConfigurationBuilder();
    configurationBuilder.memory()
            .maxCount(10)
            .whenFull(EvictionStrategy.REMOVE)
            .locking()
            .concurrencyLevel(1000)
            .lockAcquisitionTimeout(1000, TimeUnit.MILLISECONDS)
            .statistics().enabled(true)
            .expiration()
            .reaperEnabled(false)
            .clustering()
            .cacheMode(CacheMode.LOCAL);
    embeddedCacheManager.defineConfiguration(CACHE_NAME, configurationBuilder.build());
}
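
For context, the embedded cache manager is exposed to Spring roughly like this (a simplified sketch of my configuration class; the bean method and the cache name are placeholders, not the exact code):

import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;
import org.infinispan.spring.embedded.provider.SpringEmbeddedCacheManager;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCaching
public class CacheConfig {

    public static final String CACHE_NAME = "valueCache";

    @Bean
    public CacheManager cacheManager() {
        // DefaultCacheManager is the embedded Infinispan cache manager;
        // SpringEmbeddedCacheManager adapts it to Spring's CacheManager SPI,
        // so @Cacheable/@CacheEvict end up in the Infinispan SpringCache shown further below.
        EmbeddedCacheManager embeddedCacheManager = new DefaultCacheManager();
        addLocalCache(embeddedCacheManager); // the method shown above
        return new SpringEmbeddedCacheManager(embeddedCacheManager);
    }

    // private void addLocalCache(EmbeddedCacheManager embeddedCacheManager) { ... } from above
}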
Here is a simplified version of the code containing the race condition:
@Cacheable(value = CacheConfig.CACHE_NAME, sync = true)
public CacheValue getCacheValue(String key) {
    try {
        Integer value = calculateValue();
        return new CacheValue(value);
    } catch (InterruptedException e) {
        return null;
    }
}

private Integer calculateValue() throws InterruptedException {
    Integer value = currentValue;
    Thread.sleep(500);
    return value;
}

@CacheEvict(value = CacheConfig.CACHE_NAME, key = "#cacheKey")
public void updateData(String cacheKey, int value) {
    try {
        Thread.sleep(100);
        this.currentValue = value;
    } catch (InterruptedException e) {
    }
}
If one thread calls getCacheValue while another calls updateData, the eviction happens while the first thread is still calculating the cacheable object from the old data. The object built from the old data is then put into the cache after the evict and stays there until another thread triggers an eviction again. (The Thread.sleep() calls only simulate the timing needed so that the evict happens during the calculation.)
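
To make the timing concrete, this is roughly how I trigger the interleaving in a test (dataService stands for the Spring bean exposing the two methods above; the name is mine):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

void reproduceRace() throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(2);

    // Thread 1: enters getCacheValue, reads the old currentValue and sleeps 500 ms while "calculating".
    Future<CacheValue> reader = pool.submit(() -> dataService.getCacheValue("key-1"));
    Thread.sleep(50); // give the reader a head start so it snapshots the old value first

    // Thread 2: updates the data; the @CacheEvict fires after ~100 ms, while thread 1 is still calculating.
    pool.submit(() -> dataService.updateData("key-1", 42)).get();

    // Thread 1 finishes after the evict and puts the stale CacheValue into the cache,
    // so subsequent calls to getCacheValue("key-1") keep returning the old value.
    CacheValue stale = reader.get();
    pool.shutdown();
}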
The race condition could be avoided by using the computeIfAbsent method, but Infinispan uses putIfAbsent in org.infinispan.spring.common.provider.SpringCache:
public <T> T get(Object key, Callable<T> valueLoader) {
    ReentrantLock lock;
    T value = (T) nativeCache.get(key);
    if (value == null) {
        lock = synchronousGetLocks.computeIfAbsent(key, k -> new ReentrantLock());
        lock.lock();
        try {
            if ((value = (T) nativeCache.get(key)) == null) {
                try {
                    T newValue = valueLoader.call();
                    // we can't use computeIfAbsent here since in distributed embedded scenario we would
                    // send a lambda to other nodes. This is the behavior we want to avoid.
                    value = (T) nativeCache.putIfAbsent(key, encodeNull(newValue));
                    if (value == null) {
                        value = newValue;
                    }
                } catch (Exception e) {
                    throw ValueRetrievalExceptionResolver.throwValueRetrievalException(key, valueLoader, e);
                }
            }
        } finally {
            lock.unlock();
            synchronousGetLocks.remove(key);
        }
    }
    return decodeNull(value);
}
Because I don't use a distributed cache, the following comment does not really concern me:
we can't use computeIfAbsent here since in distributed embedded scenario we would send a lambda to other nodes. This is the behavior we want to avoid.
But I can't find a simple way to avoid the race condition while still using Infinispan with the @Cacheable annotation.
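
The only workaround I have come up with is to bypass @Cacheable for this method and call computeIfAbsent on the native Infinispan cache myself, so that computing and storing the value is a single atomic cache operation that a concurrent remove has to wait for (at least that is my understanding of the per-key locking in a local, non-transactional cache). A rough sketch, class and method names are mine:

import org.infinispan.Cache;
import org.infinispan.manager.EmbeddedCacheManager;
import org.springframework.stereotype.Service;

@Service
public class CachedValueService {

    private final Cache<String, CacheValue> cache;
    private volatile int currentValue;

    public CachedValueService(EmbeddedCacheManager cacheManager) {
        this.cache = cacheManager.getCache(CacheConfig.CACHE_NAME);
    }

    public CacheValue getCacheValue(String key) {
        // The mapping function runs while Infinispan holds the lock for this key,
        // so a concurrent remove(key) cannot slip in between computing and storing the value.
        return cache.computeIfAbsent(key, k -> new CacheValue(calculateValue()));
    }

    public void updateData(String key, int value) {
        this.currentValue = value;
        cache.remove(key); // replaces @CacheEvict
    }

    private Integer calculateValue() {
        return currentValue; // the slow calculation from the real code would go here
    }
}

The obvious downsides: the key lock is held for the whole calculation, so a slow loader can make concurrent writers run into the 1000 ms lockAcquisitionTimeout configured above, and it throws away the Spring annotations, which is exactly what I would like to keep. Is there a simpler option?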