I'm using the concurrent_lru_cache and generally like it, but there appears to be no way for the callback function to fail: operator[] allocates a new bucket and calls the function that produces the value if is_new_value_needed() returns true.
So far, so good. But what if the function fails? It cannot throw an exception, as any other thread waiting on that cache slot would then spin forever.
The only option when the function fails is to return an invalid value (by convention) and have the users of the cache check for it. In my case I store pointers in the cache, so storing a nullptr when the function fails makes sense.
Still, when this occurs, the cache slot is polluted with that invalid value, and there is no way for the caller to retry: it will get the same value over and over (until further usage of the cache eventually causes that particular slot to be evicted, but that's beside the point).
I think one simple way to work around this would be to allow forcibly discarding a cache entry, something like a .discard(k) method that frees up the slot and allows retries.
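To make the problem and the proposed fix concrete, here is a toy, single-threaded model of the situation (the class, the producer callback, and the discard method are all illustrative sketches, not TBB's actual API): a failed producer stores a nullptr sentinel that is then returned forever, and a hypothetical discard(k) frees the slot so a later lookup can retry.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <unordered_map>

// Toy single-threaded model of a memoizing cache (NOT TBB's API).
// The producer may fail, which we represent by returning nullptr.
class ToyCache {
public:
    using Producer = std::function<const std::string*(int)>;
    explicit ToyCache(Producer p) : produce_(std::move(p)) {}

    // Call the producer only on a miss, then memoize the result --
    // including the nullptr sentinel if the producer failed.
    const std::string* operator[](int key) {
        auto it = slots_.find(key);
        if (it != slots_.end()) return it->second;  // cached (possibly nullptr)
        const std::string* v = produce_(key);
        slots_.emplace(key, v);                     // slot is now "polluted" if v == nullptr
        return v;
    }

    // The proposed escape hatch: forcibly free the slot so the
    // next operator[] retries the producer.
    void discard(int key) { slots_.erase(key); }

private:
    Producer produce_;
    std::unordered_map<int, const std::string*> slots_;
};
```

With this toy, a lookup after a failed producer keeps returning nullptr on every call; only after discard(k) does the next lookup invoke the producer again.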
An alternate design could allow the callback to throw exceptions that bubble up to the thread(s) waiting on that cache slot, leaving the slot empty. Yet another design would expose a container-like interface, e.g. .push_front(k, val) and bool .lookup(k, &val), and let the caller manage item creation entirely.
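The container-style alternative might look like the following sketch (the names lookup and push_front follow the proposal above; the implementation is a toy, single-threaded, with no LRU eviction or TBB-style concurrency): because the caller creates the value itself and only inserts on success, a failure never pollutes a slot.

```cpp
#include <cassert>
#include <unordered_map>

// Toy sketch of a caller-managed cache interface (not TBB code).
// Concurrency control and LRU eviction are omitted for brevity.
template <typename K, typename V>
class CallerManagedCache {
public:
    // Returns true on a hit, copying the cached value into *val.
    bool lookup(const K& key, V* val) const {
        auto it = map_.find(key);
        if (it == map_.end()) return false;
        *val = it->second;
        return true;
    }

    // The caller inserts a value it has successfully created.
    // On failure the caller simply doesn't insert, so no slot is
    // ever polluted with a sentinel value.
    void push_front(const K& key, const V& val) { map_[key] = val; }

private:
    std::unordered_map<K, V> map_;
};
```

The retry story then becomes trivial: on a lookup miss, attempt to create the value, and insert it only when creation succeeds.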
Additionally, a Boolean .hit() method on the handle_object, returning true on cache hits, would be useful, for example to produce hit-ratio statistics that help gauge the cache's efficiency and pick an optimal cache size. This one is easy, and I managed to implement it myself.
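For the hit-ratio idea, here is a sketch of what the caller-side statistics could look like (the Handle type, its hit() accessor, and the producer stand-in are hypothetical, merely modeled on the handle_object mentioned above):

```cpp
#include <cassert>
#include <cstddef>
#include <unordered_map>

// Toy cache whose lookup result carries a hit flag, mimicking the
// proposed bool hit() on the handle_object (all names hypothetical).
class StatsCache {
public:
    struct Handle {
        int value;
        bool was_hit;
        bool hit() const { return was_hit; }
    };

    Handle operator[](int key) {
        auto it = map_.find(key);
        if (it != map_.end()) { ++hits_; return {it->second, true}; }
        ++misses_;
        int v = key * 2;  // stand-in for the real value-producing callback
        map_.emplace(key, v);
        return {v, false};
    }

    // Hit ratio over all lookups so far, useful for sizing the cache.
    double hit_ratio() const {
        std::size_t total = hits_ + misses_;
        return total ? static_cast<double>(hits_) / total : 0.0;
    }

private:
    std::unordered_map<int, int> map_;
    std::size_t hits_ = 0, misses_ = 0;
};
```

Sampling hit_ratio() while varying the cache size is exactly the kind of measurement the proposed .hit() would enable.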
Thoughts? I see the LRU cache has been in preview for a long time, what are the plans for its evolution?