v2 contracts #74

Open
maypok86 opened this issue Mar 20, 2024 · 43 comments
Labels: enhancement (New feature or request), v2

Comments

@maypok86
Owner

Just random thoughts about new contracts...

@maypok86
Owner Author

The approximate API of the builder.

type Builder[K comparable, V any] interface {
	// It seems better to separate cost and size after all.
	// Based on ristretto's experience and conversations with otter users,
	// people don't respond well to specifying both with a single parameter.
	//
	// Also, with this API we will be able to prohibit simultaneous use of
	// MaximumSize and the Cost function.
	MaximumSize(maximumSize int) Builder[K, V]
	MaximumCost(maximumCost int) Builder[K, V]
	CollectStats() Builder[K, V]
	InitialCapacity(initialCapacity int) Builder[K, V]
	Cost(costFunc func(key K, value V) uint32) Builder[K, V]
	DeletionListener(deletionListener func(key K, value V, cause DeletionCause)) Builder[K, V]
	// We are slightly reducing the cognitive load on the user.
	TTL(ttl time.Duration) Builder[K, V]
	// It seems that a variable ttl is almost never required, and
	// the user should have at least some rule for calculating it.
	//
	// In principle, Set(key, value, ttl) can be moved to Extension.
	//
	// It also becomes harder to use when it is not needed. :)
	VarTTL(ttlFunc func(key K, value V) uint32) Builder[K, V]
	Build() (Cache[K, V], error)
}

func NewBuilder[K comparable, V any]() Builder[K, V] {
	...
}
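
For illustration, a hypothetical usage of this draft (method names are taken from the sketch above; nothing here is final):

cache, err := NewBuilder[string, []byte]().
	MaximumCost(64 * 1024 * 1024). // bound by total cost rather than entry count
	Cost(func(key string, value []byte) uint32 {
		return uint32(len(key) + len(value))
	}).
	TTL(time.Hour).
	CollectStats().
	Build()
if err != nil {
	panic(err)
}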

@maypok86
Owner Author

type Cache[K comparable, V any] interface {
	Has(key K) bool
	Get(key K) (V, bool)
	Set(key K, value V) bool
	// Should this return the existing value?
	// I'd like to return the previous value, but then how do we account
	// for a rejection due to the size constraint?
	SetIfAbsent(key K, value V) bool
	Delete(key K)
	DeleteByFunc(f func(key K, value V) bool)
	Range(f func(key K, value V) bool)
	Clear()
	Close()
	Size() int
	Stats() Stats
	Extension() Extension[K, V]
}

@ben-manes

I'd vote for changing terms from cost to weight for clarity, so it becomes MaximumWeight and Weigher(func) in the builder snippet above. (I just think cost is a more confusing term, as it can be confused with the latency / miss penalty to fetch the item.)

TTL is a pretty well-known term, but what "live" means depends on the implementation. Some use "time since creation" whereas others use "time since last write". Or sometimes people say TTL meaning the time for the client receiving it, like an HTTP expiry duration header. So I think the acronym could be misinterpreted vs a more generic term like expiration.

Close(): is there state requiring lifecycle termination? That can be fair, just more error prone in my experience when users think of an on-heap cache and discard it by GC like any other Map.

@maypok86
Owner Author

It may be worth no longer returning a bool for rejections. Then the API will become a little simpler. After all, otter guarantees that it can reject an item only because of the eviction policy, and not, like ristretto, because of contention.

type Cache[K comparable, V any] interface {
	Set(key K, value V)
	SetIfAbsent(key K, value V) (V, bool)
}

@maypok86
Owner Author

I'd vote for changing terms from cost to weight for clarity

It doesn't really matter to me here. It can be replaced with weight.

TTL is a pretty well-known term, but what "live" means depends on the implementation.

Oh, that's true. I have not yet seen any Go implementation that uses "expire after last access", except goburrow, and it explicitly separates this from "expire after create/last write".

But if we separate the different variants into

  1. ExpireAfterCreate
  2. ExpireAfterWrite
  3. ExpireAfterAccess

I need to answer one question: can these functions be used together on a single cache instance? It seems to me that this is absolutely useless, but if there is some use case, then there will be a very unpleasant problem with a variable-ttl expiration policy. In fact, a similar problem is very likely to arise in LoadingCache, but with Loader.

type Builder[K comparable, V any] interface {
	ExpireAfterCreate(duration time.Duration) Builder[K, V]
	ExpireAfterWrite(duration time.Duration) Builder[K, V]
	ExpireAfterAccess(duration time.Duration) Builder[K, V]
	ExpireAfter(/* what??? */) Builder[K, V]
}

It seems that we need an interface with several methods. But:

  1. What if only one function is needed? Okay, we can use a structure that implements this interface and provide a constructor for it.
  2. And if there are two or three? Making implementations for all variants does not look like a good solution; also, it is impossible to create an anonymous structure with methods in Go, so in the end it would look terrible.

Ideally, we would avoid using the same interface for all three functions.

Could it be something like this?

type Builder[K comparable, V any] interface {
	ExpireAfter(opts ...ExpiryOption[K, V]) Builder[K, V]
}

type ExpiryOption[K comparable, V any] func(*expiry[K, V])
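
For concreteness, the constructors under this design might look like the sketch below (the expiry struct and its fields are invented here purely for illustration):

func WithExpireAfterCreate[K comparable, V any](d time.Duration) ExpiryOption[K, V] {
	return func(e *expiry[K, V]) { e.afterCreate = d }
}

func WithExpireAfterWriteFunc[K comparable, V any](f func(key K, value V) time.Duration) ExpiryOption[K, V] {
	return func(e *expiry[K, V]) { e.afterWriteFunc = f }
}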

But this again looks doubtful.

Close(): is there state requiring lifecycle termination?

Yes, a goroutine is used for the worker.

In general, the use of Close can be avoided using runtime.SetFinalizer. I thought about this once and decided that Close is a little better, but it has one big drawback: the user can simply forget to call it. So perhaps it would be friendlier to just clean up resources automatically. Moreover, it seems that a manual Close call is really required only when you need to know whether an object is closed or not, and Cache doesn't seem to need that.

@ben-manes

ben-manes commented Mar 21, 2024

Can these functions be used together on a single cache instance?

They shouldn't be. Guava/Caffeine allowed it prior to variable expiration support, naively, as it seemed to have little cost, but I'd stop allowing multiple expiration policies if possible. Users usually seem to set both expireAfterAccess and expireAfterWrite to the same fixed value for zero benefit.

It seems that we need an interface with several methods.

Right, I have an expiry interface with create, update, read methods.

Ideally it would be on the builder for easiest discovery where all of the options funnel into variable expiration. You can see the discussion in ben-manes/caffeine#1499, as users often made mistakes with the interface and of course it was pretty verbose.

If I could start anew then I'd make it convenient and fail at runtime if multiple were selected:

Caffeine.newBuilder() // user can choose one
    .expireAfterCreate(Duration.ofMinutes(5))
    .expireAfterCreate((K key, V value) -> duration)
    .expireAfterWrite(Duration.ofMinutes(5))
    .expireAfterWrite((K key, V value) -> duration)
    .expireAfterAccess(Duration.ofMinutes(5))
    .expireAfterAccess((K key, V value) -> duration)
    .expireAfter(new Expiry...)

For backwards compatibility I went with static factories on the interface so they can do one of,

.expireAfter(Expiry.creating((Key key, Graph graph) -> duration))
.expireAfter(Expiry.writing((Key key, Graph graph) -> duration))
.expireAfter(Expiry.accessing((Key key, Graph graph) -> duration))

They can define their own Expiry as needed, but the create / write / access functions should cover most cases and be less error prone. The existing fixed policies, expireAfterWrite and expireAfterAccess, remain as-is using the non-timer-wheel logic, where both can unfortunately be selected.

Yes, goroutine is used for the worker.

Does that need to be long-lived? I use a state machine to schedule the maintenance worker so only one is in-flight. This submits a task to an Executor, so users can choose between OS threads, virtual threads, or running it on the calling thread. Is there a good reason not to create a new goroutine each time maintenance is scheduled, instead of reusing one?

I'd avoid finalizers; at least in Java they are considered a misfeature and are being deprecated for removal. However, Java offers other options like weak and phantom references that cover the cleanup needs better, without penalizing the GC or quirks like object resurrection. Maybe they're cleaner in Go.

@maypok86
Owner Author

They shouldn't be.

Great, then this simplifies the situation a bit.

.expireAfterCreate(Duration.ofMinutes(5))
.expireAfterCreate((K key, V value) -> duration)

There is a small problem: there is no method overloading in Go (and that even makes sense). If we can do without ExpireAfter, then ideally we would come up with a good name for the ExpireAfter* variants that take a function.

What about this?

  1. VarExpireAfter*
  2. ExpireAfter*Func

Does that need to be long-lived?

Actually, no; that's largely why I asked about removing the goroutines, but I probably didn't phrase it very clearly.

It would be cool to drop this, but so far I don't see the benefit to users, unlike adding LoadingCache and increasing the hit ratio. Also, at the moment a background ticker is used to get the time faster, and it also needs to be stopped.

In fact, finalizers are used incredibly often when working with cgo. Moreover, there are simply no good ways to clean up resources in Go other than finalizers and Close.
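
For reference, the usual workaround when a goroutine keeps its owner alive is to set the finalizer on a thin wrapper that the worker never references; a minimal sketch (all names hypothetical, imports runtime and time assumed):

type cache struct {
	stop chan struct{} // closed to terminate the worker
}

// Cache is the user-facing wrapper. The worker only references the inner
// cache, so the wrapper can become unreachable and its finalizer can run.
type Cache struct{ *cache }

func New() *Cache {
	c := &cache{stop: make(chan struct{})}
	go func() { // background worker
		ticker := time.NewTicker(time.Second)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C: // periodic maintenance
			case <-c.stop:
				return
			}
		}
	}()
	w := &Cache{c}
	runtime.SetFinalizer(w, func(w *Cache) { close(w.stop) })
	return w
}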

@maypok86
Owner Author

Yeah, LoadingCache is really complicated.

type LoadingCache[K comparable, V any] interface {
	...
	Get(ctx context.Context, key K) (V, error)
	BulkGet(ctx context.Context, keys []K) (map[K]V, error)
	// extension?
	Extension() LoadingExtension[K, V]
}

type LoadingExtension[K comparable, V any] interface {
	GetQuietly(ctx context.Context, key K) (V, error)
	GetEntry(ctx context.Context, key K) (Entry[K, V], error)
	GetEntryQuietly(ctx context.Context, key K) (Entry[K, V], error)
}

// :(

type SingleLoader[K comparable, V any] interface {
	Load(ctx context.Context, key K) (V, error)
}

type BulkLoader[K comparable, V any] interface {
	BulkLoad(ctx context.Context, keys []K) (map[K]V, error)
}

type SingleBulkLoader[K comparable, V any] interface {
	SingleLoader[K, V]
	BulkLoader[K, V]
}

type Builder[K comparable, V any] interface {
	// and we try to cast to BulkLoader inside;
	// otherwise we use the standard implementation of BulkLoad.
	BuildLoading(loader SingleLoader[K, V]) (LoadingCache[K, V], error)
}
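
A sketch of the cast mentioned in the comment above, assuming the sequential loop is an acceptable "standard implementation" of BulkLoad:

func bulkLoad[K comparable, V any](ctx context.Context, loader SingleLoader[K, V], keys []K) (map[K]V, error) {
	// Fast path: the user's loader also implements BulkLoader.
	if bl, ok := loader.(BulkLoader[K, V]); ok {
		return bl.BulkLoad(ctx, keys)
	}
	// Fallback: load the keys one by one.
	result := make(map[K]V, len(keys))
	for _, key := range keys {
		value, err := loader.Load(ctx, key)
		if err != nil {
			return nil, err
		}
		result[key] = value
	}
	return result, nil
}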

@ben-manes

ExpireAfter*Func

This seems more obvious; I'm not sure about Go idioms, whether that is a common naming style.

LoadingCache is really complicated.

Unfortunately Go API design is outside my expertise. I was able to use Java features to layer it in a way that reduces user complexity, e.g. using default methods. The context wasn't needed since Java is typically paired with dependency injection and request-scoped providers, e.g. to get the logged-in user bound to that thread. If not, then the user will likely use a loading get(key, func) instead of a LoadingCache, where they only miss out on the refreshAfterWrite feature. I don't know if that context param makes automatic refreshing incompatible with Go idioms. If so, then you might prefer adding a loading get and getAll to the Cache interface instead?

@maypok86
Owner Author

maypok86 commented Mar 21, 2024

The main problem is that context.Context is often used for the following things:

  1. Timeout
  2. Storage of various meta-information. Fields for logging and tracing, etc.

Since it changes from request to request, it must also be used in the loader.

And the idea of casting the interface is actually often used in the standard library. For example, here.
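
One well-known instance is io.Copy, whose fast paths are roughly:

// Paraphrased from io.Copy in the standard library: upgrade to a faster
// interface when the concrete type happens to implement it.
if wt, ok := src.(io.WriterTo); ok {
	return wt.WriteTo(dst)
}
if rf, ok := dst.(io.ReaderFrom); ok {
	return rf.ReadFrom(src)
}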

I need to think about it, but at the very least the loss of timeouts, spans and log information looks critical.

@ben-manes

ben-manes commented Mar 22, 2024

It sounds like LoadingCache might not be a good fit for now, but putting loading methods on the Cache interface would work nicely. If LoadingCache turns out to be helpful later, you can offer it as an API improvement without much new functionality. From GPT it might look like,

func GetIfPresent(key K) (V, bool)
func Get(ctx context.Context, key K, mappingFunction func(K) V) (V, error)
func GetAll(ctx context.Context, keys []K, mappingFunction func([]K) map[K]V) (map[K]V, error)

@proost

proost commented Mar 22, 2024

If the Extension method and the LoadingCache methods coexist on the Cache interface, that's a bit confusing to me, because then why can't the GetQuietly, GetEntry, and GetEntryQuietly methods be on the Cache interface too? I think taking the loader as a parameter of a Builder method is not a bad idea. Some libraries already did this.

@maypok86
Owner Author

maypok86 commented Mar 22, 2024

To be honest, I didn't really understand :(.

Why do I need to specify the loader every time I use the method?

type (
	LoaderFunc[K comparable, V any]     func(ctx context.Context, key K) (V, error)
	BulkLoaderFunc[K comparable, V any] func(ctx context.Context, keys []K) (map[K]V, error)
)

type Cache[K comparable, V any] interface {
	Has(key K) bool
	Get(key K) (V, bool)
	BulkGet(keys []K) (map[K]V, bool)
	// We try to use the loader passed when the cache instance was created
	// if nothing was passed to Load.
	// But what if no loader was passed anywhere?
	Load(ctx context.Context, key K, loader ...LoaderFunc[K, V]) (V, error)
	BulkLoad(ctx context.Context, keys []K, bulkLoader ...BulkLoaderFunc[K, V]) (map[K]V, error)
}

Or we can even use the more canonical `Option`. Unfortunately, this costs one allocation per request.

Load(ctx context.Context, key K, opts ...LoadOption[K, V]) (V, error)
BulkLoad(ctx context.Context, keys []K, opts ...LoadOption[K, V]) (map[K]V, error)

But what to do with the methods in Extension is not very clear. It seems they don't need a Load version, so we can leave everything as it is.

It is also interesting what is needed to allow an explicit ttl. We can hide this in the Extension; the problem is that when creating the cache, we need to know that an individual ttl will be used for each entry. Of course, we can require ExpireAfter*Func to be specified, but this is not intuitive at all.

@proost for me, the methods in Extension allow bypassing the standard cache API (getting a full entry, getting data without updating statistics and the eviction policy, explicitly passing a ttl).

For the user, Load just additionally loads data from an external source when nothing is found in the cache.

Although with the integration of the API, the line is really blurred.

@maypok86
Owner Author

Also, if we put Get and Load on the same type, it may be better to use the more common options pattern when creating the cache, instead of a builder. The only thing I don't like about options is the requirement to specify useless generic type parameters.

WithMaximumSize[K, V](1000)

I used the builder to create a more strongly typed cache and to add LoadingCache more easily, but it turned out to be difficult to add new features :( (you need to add the same function in a lot of places).

@rueian

rueian commented Mar 25, 2024

It may be worth no longer returning a bool for rejections. Then the API will become a little simpler. After all, otter guarantees that it can reject an item only because of the eviction policy.

When I was trying to make SetIfAbsent(key K, value V) return (V, bool), the rejection for exceeding MaxAvailableCost blocked me. There is also a potential issue related to rejection if users want to implement their own loading cache, like me.

In terms of the API SetIfAbsent(key K, value V) (V, bool), I think users would only expect the following two return cases:

  1. (newly set V, true)
  2. (existing V, false)

That is, if the second returned value is false, users will expect the first returned value to be the V already in the cache. However, in case of rejection, there can be no such V in the cache and users have no way to tell.

If we want to keep the rejection behavior, I think a slightly complicated API is needed, such as SetIfAbsent(key K, value V) (V, int) where some specific int can be used to indicate rejection.
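
A sketch of that idea; all names below are invented for illustration:

type SetResult int

const (
	Inserted SetResult = iota // the value was absent and has been stored
	Existing                  // a value was already present and is returned
	Rejected                  // the entry was rejected by the eviction policy; nothing is stored
)

// SetIfAbsent(key K, value V) (V, SetResult)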

@rueian

rueian commented Mar 25, 2024

To be honest, I didn't really understand :(.

Why do I need to specify the loader every time I use the method?

I guess, it has something to do with developer experience. Many times, it is hard to prepare the loader function at initialization. Or the loader function may need some conditional dependencies that can't be determined at initialization.

@ben-manes

That is, if the second returned value is false, users will expect the first returned value to be the V already in the cache. However, in case of rejection, there can be no such V in the cache and users have no way to tell.

Since the entry hangs around until GC'd anyway, it didn't seem too awful to me to handle the rejection within the eviction policy logic instead of at the map insertion. The add / update tasks shuffle an entry that is too large out for immediate eviction, to avoid flushing the cache. Then it is handled however the entry was added (load, put, replace), it's linearizable, and simple to understand. I think the only immediate rejection that I do is for AsyncCache when inserting an already completed and failed future value, but that's quite minor, since otherwise it has the callback handler if it fails after insert.

@maypok86
Owner Author

In terms of the API SetIfAbsent(key K, value V) (V, bool), I think users would only expect the following two return cases:

@rueian Yes, it seems like a good solution.

I guess, it has something to do with developer experience. Many times, it is hard to prepare the loader function at initialization. Or the loader function may need some conditional dependencies that can't be determined at initialization.

This is true, but I was referring specifically to the requirement to specify a loader.

That is, instead of

Load(ctx context.Context, key K, loader LoaderFunc[K, V]) (V, error)

it seems better to handle the loader optionally

Load(ctx context.Context, key K, loader ...LoaderFunc[K, V]) (V, error)

But that's very likely just a ChatGPT influence. Something bothers me much more: what should we do if a loader was passed neither in the builder nor explicitly to the function? It seems easiest to just return some predefined error.
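
Something like this, where the error name is hypothetical:

// ErrNoLoader would be returned when a loader was passed neither in the
// builder nor in the call itself.
var ErrNoLoader = errors.New("otter: no loader provided")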

@maypok86
Owner Author

About LoadingCache

I've been thinking about LoadingCache for a long time and here's a little observation.

type LoadingCache[K comparable, V any] interface {
	...
	Get(ctx context.Context, key K) (V, error)
	BulkGet(ctx context.Context, keys []K) (map[K]V, error)
	// extension?
	Extension() LoadingExtension[K, V]
}

type LoadingExtension[K comparable, V any] interface {
	GetQuietly(ctx context.Context, key K) (V, error)
	GetEntry(ctx context.Context, key K) (Entry[K, V], error)
	GetEntryQuietly(ctx context.Context, key K) (Entry[K, V], error)
}

For me, the main problem with the API that originally came to mind is LoadingExtension, but it looks like it's just not needed. GetQuietly/GetEntryQuietly just read from the hash table; they don't need a loader. The loader is needed exclusively for GetEntry, but in principle it can be removed; I added it only as a bit of syntactic sugar. The other methods in Extension are very unlikely to require using the loader internally.

Separate LoadingCache vs Cache with Load methods

Perhaps the main dilemma.

It seems that both options have their pros and cons.

LoadingCache

Pros:

  • A more general architecture
  • It's easier to add AsyncCache (Promise/Future style)

Cons:

  • It is unclear how to create it if the user has not passed the loader in the builder (do we really need an explicit loader in the Load methods?). Of course, we can create it, but then the user will be required to pass the loader on each request.
  • It can become difficult to maintain and use when adding features.

In fact, I've never seen anything like Promise/Future used in Go (except for the very rare return of channels), but it would be cool to be able to do something like that. Then again, I haven't seen any issue about it in other caches either, but okay.

Cache with Load methods

Pros:

  • Easy to implement and maintain

Cons:

  • It is unclear how to add AsyncCache

Options

Could we add options to the various methods instead of explicitly passing a loader (this would also make it easy to add more features later)? This is a canonical pattern that is used in many projects and is well known in Go. Also, the additional allocation for options can be avoided using sync.Pool.

type Option[K comparable, V any] func(*options[K, V])

type Cache[K comparable, V any] interface {
    Get(key K, opts ...Option[K, V]) (V, bool)
    Load(ctx context.Context, key K, opts ...Option[K, V]) (V, error)
}
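
A minimal sketch of the sync.Pool idea; since package-level variables can't be generic, the pool has to live on the cache instance (all internals below are invented):

type options[K comparable, V any] struct {
	isQuietly bool
	loader    func(ctx context.Context, key K) (V, error)
}

type cache[K comparable, V any] struct {
	optsPool sync.Pool // holds *options[K, V]
}

func newCache[K comparable, V any]() *cache[K, V] {
	c := &cache[K, V]{}
	c.optsPool.New = func() any { return new(options[K, V]) }
	return c
}

func (c *cache[K, V]) Get(key K, opts ...Option[K, V]) (V, bool) {
	if len(opts) == 0 {
		return c.get(key, nil) // fast path: no options, no pool round-trip
	}
	o := c.optsPool.Get().(*options[K, V])
	*o = options[K, V]{} // reset before applying the options
	for _, opt := range opts {
		opt(o)
	}
	value, ok := c.get(key, o)
	c.optsPool.Put(o)
	return value, ok
}

func (c *cache[K, V]) get(key K, o *options[K, V]) (V, bool) {
	// the real lookup is elided
	var zero V
	return zero, false
}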

Explicit TTL

It is unclear what to do with this...

What is the best way to allow an explicit ttl to be set for each entry? The main problem is that otter needs to know at creation time that the ttl may differ per entry, while the user still has no function for calculating it. We can add this to the Extension or just take it from the options, but how should the user create the cache in that case?

Builder vs Options

In Go, people use options more often than builders, although I don't like how generic options look. How do you like these two alternatives?

cache, err := otter.NewCache[int, int](
	WithMaximumSize[int, int](1000),
	WithExpireAfterWrite[int, int](time.Hour),
	WithCollectStats[int, int](),
)

vs

cache, err := otter.NewBuilder[int, int]().
	MaximumSize(1000).
	ExpireAfterWrite(time.Hour).
	CollectStats().
	Build()

I still think the builder is a little more beautiful.

@maypok86
Owner Author

In the case of LoadingCache, I'm still leaning towards something like this.

cache, err := otter.NewBuilder[int, int]().
	MaximumSize(1000).
	ExpireAfterWrite(time.Hour).
	CollectStats().
	Loader(func(ctx context.Context, key K) (V, error) {
		...
	}).
	BulkLoader(func(ctx context.Context, keys []K) (map[K]V, error) {
		...
	}).
	BuildLoading()

@ben-manes

ben-manes commented Mar 28, 2024

JCache is an alternative standardized Java API which I don't like, but it might be a helpful reference. They have a more powerful loader called EntryProcessor which computes around the entry for CRUD and metadata changes. Here's a tutorial from one of the implementers. They allow the cache to be built with a loader and writer, so the API is just get and put methods. It makes the caller unaware of whether the data was fetched (does null mean absent locally or in the system of record?). They also view a cache as very distinct from a map, whereas I think devs know their core collection interfaces very well, so it's intuitive to extend that concept. Anyway, API design is all opinion and maybe you'll prefer theirs.

@maypok86
Owner Author

maypok86 commented Apr 1, 2024

JCache is an alternative standardized Java API

To be honest, it looks overloaded.

Actually, I quite like the options alternative, but I'm still not completely sure about this decision and want to hear someone else's opinion.

Of course, this approach has drawbacks, but in my opinion it looks quite good. The only thing that bothers me a little is the use of sync.Pool for Get operations, but

  1. Options are very rarely required.
  2. There is often a fast path.
  3. sync.Pool is thread-local storage, after all, so there should not be a significant drop in throughput (it should definitely stay above 75 million qps, but this needs to be checked).

Option

type GetOption[K comparable, V any] func(*getOptions[K, V])

func WithRecordStats[K comparable, V any](recordStats bool) GetOption[K, V] {
	return func(o *getOptions[K, V]) {
		o.recordStats = recordStats
	}
}

func WithQuietly[K comparable, V any]() GetOption[K, V] {
	return func(o *getOptions[K, V]) {
		o.isQuietly = true
	}
}

func WithLoader[K comparable, V any](loader func(ctx context.Context, key K) (V, error)) GetOption[K, V] {
	return func(o *getOptions[K, V]) {
		o.loader = loader
	}
}

type SetOption[K comparable, V any] func(*setOptions[K, V])

func WithTTL[K comparable, V any](duration time.Duration) SetOption[K, V] {
	return func(so *setOptions[K, V]) {
		so.ttl = duration
	}
}

Cache

type basicCache[K comparable, V any] interface {
	Set(key K, value V, opts ...SetOption[K, V])
	SetIfAbsent(key K, value V, opts ...SetOption[K, V]) (V, bool)
	Delete(key K)
	DeleteByFunc(f func(key K, value V) bool)
	Range(f func(key K, value V) bool)
	Clear()
	Size() int
	Stats() Stats
}

type Cache[K comparable, V any] interface {
	basicCache[K, V]
	Has(key K, opts ...GetOption[K, V]) bool
	Get(key K, opts ...GetOption[K, V]) (V, bool)
	GetEntry(key K, opts ...GetOption[K, V]) (Entry[K, V], bool)
	BulkGet(keys []K, opts ...GetOption[K, V]) (map[K]V, bool)
}

type LoadingCache[K comparable, V any] interface {
	basicCache[K, V]
	Get(ctx context.Context, key K, opts ...GetOption[K, V]) (V, error)
	GetEntry(ctx context.Context, key K, opts ...GetOption[K, V]) (Entry[K, V], error)
	BulkGet(ctx context.Context, keys []K, opts ...GetOption[K, V]) (map[K]V, error)
}

Builder

type Builder[K comparable, V any] interface {
	MaximumSize(maximumSize int) Builder[K, V]
	MaximumWeight(maximumWeight int) Builder[K, V]
	CollectStats() Builder[K, V]
	InitialCapacity(initialCapacity int) Builder[K, V]
	Weigher(weigher func(key K, value V) uint32) Builder[K, V]
	DeletionListener(deletionListener func(key K, value V, cause DeletionCause)) Builder[K, V]
	ExpireAfterCreate(duration time.Duration) Builder[K, V]
	ExplicitExpireAfterCreate() Builder[K, V]
	ExpireAfterWrite(duration time.Duration) Builder[K, V]
	ExplicitExpireAfterWrite() Builder[K, V]
	ExpireAfterAccess(duration time.Duration) Builder[K, V]
	ExplicitExpireAfterAccess() Builder[K, V]
	Loader(loader func(ctx context.Context, key K) (V, error)) Builder[K, V]
	BulkLoader(bulkLoader func(ctx context.Context, keys []K) (map[K]V, error)) Builder[K, V]
	Build() (Cache[K, V], error)
	BuildLoading() (LoadingCache[K, V], error)
}

Examples

// get quietly
value, ok := cache.Get(key, otter.WithQuietly[K, V]())

// explicit ttl
cache, err := otter.NewBuilder[int, int]().
	MaximumSize(1000).
	ExplicitExpireAfterWrite().
	CollectStats().
	Build()
...
cache.Set(key, value, otter.WithTTL[int, int](time.Hour))

// loader
loader := ...

cache.Get(ctx, key, otter.WithLoader(loader))

@rueian

rueian commented Apr 4, 2024

The main problem is that otter needs to know at creation time that the ttl may differ per entry, while the user still has no function for calculating it. We can add this to the Extension or just take it from the options

How about letting the loader return TTL as well? I think, in many cases, TTL can only be known when loading the entry.

Loader(loader func(ctx context.Context, key K) (V, time.Duration, error)) Builder[K, V]
BulkLoader(bulkLoader func(ctx context.Context, keys []K) (map[K]WithTTL[V], error)) Builder[K, V]
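
The WithTTL wrapper in the bulk signature isn't defined above; presumably it would be a simple pair, e.g.:

type WithTTL[V any] struct {
	Value V
	TTL   time.Duration
}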

@maypok86
Owner Author

maypok86 commented Apr 4, 2024

Oh, I can hardly imagine cases in which the ttl is not known in advance. It seems that in such cases no network requests are needed, and a function like this in the builder is enough:

type Builder[K comparable, V any] interface {
    ExpireAfterCreateFunc(f func(key K, value V) time.Duration) Builder[K, V]
}

And when the ttl is returned from the loader, the user is unlikely to understand what to return as the ttl when it isn't used.

@maypok86
Owner Author

maypok86 commented Apr 9, 2024

Okay, it seems that such an API should be good enough, so I can start developing. :)

@element-of-surprise

Was just reading this thread and saw that you want to go with the "Go options" style but don't want to pay the cost in allocations.

You can allocate the options on the stack in basically the same manner and save the allocation. Might fit your use case, might not, but threw up an example for you:
https://go.dev/play/p/_CuzLs19qR-

Happy coding.

@maypok86
Owner Author

maypok86 commented Apr 20, 2024

Haha, it really works. I had tried allocating on the stack and passing a pointer, but it leaked onto the heap, and I didn't try to override the escape-analysis decision somehow.

Thank you very much! I have almost no free time for otter right now, but I hope there will be more of it later and I will release v2 anyway. :(

@element-of-surprise

A trick I came up with a while ago. I was working on something where I'd spent so much time getting rid of allocations that allocating the options seemed a bad idea. And I get the time thing, I have a lot of that problem. Cheers!

@ben-manes

ben-manes commented May 26, 2024

sturdyc might be a helpful reference. It has loading, request coalescing ("buffering"), fallback caching ("passthrough"), and negative caching ("non-existent"), but lacks good eviction/expiration. Their internal inFlight is a future and the missing record is an optional, which is how caffeine lets callers do these. Since that's not idiomatic in Go, their API does seem nice (to a non-Go developer).

@maypok86
Owner Author

maypok86 commented Aug 25, 2024

Thoughts about Otter v2

Disclaimer: I'm sorry for such a long message. I have noted questionable (relatively) points with ⚠️. Comments will be very helpful.

Basic concepts and features

I would like to achieve the following goals in v2:

  • Make the API as intuitive as possible.
  • Try to remove areas that generate increased cognitive load on the user, or at least reduce that load.
  • Make simple things as simple as possible and allow complex things to be done.
  • Hide advanced functionality to keep the main API minimalistic.
  • Add tools for custom collection of cache statistics.
  • Add tools to protect against cache stampedes (loading, refreshing).

Builder vs Functional options

Functional options are much more common in Go than builders and are considered more canonical by many people, but I absolutely don't like how they look with generics.

Functional options

cache, err := otter.New[string, string](100,
    WithCollectStats[string, string](),
    WithTTL[string, string](time.Hour),
    WithInitialCapacity[string, string](100),
)

vs

Builder

cache, err := otter.MustBuilder[string, string](100).
    CollectStats().
    TTL(time.Hour).
    InitialCapacity(100).
    Build()

So for now, I've still decided to use the builder to construct the cache. ⚠️

Capacity

Currently the user is obliged to specify the cache capacity in the builder immediately. After that, they can also specify a function for calculating the cost of an item; the sum of all costs must be <= the capacity specified in the builder. By default, the cost of any item is 1.

Now the API looks like this.

cache, err := otter.MustBuilder[string, string](int(10_000_000)).
    Cost(func(key string, value string) uint32 {
        return uint32(len(key) + len(value))
    }).
    InitialCapacity(int(1000)).
    Build()

It should become something like this.

A cache that specifies the capacity in the number of items.

cache, err := otter.NewBuilder[string, string]().
    MaximumSize(uint64(10_000)).
    Build()

A cache that specifies the capacity in the maximum sum of entry weights.

cache, err := otter.NewBuilder[string, string]().
    MaximumWeight(uint64(10_000_000)).
    Weigher(func(key string, value string) uint32 {
        return uint32(len(key) + len(value))
    }).
    InitialCapacity(int(1000)).
    Build()

  • Rename cost to weight, since cost is often used in the context of the latency / miss penalty.
  • Separate eviction based on the number of items from eviction based on weight, since users often specify a cost function that always returns 1. Because of this, otter cannot fully optimize the node size. It will also allow otter to create a cache without an eviction policy.
  • Change the type of maximumSize and maximumWeight from int to uint64 for greater safety. Unfortunately, I'm not completely sure it's worth doing. ⚠️
  • Keep initialCapacity as an int (it is used for preallocating the hash table, etc.). ⚠️
  • I don't like that maximumSize and initialCapacity look different even though they are very similar, but maybe there is nothing wrong with that. ⚠️

DeletionListener

The user may want to do something when an entry is deleted. Right now it's implemented like this:

// DeletionCause the cause why a cached entry was deleted.
type DeletionCause uint8

const (
    // Explicit the entry was manually deleted by the user.
    Explicit DeletionCause = iota
    // Replaced the entry itself was not actually deleted, but its value was replaced by the user.
    Replaced
    // Size the entry was evicted due to size constraints.
    Size
    // Expired the entry's expiration timestamp has passed.
    Expired
)

type Builder[K comparable, V any] interface {
    DeletionListener(deletionListener func(key K, value V, cause DeletionCause)) Builder[K, V]
}

I like that such an API saves the user from a huge number of callbacks, but the user has to carefully choose the causes they need. From what I've seen, in most cases the user is only interested in "was the entry deleted automatically or not?". So it might be worth adding a method like this:

func (dc DeletionCause) WasEvicted() bool {
    switch dc {
    case Size, Expired:
        return true
    default:
        return false
    }
}

Stats

Currently, the user cannot specify a custom implementation of the statistics collector in the builder; they can only get a snapshot of the statistics from the cache.

type Stats interface {
    Hits() int64
    Misses() int64
    Ratio() float64
    RejectedSets() int64
    EvictedCount() int64
    EvictedWeight() int64
}

...

cache, err := otter.MustBuilder[string, string](10_000).
    CollectStats().
    Build()

stats := cache.Stats()

I think this API option is good.

type Stats interface {
    Hits() uint64
    Misses() uint64
    HitRatio() float64
    MissRatio() float64
    RejectedSets() uint64
    Evictions() uint64
    EvictionWeight() uint64
}

// general contract
type StatsCollector interface {
    CollectHits(count int)
    CollectMisses(count int)
    CollectEviction(weight uint32, cause DeletionCause)
    Snapshot() Stats
}

// optional
type LoadingStatsCollector interface {
    CollectLoadSuccess(loadDuration time.Duration)
    CollectLoadFailure(loadDuration time.Duration)
}

type Builder[K comparable, V any] interface {
    // if no collector is specified, we use the default one.
    // if one collector is specified, we use it.
    // if more than one collector is specified, we return an error.
    CollectStats(statsCollector ...StatsCollector) Builder[K, V]
}

...

// basic usage example
cache, err := otter.NewBuilder[string, string]().
    MaximumSize(10_000).
    // enable the stats collector with concurrent counters
    CollectStats().
    Build()

// with custom collector
prometheusAdapter := ...
cache, err := otter.NewBuilder[string, string]().
    MaximumSize(10_000).
    CollectStats(prometheusAdapter).
    Build()

It may be worth allowing multiple collectors to be specified, or not requiring Snapshot to be implemented... ⚠️

Though I really like this API variant, which is more obvious to the user:

type Builder[K comparable, V any] interface {
    CollectStats(statsCollector StatsCollector) Builder[K, V]
}

// usage example
cache, err := otter.NewBuilder[string, string]().
    MaximumSize(10_000).
    CollectStats(otter.NewStatsCounter()).
    Build()

Close

Ideally, I would like the experience of using an in-memory on-heap cache to match using a hash map as closely as possible, but since otter uses background goroutines, it has to have a Close method in its API.

It would be great to get rid of this method, but the options for achieving this are very unpleasant.

  • Otter won't be able to use runtime.SetFinalizer, as the finalizer will simply never run due to the references held by the background goroutines.
  • We can try to get rid of background goroutines and switch to finite-state-machine-based scheduling, but I don't want to suffer that much just because of Close :).

So I guess I'll leave Close in place for now, and then maybe delete its implementation and mark the method as deprecated. ⚠️

Expiration

Almost all cache libraries in Go allow you to specify TTL for entries, but all have problems:

  • Many don't have an expiration policy at all and delete expired entries only when they are accessed. (Not the otter case)
  • It's unclear exactly how the TTL is reset. For example, some libraries don't update the TTL on updates, which may not be obvious to the user. Or, when tokens expire, you usually need to refresh the TTL even after reading a token from the cache. (otter can update the TTL only after a write)

It is proposed to switch from the term TTL to ExpireAfter and add methods like:

type Builder[K comparable, V any] interface {
    ExpireAfterCreate(duration time.Duration) Builder[K, V]
    // it is useful if the ttl is specified in the http header
    // or to simulate a probabilistic early expiration.
    ExpireAfterCreateFunc(f func(key K, value V) time.Duration) Builder[K, V]
    // write = create + update
    ExpireAfterWrite(duration time.Duration) Builder[K, V]
    ExpireAfterWriteFunc(f func(key K, value V) time.Duration) Builder[K, V]
    // access = read + write
    ExpireAfterAccess(duration time.Duration) Builder[K, V]
    ExpireAfterAccessFunc(f func(key K, value V) time.Duration) Builder[K, V]
}

The user can select only one of the expiration methods.

In the first iteration, the plan is to add expiration only for create, write and access, but I'm very concerned that users may then need separate variants for read and `update`. As a result, the expiration API in the builder would turn into an absolute mess.

We can try to do something like this based on chaining.

type Builder[K comparable, V any] interface {
	ExpireAfter() ExpireAfterBuilder[K, V]
}

type ExpireAfterBuilder[K comparable, V any] interface {
	Create(duration time.Duration) Builder[K, V]
	CreateFunc(f func(key K, value V) time.Duration) Builder[K, V]
	Write(duration time.Duration) Builder[K, V]
	WriteFunc(f func(key K, value V) time.Duration) Builder[K, V]
	Access(duration time.Duration) Builder[K, V]
	AccessFunc(f func(key K, value V) time.Duration) Builder[K, V]
	Read(duration time.Duration) Builder[K, V]
	ReadFunc(f func(key K, value V) time.Duration) Builder[K, V]
	Update(duration time.Duration) Builder[K, V]
	UpdateFunc(f func(key K, value V) time.Duration) Builder[K, V]
}

But I have a strong desire to always force users to pass a function ⚠️

type Builder[K comparable, V any] interface {
	ExpireAfter() ExpireAfterBuilder[K, V]
}

type ExpireAfterBuilder[K comparable, V any] interface {
	Create(f func(key K, value V) time.Duration) Builder[K, V]
	Write(f func(key K, value V) time.Duration) Builder[K, V]
	Access(f func(key K, value V) time.Duration) Builder[K, V]
	Read(f func(key K, value V) time.Duration) Builder[K, V]
	Update(f func(key K, value V) time.Duration) Builder[K, V]
}

...

// usage example
cache, err := otter.NewBuilder[string, string]().
    MaximumWeight(10_000_000).
    Weigher(func(key string, value string) uint32 {
        return uint32(len(key) + len(value))
    }).
    CollectStats(prometheusAdapter).
    ExpireAfter().Write(func(key string, value string) time.Duration {
        return time.Hour
    }).
    Build()

P.S. We could still do something like this. Maybe it's even better than the rest; anyway, I like it more :).

type ExpiryStrategy uint32

const (
    ExpireAfterCreate ExpiryStrategy = iota
    ExpireAfterWrite
    ExpireAfterAccess
    ExpireAfterRead
    ExpireAfterUpdate
)

type Builder[K comparable, V any] interface {
    Expire(strategy ExpiryStrategy, duration time.Duration) Builder[K, V]
    ExpireFunc(strategy ExpiryStrategy, f func(key K, value V) time.Duration) Builder[K, V]
}
...

// usage example
cache, err := otter.NewBuilder[string, string]().
    MaximumSize(10_000).
    Expire(otter.ExpireAfterWrite, time.Hour).
    Build()

Loader

The main innovation of v2 should be the ability to specify a loader, which will allow the user to fetch an entry from a slow resource only once instead of sending several identical requests. This usually greatly improves tail latency.

There are several libraries in Go that let the user specify loaders, but only for a single entry, while requests are usually made in batches and the user wants to receive the whole batch at once.

This is what a typical loader that libraries ask for looks like:

func(key K) (V, error) {

}

Some notes about the wishlist:

  • The loader must take a context to carry various additional information about the original request.
  • I would like to allow the user to implement only one of the loaders (single or bulk).
  • Most likely, we will need to be able to optionally implement additional loader methods in the future (reload to reload, refresh to asynchronously update an entry).
  • I really wanted to try deriving one from the other (one = bulk([]K{key}), bulk = sequential (or goroutine-pool) calls to one(keys[i])), but it seems better to always require the user to implement the loader. No one knows better than the user how to fetch the data (it can be especially painful to try to guess the bulk variant).
  • Sometimes the user will want to use a closure for the loader, since they won't know all the data when initializing the cache.
  • Based on this, I decided that the loader should be specified in the cache method, and that the loader should be an interface. We can also add auxiliary types to simplify the user's life.
package otter

import "context"

type Loader[K comparable, V any] interface {
    Load(ctx context.Context, key K) (V, error)
}

type LoaderFunc[K comparable, V any] func(ctx context.Context, key K) (V, error)

func (lf LoaderFunc[K, V]) Load(ctx context.Context, key K) (V, error) {
    return lf(ctx, key)
}

type BulkLoader[K comparable, V any] interface {
    BulkLoad(ctx context.Context, keys []K) (map[K]V, error)
}

type BulkLoaderFunc[K comparable, V any] func(ctx context.Context, keys []K) (map[K]V, error)

func (blf BulkLoaderFunc[K, V]) BulkLoad(ctx context.Context, keys []K) (map[K]V, error) {
    return blf(ctx, keys)
}

type Cache[K comparable, V any] interface {
    Load(ctx context.Context, loader Loader[K, V], key K) (V, error)
    BulkLoad(ctx context.Context, bulkLoader BulkLoader[K, V], keys []K) (map[K]V, error)
}
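
Hypothetical usage of this draft, with db standing in for some external source:

loader := LoaderFunc[string, string](func(ctx context.Context, key string) (string, error) {
	return db.Get(ctx, key)
})
value, err := cache.Load(ctx, loader, "user:42")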

It's expected that the loader will be implemented once during cache initialization, but the user still has the ability to change the behavior using closures.

P.S. I still decided to abandon implementing the loader via Compute. Yes, the cache becomes non-linearizable, but an in-memory cache has many other delays that should hide this. I will only need to handle the race between a load and a deletion; it seems that this is the only case that can bother users. Interestingly, no one in Go is doing this now, and I haven't heard any complaints.

// goroutine 1
loader := func(ctx context.Context, key string) (string, error) {
    return db.Get(ctx, key)
}
value, err := cache.Load(ctx, loader, key1)

...

// goroutine 2
db.Update(key1)
cache.Delete(key1)

It's expected that there will be no entry for key1 in the cache afterwards, but with a naive implementation using singleflight there is a high probability that the stale entry will remain in the cache.

Logger

When calling the loader, and in other future features, the user will sometimes need to log various errors, so a contract for the logger is needed. It is supposed to follow the slog API, but with a context; the user can simply ignore the context when implementing the logger.

type Logger interface {
    Warn(ctx context.Context, msg string, args ...any)
    Error(ctx context.Context, msg string, args ...any)
}

type Builder[K comparable, V any] interface {
    Logger(logger Logger) Builder[K, V]
}

Which logger should I use by default? I suspect that it's better to make at least one based on slog. ⚠️
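
A minimal default could be a thin adapter over log/slog (its WarnContext/ErrorContext methods exist since Go 1.21); the type name below is hypothetical:

type slogLogger struct {
	l *slog.Logger
}

func (s slogLogger) Warn(ctx context.Context, msg string, args ...any) {
	s.l.WarnContext(ctx, msg, args...)
}

func (s slogLogger) Error(ctx context.Context, msg string, args ...any) {
	s.l.ErrorContext(ctx, msg, args...)
}

func defaultLogger() Logger {
	return slogLogger{l: slog.Default()}
}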

Extension && Map

Sometimes the user cannot solve their problem using the available methods. In this case, Extension and Map should come to their aid.

  • Extension is a set of hack functions that allow the user to get low-level access to cache functions (eviction policies, expiration, etc.).
  • Map is a set of functions that allows the user to use the cache as a map.

Also, these structures will allow us to hide a huge set of functions from users; otherwise the cache API would become simply huge.

So far, the plan is something like this.

type Cache[K comparable, V any] interface {
    AsMap() Map[K, V]
    Extension() Extension[K, V]
}

type Map[K comparable, V any] interface {
    SetIfAbsent(key K, value V) (V, bool)
    DeleteFunc(f func(key K, value V) bool)
    Range(f func(key K, value V) bool)
}

type Extension[K comparable, V any] interface {
    GetEntry(key K) (Entry[K, V], bool)
    GetQuietly(key K) (V, bool)
    GetEntryQuietly(key K) (Entry[K, V], bool)
    Expiry() ExpiryExtension[K, V]
    Eviction() EvictionExtension[K, V]
}

// immutable
type Entry[K comparable, V any] interface {
    Key() K
    Value() V
    Weight() uint32
    ExpiresAt() int64
    ExpiresAfter() time.Duration
    HasExpired() bool
}

type ExpiryExtension[K comparable, V any] interface {
    // It's useful if the user cannot use the function to dynamically calculate the TTL.
    Set(key K, value V, duration time.Duration) (V, bool)
    SetIfAbsent(key K, value V, duration time.Duration) (V, bool)
    // it should help those who aren't satisfied with the API provided by loader.
    SetExpiresAfter(key K, duration time.Duration) bool
}

type EvictionExtension[K comparable, V any] interface {
    GetCapacity() uint64
    SetCapacity(capacity uint64) bool
}

Packages

If I split some of the code into small (sometimes even too small) packages, it seems the API can be simplified a bit, but I'm not sure many people will like it.

Instead of otter.DeletionCause we get deletion.Cause.
otter/deletion/cause.go

package deletion

// Cause the cause why a cached entry was deleted.
type Cause uint8

const (
    // Explicit the entry was manually deleted by the user.
    Explicit Cause = iota + 1
    // Replaced the entry itself was not actually deleted, but its value was replaced by the user.
    Replaced
    // Size the entry was evicted due to size constraints.
    Size
    // Expired the entry's expiration timestamp has passed.
    Expired
)

func (c Cause) String() string {
    switch c {
    case Explicit:
        return "Explicit"
    case Replaced:
        return "Replaced"
    case Size:
        return "Size"
    case Expired:
        return "Expired"
    default:
        panic("unknown deletion cause")
    }
}

func (c Cause) WasEvicted() bool {
    switch c {
    case Size, Expired:
        return true
    default:
        return false
    }
}

Instead of otter.StatsCollector and otter.NewStatsCounter() we get stats.Collector and stats.NewCounter().

otter/stats/collector.go

package stats

import (
    "time"

    "github.com/maypok86/otter/v2/deletion"
)

type Collector interface {
    CollectHits(count int)
    CollectMisses(count int)
    CollectEviction(weight uint32, cause deletion.Cause)
    Snapshot() Stats
}

type LoadCollector interface {
    CollectLoadSuccess(loadDuration time.Duration)
    CollectLoadFailure(loadDuration time.Duration)
}

type Stats struct {
    ...
}

type counter struct {
    ...
}

func NewCounter() Collector {

}

Instead of otter.ExpireAfterWrite we get expiry.AfterWrite.
otter/expiry/strategy.go

package expiry

type Strategy uint8

const (
    AfterCreate Strategy = iota + 1
    AfterWrite
    AfterAccess
    AfterRead
    AfterUpdate
)

Then we can get a pretty nice API.

package main

import (
    "context"
    "fmt"
    "time"
    "unsafe"

    "github.com/maypok86/otter/v2"
    "github.com/maypok86/otter/v2/deletion"
    "github.com/maypok86/otter/v2/expiry"
    "github.com/maypok86/otter/v2/stats"
)

func main() {
    ctx := context.Background()
    loader, err := ...
    logger, err := ...
    cache, err := otter.NewBuilder[int, int]().
        MaximumWeight(100 * 1024 * 1024).
        Weigher(func(key int, value int) uint32 {
            return uint32(unsafe.Sizeof(key) + unsafe.Sizeof(value))
        }).
        DeletionListener(func(key int, value int, cause deletion.Cause) {
            fmt.Println(key, value, cause)
        }).
        Expire(expiry.AfterAccess, time.Hour).
        CollectStats(stats.NewCounter()).
        Logger(logger).
        Build()

    key := 1
    value, err := cache.Load(ctx, loader, key)
    cache.Extension().Expiry().Set(key, value, time.Minute)
    cache.AsMap().SetIfAbsent(key, value)
    cache.AsMap().DeleteFunc(func(key int, value int) bool {
        return true
    })
}

Or a simpler example.

package main

import (
    "time"

    "github.com/maypok86/otter/v2"
    "github.com/maypok86/otter/v2/expiry"
    "github.com/maypok86/otter/v2/stats"
)

func main() {
    cache, err := otter.NewBuilder[int, int]().
        MaximumSize(10_000).
        Expire(expiry.AfterAccess, time.Hour).
        CollectStats(stats.NewCounter()).
        Build()

    key := 1
    value := 2
    cache.Set(key, value)
    value, ok := cache.Get(key)
    cache.AsMap().DeleteFunc(func(key int, value int) bool {
        return true
    })
}

It seems that the only question is the increased number of imports. ⚠️

@maypok86
Owner Author

maypok86 commented Aug 25, 2024

The most embarrassing dilemmas for me are

  1. Naming for MaximumSize, MaximumWeight and InitialCapacity. This is unlikely to be a big problem, but the naming confuses me and I can't think of anything better.
  2. AsMap is perhaps perfect naming for the Java world (I honestly took the name from caffeine), but there are no map interfaces in Go and large interfaces are not welcome. Perhaps we can try to move the Map methods into Extension.
  3. Extracting small packages. I can agree that stats should be put in a separate package, but this creates cyclic dependencies because of DeletionCause; as a result, such code simply doesn't compile, and I would also need to move DeletionCause out. expiry is generally an artificial package.
  4. The expiration API in the builder, but I hope for comments. :)

P.S. Refreshing will most likely go into the version after v2. I'll post the sketches later, but I will need comments from Ben (there may be problems with the implementation and use cases) and from someone in the Go world, because there is a chance of touching on ideological things in the API.

@ben-manes

ben-manes commented Aug 25, 2024

Sorry I can't give much advice beyond saying your general direction looks good. I always review the Bumper-Sticker API Design rules before cementing a public API, hoping that I got more right than wrong. Writing unit tests helps me use the APIs and feel their pain, though sometimes just wanting to get it done has caused a few small mistakes where I should have known better.

  1. Those names were chosen not to conflict with Java's Collections, where size is the logical size and capacity is the physical size; weight was then clearer (given cost is ambiguous). Usually developers never touch a collection's capacity, so it causes minor confusion, but Java is consistent about the distinction throughout. Similarly, some think access means only read, but in LinkedHashMap it defines access order as any read or write (and insertion order as creation), so we tried to align our usage while trying to be succinct and obvious. If Go has consistent terminology in its core then that is a good default to align with, else whatever is least confusing. I don't put any emphasis on what other third-party library authors do; I only review their APIs for what I like or dislike.
  2. There is so much goodness in Java's Map, but it is very bulky. The interface's familiarity and broad usage make it a joy to use because it naturally integrates with everything and hides a lot from the core API. I suppose without a shared set of collection types, moving into extension methods makes sense. It's really what feels idiomatic in the language that counts.
  3. I'm not sure why DeletionCause has anything to do with stats? In Caffeine, splitting it into a different package wasn't beneficial, but Java doesn't support package objects like your API idea uses (though it does allow cyclical packages), so no opinion here.
  4. I prefer the builder methods, but for compatibility due to the late arrival of variable expiration I had to go with static factories, e.g. .expireAfter(Expiry.creating((key, value) -> duration)).
  5. Refresh is one of the more complex features since users want linearizability, asynchronous loads, to be stampeded on by explicit writes, implicit or explicit invocation, deduplication, and async/sync cache styles. There are a few valid use cases for a variable duration (but it is often asked for as a misuse), which requires a careful API design, and I've largely avoided working on that. Then more edge cases like the caller loading from the source with a fallback to the cache (an explicit refresh with future chaining to fall back to a cache read), and jitter / batching / retrials (I deferred to composing with other libraries). The implementation code is definitely not pretty or fun to work on...
  6. I really like the state machine scheduling, but then it was a necessity for Java that is a fair amount of work to add on to this list.
  7. Map computations are one of my favorite quality-of-life features for more advanced integrations. It makes it much easier to orchestrate concurrency and lifecycle code, both internally to the cache for its data and externally for users. This also has some benefits in terms of the listeners running inside/outside of the computation.
  8. Bulk loading is nice, but honestly it does not get used very often. It is also hard to support linearizability, bulk loading, and stampede protection in sync/async flavors. It requires giving something up per flavor, which could partially be recovered by deep changes to the hash table (but even that has questionable tradeoffs). I suspect that coalescing is actually a pretty good alternative for most who want bulk.
  9. I would not discount testing as a necessary project feature. I spent vastly more time on that because I couldn't keep everything in my head, and it's much easier to avoid regressions when the test suite brute-forces every scenario.

@element-of-surprise

element-of-surprise commented Aug 26, 2024

Builder vs Functional options:

Indeed, generics make that ugly. At first I got caught in the same mental trap you are in, because you are used to seeing it designed a certain way (as are many other people, in fact).

But that new form is only required "if and only if" you try to modify the generic type. You only need to add a level of abstraction to get back to clean options with almost zero cost.

So, instead of modifying your generic type A[K any, V any] struct, you modify an options struct that is held within "A". Then you no longer require instantiation of your options.

Here is an example: https://go.dev/play/p/BOAX-1a9rih

For the user, that should come out cleaner than the Builder method.

@maypok86
Owner Author

About refreshing and caching asynchronous computations.

Let's start with the fundamental problem first.

Future/Promise in Go

They simply don't exist in Go, and any attempts to write them on top of channels are usually pelted with tomatoes.

The reason is that channels already serve as futures for most tasks, and if you also remember context.Context, the point of a wrapper becomes even less clear.

For example, singleflight uses channels to return asynchronous computations.

Okay, then why not just keep the channels in the cache for this?

Because you can get data from the channel only once. After that, reads from the channel will be blocked or return a zero value if the channel is closed. That is, there is simply no point in caching channels and some kind of wrapper is needed.

Then what should the asynchronous computation type look like?

import (
    "context"
    "sync/atomic"
)

type Result[V any] struct {
    isDone atomic.Bool
    done   chan struct{}
    cancel context.CancelFunc
    value  V
    err    error
}

func NewResult[K comparable, V any](ctx context.Context, key K, call func(ctx context.Context, key K) (V, error)) *Result[V] {
    ctx, cancel := context.WithCancel(ctx)
    r := &Result[V]{
        done:   make(chan struct{}, 1),
        cancel: cancel,
    }
    go func() {
        r.value, r.err = call(ctx, key)
        if err := ctx.Err(); err != nil && r.err == nil {
            r.err = err
        }
        r.done <- struct{}{}
        close(r.done)
        r.isDone.Store(true)
    }()
    return r
}

// Done is needed so that Result can be used in select.
func (r *Result[V]) Done() <-chan struct{} {
    return r.done
}

// Get blocks until the computation of the result is complete.
func (r *Result[V]) Get() (V, error) {
    if r.isDone.Load() {
        return r.value, r.err
    }
    <-r.done
    return r.value, r.err
}

func (r *Result[V]) Value() V {
    value, _ := r.Get()
    return value
}

func (r *Result[V]) Error() error {
    _, err := r.Get()
    return err
}

// Cancel is needed so that the user can cancel the computation.
func (r *Result[V]) Cancel() {
    r.cancel()
}

As a result, we get quite a pleasant type.
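
For example, usage could look like this (a sketch; fetchUser and the User type are hypothetical, and ctx is assumed to be in scope):

res := NewResult(ctx, "user:42", func(ctx context.Context, key string) (User, error) {
    return fetchUser(ctx, key)
})

select {
case <-res.Done():
    user, err := res.Get() // does not block: the computation has already completed
    fmt.Println(user, err)
case <-time.After(100 * time.Millisecond):
    res.Cancel() // give up; the goroutine inside Result will observe the canceled context
}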

And now about what I don't like.

Thanks to the asynchronous computing type, we can ask the user to implement a custom AsyncReload when refreshing an entry.

type AsyncReloader[K comparable, V any] interface {
    AsyncReload(ctx context.Context, key K) (*Result[V], error)
}

type Reloader[K comparable, V any] interface {
    Reload(ctx context.Context, key K) (V, error)
}

Is the AsyncReloader really needed? Why is it not enough to just have a Reloader that will run in a separate goroutine?
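
After all, a synchronous Reloader can be adapted for free using the Result type above (a sketch; reloaderAdapter is a hypothetical name):

type reloaderAdapter[K comparable, V any] struct {
    reloader Reloader[K, V]
}

func (a reloaderAdapter[K, V]) AsyncReload(ctx context.Context, key K) (*Result[V], error) {
    // NewResult already runs the call in its own goroutine,
    // so a synchronous Reload satisfies the asynchronous contract for free.
    return NewResult(ctx, key, a.reloader.Reload), nil
}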

Also, as Ben pointed out, bulkLoader can be implemented based on AsyncCache, which will store asynchronous computations.

But I have big questions about such a cache in Go.

  1. You will always have to store the much larger wrapper values, even though they are not really needed most of the time. What I like about singleflight is that it stores asynchronous computations only while they are running.
  2. I do not know what AsyncCache[K, V] = Cache[K, *Result[V]] can be useful for in Go.
  3. I haven't found anything suitable for CoalescingBulkLoader, but it can be written quickly enough.

I actually don't like what caffeine does with compute to achieve linearizability for several reasons.

  1. If several requests get into the same bucket, it will most likely end with requests being canceled due to timeouts.
  2. Delays (not too bad) and increased memory consumption for BulkLoad.

Refreshing

In fact, Ben has already written almost everything about it.

I was only interested in options other than refreshAfterWrite so far. But I'll probably add RefreshAfterWriteFunc and wait for suggestions.

P.S. It seems that I came up with a linearizable implementation with Compute, without modifying the hash table and without calling the loader under the bucket lock, but it will be difficult to write.

@maypok86
Owner Author

But that new form is only required if you try to modify the generic type itself. You only need to add a level of abstraction to get back to clean options at almost zero cost.

Yes, but unfortunately, the options must also be generic.

type options[K comparable, V any] struct {
    maximumSize uint64
    maximumWeight uint64
    weigher func(key K, value V) uint32
    deletionListener func(key K, value V, cause DeletionCause)
    ...
}
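
And once options is generic, the option type and its constructors are generic too, which drags the type parameters out to every call site (hypothetical names):

type Option[K comparable, V any] func(*options[K, V])

func WithMaximumSize[K comparable, V any](maximumSize uint64) Option[K, V] {
    return func(o *options[K, V]) { o.maximumSize = maximumSize }
}

// Nothing in a plain uint64 argument lets the compiler infer K and V,
// so the caller has to instantiate every such option explicitly:
// c := New(WithMaximumSize[string, string](10_000), ...)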

I'm not sure why DeletionCause has anything to do with stats?

Because StatsCollector depends on DeletionCause, but maybe that's not necessary.

type StatsCollector interface {
    CollectHits(count int)
    CollectMisses(count int)
    CollectEviction(weight uint32, cause DeletionCause)
    Snapshot() Stats
}

@element-of-surprise

element-of-surprise commented Aug 26, 2024

You can still do it without instantiating options as generic, but it does mean you have to use any and do a type check further down. So you are trading a compile-time check for a runtime check. In practice, I've done this before with zero ill effects.

Basically, deletionListener becomes any and you pop the generic type parameters off. After the options are executed, you type-check deletionListener against the type you are expecting; if it matches, you assign the concrete value to the deletionListener field on your parent type.

https://go.dev/play/p/Cl1l4qhPxWR
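
Presumably that playground boils down to something like this (a sketch, assuming the DeletionCause type from the earlier snippets):

import "errors"

type Cache[K comparable, V any] struct {
    deletionListener func(key K, value V, cause DeletionCause)
}

type options struct {
    deletionListener any // erased: holds func(K, V, DeletionCause) for some K, V
}

type Option func(*options)

func WithDeletionListener[K comparable, V any](listener func(key K, value V, cause DeletionCause)) Option {
    return func(o *options) { o.deletionListener = listener }
}

func New[K comparable, V any](opts ...Option) (*Cache[K, V], error) {
    var o options
    for _, opt := range opts {
        opt(&o)
    }
    c := &Cache[K, V]{}
    if o.deletionListener != nil {
        listener, ok := o.deletionListener.(func(key K, value V, cause DeletionCause))
        if !ok {
            // the suspended compile-time check happens here, at runtime
            return nil, errors.New("deletion listener does not match the cache's key/value types")
        }
        c.deletionListener = listener
    }
    return c, nil
}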

@ben-manes

Is the AsyncReloader really needed? Why is it not enough to just have a Reloader that will run in a separate goroutine?

Caffeine calls asyncReload, which wraps the reload method with a future and defaults to calling the load that must be defined. We don't want just a synchronous reload method, in case the user only has a future in their code; then they would block just to return a value and waste an OS thread. Otherwise it's a convenience that lets users write the simplest variant of the code while the cache always gets an async version regardless. The use of goroutines instead of OS threads could make this moot.

You will always have to store the much larger wrapper values, even though they are not really needed most of the time.

In a refresh scenario on a synchronous cache, the future value is held in a secondary map while in-flight and upon completion it updates the cache if still valid.

In an AsyncCache it holds the future as the mapped value, since the returned entry might be canceled, the future is used in atomic asMap operations (where equality is by identity), and generating a new future to wrap the completed value on every read would add GC overhead, among likely many other quirks to juggle. I suspect most AsyncCache instances are fairly small and larger caches use the synchronous one, simply by the nature of their relative use-cases, and that the synchronous Cache is much more common.

ConcurrentHashMap and Guava's Cache both manage the underlying hash table and use an intermediate entry type while the load is in-flight (LoadingValueReference, ReservationNode). The JDK's is synchronized, so it blocks the caller and is linearizable, but it is rigidly only able to load a single entry. In Guava this is a proper future, which does imply a lack of linearization, but sadly it was not leveraged for proper bulk loading (side loads). Caffeine offers both flavors, where AsyncCache properly bulk-loads by inserting multiple placeholder values, which allows the user to decide on their linearizability expectations (by what constitutes the table mapping). Since it strictly decorates the JDK's map, there are no lower-level customizations that might somehow offer linearizable bulk synchronous loads, and neither forking/rewriting nor layering another concurrency orchestrator on top seemed worthwhile.

I actually don't like what caffeine does with compute to achieve linearizability for several reasons.... If several requests get into the same bucket

The author of ConcurrentHashMap had ideas for transitioning to per-item computes to avoid this bucket issue, but I don't think anything has or will be done there. I think that's because the number of buckets increases with capacity, so it is easy to reduce the chance of collisions; it would be complex rework; he is a busy department chair (and perhaps nearing retirement); and most users who hit that problem seem fine with losing linearization and instead loading within a future value. That's all to say that I get the benefit of blaming the JDK, so users don't expect me to deal with it myself.

Because StatsCollector depends on DeletionCause, but maybe that's not necessary.

Oh yeah! That was a user request from someone who wanted it in their metrics (for whatever reason). But they could also simulate it via the removal listener, so it was just a nice-to-have refinement (the older variants were deprecated and removed). Originally it was merely recordEviction(), where the weight was added while debugging why eviction halted for Apache Druid (we narrowed it down to a JDK bug requiring a minor version bump). Whether you bother with either of those is your call; it was no harm to have, but I'm unsure how useful they really are, tbh.

@maypok86
Owner Author

Okay, I need to try to implement LoadingCache. I don't see much point in further discussion about linearizability via Compute.

If anyone is interested: I really don't want to expose *Result[V] from the library, i.e. I want to do without AsyncCache and AsyncLoad/AsyncReload.

So far, I've started with changes to the statistics collection, and it would be interesting to hear opinions.

I would like to make collecting most of the statistics optional; that is, the builder will accept only a minimal Collector interface. Everything is fine here: the collector methods are quite powerful and allow users to implement whatever logic they need.

type Builder[K comparable, V any] interface {
    CollectStats(statsCollector Collector) Builder[K, V]
}

type Collector interface {
    CollectHits(count int)
    CollectMisses(count int)
    Snapshot() Stats
}

type EvictionCollector interface {
    CollectEviction(weight uint32)
}

type RejectedSetsCollector interface {
    CollectRejectedSets(count int)
}

type LoadCollector interface {
    CollectLoadSuccess(loadTime time.Duration)
    CollectLoadFailure(loadTime time.Duration)
}

func (c *Cache[K, V]) Stats() Stats {
    return c.collector.Snapshot()
}
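
Internally, the cache would presumably probe for the optional interfaces once at construction (my assumption, not settled code):

type collectors struct {
    base     Collector
    eviction EvictionCollector // nil when the user's collector doesn't implement it
}

func newCollectors(c Collector) collectors {
    cs := collectors{base: c}
    if ec, ok := c.(EvictionCollector); ok {
        cs.eviction = ec
    }
    return cs
}

// on the hot path this is just a nil check
func (cs collectors) collectEviction(weight uint32) {
    if cs.eviction != nil {
        cs.eviction.CollectEviction(weight)
    }
}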

Here's what confuses me.

Does the cache need the Stats method if otter will always require specifying an implementation (possibly the default one)? I don't think so. That means we would be able to remove Snapshot from the Collector interface as well.

I ask this because it is very likely that the user will not like most of the data in Stats. For loads (and maybe evictions), you will want to calculate quantiles and so on; a couple of counters won't be of much use.

P.S. So far, I've decided to use the builder. I don't want to lose type safety just for the sake of functional options.

@ben-manes

fwiw, my approach was to capture stats that only the cache knows or that are universally desirable, and then defer to the user to add their own instrumentation, e.g. for puts or explicit removes. Fewer stats mean less cpu/memory overhead and less bikeshedding, at the cost of a few disappointed users who dislike writing a few lines of extra code.

Guava's stats() method predated common metrics-reporting libraries, as those were homegrown at each company with varying opinions on push- vs pull-based accumulation. Google's philosophy was monotonic counters that were scraped from an http endpoint and aggregated remotely, and Guava was foremost for Google's usages, made available for others (primarily other OSS Google projects) and less influenced by what the broader community might want. I believe Prometheus was a direct clone / descendant of that monitoring system. So for them, registering a callback to poll the counts was more idiomatic than pushing them to intermediate counters that were polled from, but it predated today's metric api designs.

I don't know what the "default implementation" would do if there was no way to capture those metrics from stats(); wouldn't it always need to be custom to push elsewhere? I don't have a strong opinion about whether to keep or drop it, except that it does offer a minor convenience and reduces new users' confusion when they're hunting around without reading the docs.

@maypok86
Owner Author

I don't know what the "default implementation" would do if there was no way to capture those metrics from stats(); wouldn't it always need to be custom to push elsewhere?

I rather meant that the default implementation (a StatsCounter with multiple uint64 counters) will always implement all the collector interfaces, and will also have a Snapshot method just to get metrics for the pull-based approach/testing/lazy users. I just don't want to require a Snapshot implementation from a user who writes a custom collector, since the probability of satisfying them tends to zero. Or I could move Snapshot to a separate interface, and if it is not implemented, Cache.Stats() would just always return a Stats full of zeros.
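
The second variant could look like this (a sketch; Snapshotter is a hypothetical name, assuming the cache stores the user's collector as the minimal Collector):

type Snapshotter interface {
    Snapshot() Stats
}

func (c *Cache[K, V]) Stats() Stats {
    if s, ok := c.statsCollector.(Snapshotter); ok {
        return s.Snapshot()
    }
    return Stats{} // the collector opted out of snapshots
}

With the default implementation, usage would look something like this: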

import (
    "time"

    "github.com/maypok86/otter/v2"
    "github.com/maypok86/otter/v2/stats"
)

type AwesomeCache struct {
    cache        *otter.Cache[string, string]
    statsCounter *stats.Counter
}

func NewAwesomeCache(maximumSize uint64, ttl time.Duration) (*AwesomeCache, error) {
    statsCounter := stats.NewCounter()
    cache, err := otter.NewBuilder[string, string]().
        MaximumSize(maximumSize).
        ExpireAfterWrite(ttl).
        CollectStats(statsCounter).
        Build()
    if err != nil {
        return nil, err
    }

    return &AwesomeCache{
        cache: cache,
        statsCounter: statsCounter,
    }, nil
}

func (ac *AwesomeCache) Get(key string) (string, bool) {
    return ac.cache.Get(key)
}

func (ac *AwesomeCache) Stats() stats.Stats {
    return ac.statsCounter.Snapshot()
}

maypok86 added a commit that referenced this issue Aug 29, 2024
This should cause less confusion because 'cost' is often used in the context of latency/miss penalty when fetching an item.

See #74 (comment) for more details.
@proost

proost commented Aug 29, 2024

@maypok86 Thank you for your efforts. Overall looks good.

I'm not a native English speaker, so maybe I'm wrong. Would Full be more intuitive than Size in deletion.Cause? Because the other causes describe the state of an entry or the cache, but Size describes neither.

The current stats.Collector looks like a push model to me. So is stats.Collector called whenever some metric changes, or is it called periodically?

@maypok86
Owner Author

The current stats.Collector looks like a push model to me.

I couldn't think of a better name. Perhaps stats.Recorder would be better...

So is stats.Collector called whenever some metric changes, or is it called periodically?

Whenever some metric changes. Maybe an example of calls to the Collect* methods will help.

func (c *Cache[K, V]) Get(key K) (V, bool) {
    value, ok := c.hashmap.Get(key)
    if !ok {
        c.statsCollector.CollectMisses(1)
        return zeroValue[V](), false
    }

    c.statsCollector.CollectHits(1)
    return value, true
}

Would Full be more intuitive than Size in deletion.Cause?

I'm not a native speaker either, so I don't know :). I don't really like Explicit either (Replaced and Expired are clear). I once simply copied these names from caffeine because I didn't like the causes in theine and ristretto; the names of the causes in caffeine just covered more situations, and I knew I wouldn't touch them again before v2.

Ideally, the causes should be associated with a cache entry (Expired, Replaced, Rejected). In the case of Size and Explicit, you have to work out the meaning. I would happily rename Size to Evicted, but most people would take Evicted to include Expired as well.

I'm not sure it's worth nitpicking too much, because there are quite a few controversial places in the deletion API.

  1. Listener is rarely used in Go (really only in the net package); the term Handler is usual instead. So DeletionHandler would be cleaner than DeletionListener.
  2. It may be unpleasant to add new features to the current callback due to the three-argument signature (key, value, cause). It may never be necessary, but still. To fix it, we could change the contract to something like this:
type Builder[K comparable, V any] interface {
    DeletionHandler(deletionHandler func(e DeletionEvent[K, V])) Builder[K, V]
}

type DeletionEvent[K comparable, V any] struct {
    Key   K
    Value V
    Cause DeletionCause
}

func (e DeletionEvent[K, V]) WasEvicted() bool {
    return e.Cause.WasEvicted()
}
  3. In Go, enum constants are usually named differently: ideally, it would be CauseExpired or DeletionCauseExpired instead of Expired, and so on.
  4. The deletionHandler call occurs after the entry is deleted from the hash table, with a delay caused by the design of the eviction and expiration policies. The delay should be minimal now, but I can still imagine a situation in which the user would want the deletionHandler called during deletion from the hash table. And I do not know how to add this to the current API, or whether it is worth doing at all.
type Builder[K comparable, V any] interface {
    AtomicDeletionHandler(deletionHandler func(e DeletionEvent[K, V])) Builder[K, V]
}

// or change the names and do something like that

type Builder[K comparable, V any] interface {
    OnDeletion(deletionHandler func(e DeletionEvent[K, V])) Builder[K, V]
    OnAtomicDeletion(deletionHandler func(e DeletionEvent[K, V])) Builder[K, V]
}

@ben-manes

fwiw, here's the historical evolution,

It may be unpleasant to add new features to the current callback due to the three-argument signature

This is what we did in Guava's Cache, where it was the RemovalNotification type. That api was designed for Java 5, which had introduced generics, where we used a builder style that inferred the type from the chain to avoid repeating it at construction. However, when I rewrote the cache for Java 8, which introduced lambdas, that looked verbose by needing to be declared like removalListener((RemovalNotification<K, V> notification) -> logger.info(notification.getKey())). Since no one has ever asked for additional attributes, I decided to decompose it to the three arguments. I wanted it to be a drop-in upgrade, so I didn't seriously consider redoing the builder with explicit types upfront, and maybe I should have. The overall complexity, the lack of peers to iterate on the api with, it being a personal project... I was just happy with what came for free from past efforts and cut corners.

I can still imagine a situation in which the user would want the deletionHandler called during deletion from the hash table. And I do not know how to add this to the current API, or whether it is worth doing at all.

Caffeine's AsyncCache made this tricky because it could be an explicit removal of an in-flight future value. It doesn't make sense to block waiting for the future to complete, nor is it desirable to disallow that listener combination. Since an in-flight future is not evictable, I narrowed the atomic case down to evictionListener for those automatic removals and relied on computations for manual removals, so it was a user problem if relevant, and it made their code's behavior more explicit. The only flaw is that the builder's naming is not obvious about the atomic nature, which was a small penalty from api evolution; in Guava we didn't support atomic callbacks due to the weaker hash table design that predated computes (highly coarse striped locking), so by definition the listener was delayed until after unlocking.

I don't really like Explicit either

An alternative name might be MANUAL? I tried to define some terminology, but only after the apis were already in use:

  • eviction means removal due to the policy
  • invalidation means manual removal by the caller
  • removal occurs as a consequence of invalidation or eviction
