
make it possible to ignore parameters #24

Open
mhinsch opened this issue Feb 9, 2023 · 5 comments

@mhinsch commented Feb 9, 2023

This is probably either too much work or outside the scope of the package altogether, but it would be very useful if there were a way to tell the macro which of the function parameters are expected to vary between function calls.

For example:

@memoize Dict function propActiveAgents(model, state, pars)
    n = count(a -> a.state == state, model.pop)
    n/length(model.pop)
end

In this case, effectively only the parameter state will vary between function calls. model (or model.pop) could change as well, but in a realistic scenario I would refresh the memoization cache manually, for example every time step, rather than letting @memoize detect the change in model state, as that would be vastly less efficient. The third parameter, pars, isn't used at all in this case but needs to be there for API consistency.

As it works now, a tuple of all three parameters will be used as a key in a Dict{Any, Any}, if I understand the implementation correctly. If there were a way to tell the macro that it should only use state as an index, however, I could for example choose Array{Float64} as a container type, which would be a lot more efficient.
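
For reference, a hand-rolled version of what I have in mind would look roughly like this (just a sketch; it assumes states are integers 1:n_states, and the names are illustrative, not part of the package):

# illustrative hand-rolled cache, indexed only by `state`; NaN marks "not computed yet"
const prop_cache = fill(NaN, n_states)

function propActiveAgents_cached(model, state, pars)
    if isnan(prop_cache[state])
        prop_cache[state] = count(a -> a.state == state, model.pop) / length(model.pop)
    end
    return prop_cache[state]
end

# manual refresh, e.g. once per time step
fill!(prop_cache, NaN)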

@marius311 (Owner) commented Feb 9, 2023

Yeah, you're right, there doesn't seem to be a great way to handle this. Would something like this work, though?

struct propActiveAgents
    model
    pars
end

@memoize Dict function (p::propActiveAgents)(state)
    (;model, pars) = p
    n = count(a -> a.state == state, model.pop)
    n/length(model.pop)
end

p = propActiveAgents(model, pars)

p(state)
p(state) # should be memoized

That'll use the same cache Dict as long as the p object doesn't change, and the lookup will only use state. (I think; you should double-check me with some print statements.)
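
E.g. something like this as a quick check (the println should fire only on a cache miss):

@memoize Dict function (p::propActiveAgents)(state)
    println("cache miss for state = $state")
    (;model, pars) = p
    n = count(a -> a.state == state, model.pop)
    n/length(model.pop)
end

p = propActiveAgents(model, pars)
p(state) # prints
p(state) # should print nothing if the lookup hit the cache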

@mhinsch (Author) commented Feb 21, 2023

That's a clever solution and does indeed seem to work.

The only remaining problem seems to be that the type @memoize uses as a key is a bit convoluted. For a single integer argument, for example, I get Tuple{Tuple{Int64}, Tuple{}}. This is orthogonal to the parameter-selection problem, though, so I'll open a new issue for that.

@alhirzel commented Sep 25, 2023

I agree that the ability to ignore parameters would be handy in some cases. One use case is ignoring an argument that is a global acceleration structure.

I wonder if there is appetite for something like this?

@memoize_only [input1, input2] ThreadSafeDict long_function(input1, input2, accel) = ...

An alternative, of course, is to hand-roll the memoization logic. It may be worth mentioning in the documentation that this is preferred (if it is). Something like @memoize_only could certainly become very error-prone, which is perhaps reason enough to exclude it.
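
For the signature above, the hand-rolled version would be something along these lines (a sketch only; long_function_impl stands in for the real body):

const long_function_cache = Dict{Tuple{Any,Any}, Any}()

# memoize on (input1, input2) only; `accel` is used in the computation but ignored for the lookup
function long_function(input1, input2, accel)
    get!(long_function_cache, (input1, input2)) do
        long_function_impl(input1, input2, accel)
    end
end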

@marius311 (Owner)
I would do it like this:

@memoize ThreadSafeDict function long_function(input1, input2, @nomem(accel))

and I could see it being pretty handy. Might look at it at some point.

In your case, if accel is really a global, can you just make it a global? Or perhaps use Base.task_local_storage?
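
I.e. something like this (a sketch; it assumes the ThreadSafeDicts package for ThreadSafeDict, and expensive_computation stands in for the real body):

using Memoization, ThreadSafeDicts

# stash `accel` in task-local storage so it never becomes part of the memoization key
task_local_storage(:accel, accel)

@memoize ThreadSafeDict function long_function(input1, input2)
    accel = task_local_storage(:accel)
    expensive_computation(input1, input2, accel)
end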

@alhirzel commented Sep 25, 2023

In my specific case, the data is hierarchically altered and not "global" in the typical sense, or even within a thread; I should have been clearer :(

I like your @nomem concept; having an annotation at the parameter level crossed my mind too.
