
[Feature] Any way to get the logits instead of logprobs in lmdeploy? #2507

Open
hmzo opened this issue Sep 24, 2024 · 2 comments

hmzo commented Sep 24, 2024

Motivation

I want to use lmdeploy to deploy internlm2-7b-reward.
I simply expanded the weight of v_head from [1, D] to [V, D].
If we could directly obtain the logits instead of the logprobs, we could easily deploy the reward model with lmdeploy.

Related resources

No response

Additional context

No response

lvhan028 (Collaborator) commented Sep 24, 2024

@irexyc

lvhan028 assigned lvhan028 and irexyc and unassigned lvhan028 on Sep 24, 2024
irexyc (Collaborator) commented Sep 29, 2024

Sorry for the late reply.

Currently, you can use pipe.get_logits to obtain the logits with shape seq_len x vocab_size. Note that it is not thread safe; I'm not sure whether this meets your needs.
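
For reference, a minimal sketch of how this might be called. The model path is a placeholder, and the exact input that get_logits accepts (raw text vs. token ids) as well as the pipe.tokenizer.encode helper are assumptions here; please check the lmdeploy source for the exact signature.

```python
# Minimal sketch, not verified against a specific lmdeploy version.
# Assumptions: the pipeline exposes a tokenizer with `encode`, and
# `get_logits` accepts token ids for a single sequence.
from lmdeploy import pipeline

# Placeholder model path; replace with your (modified) reward model.
pipe = pipeline('internlm/internlm2-7b-reward')

prompt = 'How are you?'
input_ids = pipe.tokenizer.encode(prompt)

# Expected to return logits of shape [seq_len, vocab_size].
# get_logits is not thread safe, so call it from a single thread only.
logits = pipe.get_logits(input_ids)
print(logits.shape)
```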
