Question: Add async/streaming interface? #26
yuhonglin opened this issue Mar 10, 2022 · 1 comment
@yuhonglin (Contributor)

Currently, the interface is synchronous. That is, for every input "x", the client gets some output "y = f(x)".

But for some applications, the input/output may be asynchronous. Take speech recognition as an example; a typical usage may be:

  1. The client starts the recognition.
  2. The client keeps feeding audio data to the model, without receiving any output.
  3. When the model thinks the input audio data is enough to make a reasonable prediction, it actively tells the client the result.
  4. The client stops the recognition.

So the client will need to provide an "OnPredictionResult" callback to the model.
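For concreteness, here is a minimal sketch of what such a push-style interface could look like. All names (`StreamingModel`, `feed`, `onPredictionResult`) are illustrative assumptions, not part of any existing spec:

```ts
// Hypothetical push-style streaming interface; names are illustrative only.
interface StreamingModel {
  // Step 1: start recognition and register the result callback.
  start(options: { onPredictionResult: (result: string) => void }): void;
  // Step 2: keep feeding audio chunks; nothing is returned here.
  feed(chunk: Float32Array): void;
  // Step 4: stop recognition.
  stop(): void;
}

declare const model: StreamingModel; // provided by the hypothetical API

model.start({
  // Step 3: the model calls back once it has a reasonable prediction.
  onPredictionResult: (text) => console.log('recognized:', text),
});
model.feed(new Float32Array(16000)); // e.g. one second of 16 kHz audio
model.stop();
```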

In some cases, it is the model that actively asks for input (e.g. when the model thinks it is ready). Then there will be no Step 2 and the client needs to provide an "OnGetInput" callback to the model.
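A pull-style variant might instead look like this (again, purely illustrative; `onGetInput` is not an existing API):

```ts
// Hypothetical pull-style interface: the model asks for input when ready.
interface PullStreamingModel {
  start(options: {
    onGetInput: () => Float32Array;               // model pulls the next audio chunk
    onPredictionResult: (result: string) => void; // model pushes the result
  }): void;
  stop(): void;
}
```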

This is not a blocker for now, but just an interesting issue to think about.


yuhonglin commented Mar 28, 2022

The integration with insertable streams in #33 should be a better solution. What we need to do is have VideoFrame etc. as input.
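For example, with insertable streams (MediaStreamTrackProcessor), the client could read VideoFrame objects from a camera track and pass each one to the model. This is only a sketch; `model.compute` below is a hypothetical per-frame inference call, not an existing API:

```ts
// Sketch: feed VideoFrame objects from insertable streams into a model.
// Assumes a browser that supports MediaStreamTrackProcessor;
// `model.compute` is a hypothetical per-frame inference call.
async function runOnCameraFrames(
  model: { compute(frame: VideoFrame): Promise<unknown> }
) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const [track] = stream.getVideoTracks();
  const processor = new MediaStreamTrackProcessor({ track });
  const reader = processor.readable.getReader();
  for (;;) {
    const { value: frame, done } = await reader.read();
    if (done || !frame) break;
    await model.compute(frame); // run inference on this frame
    frame.close();              // release the frame's resources
  }
}
```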
