BERT for non-TCP/WS applications #17

Open
chrysn opened this issue Aug 18, 2021 · 3 comments
Labels
enhancement New feature or request

Comments

chrysn (Member) commented Aug 18, 2021

As it is currently written, RFC 8323 defines BERT (i.e. using block-wise's SZX=7 to mean "indexed in 1024-byte blocks, but possibly carrying several of them in the payload") only for reliable transports.
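
For reference, a minimal sketch of how the Block option is decoded and where BERT plugs in (the NUM/M/SZX split comes straight from RFC 7959; the SZX=7 interpretation is the RFC 8323 part; no particular library is assumed):

```python
# Minimal sketch: RFC 7959 Block option decoding, with SZX=7 interpreted
# as BERT per RFC 8323.

def decode_block_option(value: int):
    """Split a Block1/Block2 option value into (num, more, szx)."""
    szx = value & 0x07          # low 3 bits: size exponent
    more = bool(value & 0x08)   # bit 3: "more blocks follow"
    num = value >> 4            # remaining bits: block number
    return num, more, szx

def block_size(szx: int) -> int:
    """Block size used for indexing; BERT blocks are counted in 1024-byte units."""
    return 1024 if szx == 7 else 1 << (szx + 4)  # RFC 7959: 2**(szx+4), i.e. 16..1024

def is_bert(szx: int) -> bool:
    return szx == 7  # the payload may then carry several 1024-byte blocks at once

# Example: option value 0x3F -> num=3, more=True, szx=7 (BERT, offset 3 * 1024)
print(decode_block_option(0x3F), block_size(7), is_bert(7))
```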

As this would be useful in applications outside TCP/WS, and as it is unlikely that any other extension to block-wise would claim this RFC 7959 extension point for unreliable transports in a different way, I suggest that the next update to RFC 7959 simply acknowledge that BERT is universal.

Concrete applications include:

  • Amortizing block-wise overheads in OSCORE: An OSCORE transfer can be inner- and outer-blockwised independently.

    • Doing all the block-wising on the outer layer may be too taxing for the device (which would have to assemble the full representation in RAM) or even for the AEAD algorithm (AES-CCM-16-64-128 has a 64 KiB limit).
    • Doing all the block-wising on the inner layer creates a per-message overhead of at least 8 bytes in 1:1 mode and 64 bytes in group mode.

    Balancing via the inner block size is currently limited to 16..1024 bytes, whereas BERT would open that space up to any multiple of 1024 bytes (see the overhead sketch after this list).

  • With CoAP being used outside of constrained devices (as in DOTS), it is no longer unreasonable to expect that MTU discovery is available and finds support for jumbo frames.

  • Internally (e.g. inside a CoAP implementation), it can be convenient to let the library handle some of the block-wising (so the application is not bothered for every 32 bytes of requested data) but not all of it (e.g. because not all the data is ready yet); BERT would allow the application to serve larger chunks and let the CoAP library, acting as a proxy of sorts, handle the small blocks.
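
To make the amortization argument a bit more concrete, a back-of-the-envelope sketch; the 8-byte and 64-byte figures are the ones quoted above (illustrative lower bounds), and the block sizes beyond 1024 are the ones BERT would enable:

```python
# Rough overhead arithmetic for the OSCORE inner block-wise case above.
# Per-message overheads (8 bytes for 1:1, 64 bytes for group mode) are the
# figures quoted in this issue, not normative values.

def overhead_fraction(block_bytes: int, per_message_overhead: int) -> float:
    return per_message_overhead / (block_bytes + per_message_overhead)

for label, overhead in (("1:1 OSCORE", 8), ("Group OSCORE", 64)):
    for block in (1024, 8 * 1024, 32 * 1024):  # 1024 = RFC 7959 maximum; larger sizes need BERT
        print(f"{label}: {block:>6}-byte inner blocks -> "
              f"{100 * overhead_fraction(block, overhead):.2f}% overhead")
```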

I think this can be done by simply stating it in an update. It would probably say that for CoAP-over-{TCP,WS} the CSM is the way to agree on BERT, and that other transports may define their own mechanisms (of which there currently are none) to indicate a non-default maximum.
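
For illustration, the decision the CSM gives you on reliable transports boils down to something like the following; any non-TCP/WS transport would need an equivalent signal. The names here are made up; only the 1152-byte default is from RFC 8323:

```python
# Sketch of the BERT decision as RFC 8323 frames it for reliable transports:
# the peer must have advertised the Block-Wise-Transfer capability and a
# Max-Message-Size larger than the 1152-byte default. Function and parameter
# names are illustrative, not from any implementation.

DEFAULT_MAX_MESSAGE_SIZE = 1152  # RFC 8323 default

def bert_allowed(peer_blockwise_transfer: bool, peer_max_message_size: int) -> bool:
    return (peer_blockwise_transfer
            and peer_max_message_size > DEFAULT_MAX_MESSAGE_SIZE)

print(bert_allowed(True, 9000))   # True: peer allows messages well above the default
print(bert_allowed(True, 1152))   # False: default size only, stick to SZX <= 6
```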

[edit: github's markdown is weird. probably, all markdown is weird.]

boaks commented Oct 20, 2021

Using BERT for cloud-internal communication (jumbo frames) sounds great.

If there is real interest in standardizing cloud-internal use of CoAP, one of the drawbacks I was faced with is the limitation imposed by the MID deduplication definition. FMPOV, it doesn't make too much sense to keep the MIDs (and the related messages) for up to 240 s (or 120 s) when the traffic is mainly point-to-point communication with many, many messages. In Californium we implemented an alternative approach, using a maximum number of deduplication messages per peer.
So far, the experience with that is very promising.
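
Roughly, the idea looks like this (a minimal sketch, not the actual Californium code; the per-peer cap of 64 is just an example value):

```python
# Sketch of a count-bounded, per-peer deduplicator, as opposed to keeping
# every MID for EXCHANGE_LIFETIME. Not Californium code; the cap is arbitrary.

from collections import OrderedDict

class PerPeerDeduplicator:
    def __init__(self, max_messages_per_peer: int = 64):
        self.max_messages = max_messages_per_peer
        self.seen = {}  # peer address -> OrderedDict mapping MID -> cached response

    def check(self, peer, mid, response=None):
        """Return the cached response for a duplicate MID, else record it."""
        mids = self.seen.setdefault(peer, OrderedDict())
        if mid in mids:
            return mids[mid]          # duplicate: replay whatever was stored
        mids[mid] = response
        if len(mids) > self.max_messages:
            mids.popitem(last=False)  # evict the oldest MID for this peer
        return None
```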

chrysn (Member, Author) commented Oct 20, 2021

When peers are datacenter-internal, I reckon that a lot of parameters would be set differently (DEFAULT_LEISURE to 0, ACK_TIMEOUT to something around 100 ms, maybe also MAX_RETRANSMIT to 2), and then EXCHANGE_LIFETIME goes down by a lot. (Probably FASOR does that much better, with adequate concern for TSV topics.) Also, it probably helps a lot if cheap idempotent operations are declared to the CoAP stack so that the stack can forego deduplication for them.
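
Plugging the RFC 7252 Section 4.8.2 formulas into a quick calculation shows how much that buys; the 1 s MAX_LATENCY for the tuned case is my assumption, nothing standardized:

```python
# Back-of-the-envelope check using the RFC 7252 Section 4.8.2 formulas.

def exchange_lifetime(ack_timeout, max_retransmit, ack_random_factor=1.5,
                      max_latency=100.0, processing_delay=2.0):
    max_transmit_span = ack_timeout * ((2 ** max_retransmit) - 1) * ack_random_factor
    return max_transmit_span + 2 * max_latency + processing_delay

# RFC 7252 defaults (ACK_TIMEOUT=2 s, MAX_RETRANSMIT=4): about 247 s
print(exchange_lifetime(2.0, 4))
# Tuned as above (ACK_TIMEOUT=0.1 s, MAX_RETRANSMIT=2) plus an assumed
# MAX_LATENCY of 1 s for datacenter-internal paths: about 4.5 s
print(exchange_lifetime(0.1, 2, max_latency=1.0))
```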

What would typical jumboframe sizes be in such cloud-internal contexts?

boaks commented Oct 20, 2021

What would typical jumboframe sizes be in such cloud-internal contexts?

Californium - Issue

The jumbo MTU is 9001, so 8192 may be used for BERT.
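
The arithmetic behind that, roughly (the header allowance is an assumption, just enough for IP/UDP/CoAP headers and options):

```python
# Largest multiple of 1024 that fits a given MTU after header overhead.
# The 128-byte allowance is an assumption, not a measured value.

def bert_payload_for_mtu(mtu: int, header_allowance: int = 128) -> int:
    return ((mtu - header_allowance) // 1024) * 1024

print(bert_payload_for_mtu(9001))  # -> 8192
```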
