
WIP: annotated decoder #4949

Open · wants to merge 1 commit into coot/annotated-decoder from mwojtowicz/annotated-decoder
Conversation

crocodile-dentist
Contributor

todo: tests

Description

Annotated decoder

Checklist

Quality

  • Commit sequence makes sense and commits have useful messages, see ref.
  • New tests are added and existing tests are updated.
  • Self-reviewed the PR.

Maintenance

  • Linked an issue or added the PR to the current sprint of ouroboros-network project.
  • Added labels.
  • Updated changelog files.
  • The documentation has been properly updated, see ref.

@crocodile-dentist
Contributor Author

Having a single runDecoderWithChannel and runDecoderWithLimit avoids some code duplication and eases maintenance, but to avoid leaking space for codecs that do not need the annotation capability, there should probably be a custom monoid instance for the bytes type which is strict in the accumulator and essentially a no-op.
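The strict no-op accumulator suggested above could look roughly like this (a sketch, not part of the PR; the name `NoBytes` is hypothetical):

```haskell
-- Hypothetical sketch: a "bytes" type whose Monoid is strict
-- (pattern matching forces both arguments) and keeps nothing, so
-- codecs that don't need annotations pay no space cost when the
-- driver accumulates received bytes.
data NoBytes = NoBytes
  deriving (Eq, Show)

instance Semigroup NoBytes where
  NoBytes <> NoBytes = NoBytes   -- forces both sides, discards both

instance Monoid NoBytes where
  mempty = NoBytes
```

Because `(<>)` matches on both constructors, the accumulator can never build up a thunk chain, and the whole accumulation is a no-op for non-annotating codecs.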

@@ -14,6 +18,13 @@ import Data.Measure qualified as Measure
import NoThunks.Class (NoThunks (..))
import Quiet (Quiet (..))

data WithBytes a = WithBytes { wbValue    :: !a,
                               unannotate :: ShortByteString }
Contributor

If we use ShortByteString this means we have to copy the bytestring on the heap, doesn't it? With Lazy.ByteString we wouldn't have this problem.

Contributor Author

Yes, but if we have annotations floating around in regular bytestrings, they can cause heap fragmentation because regular bytestrings are pinned.
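For context, this is the trade-off being discussed (a sketch, not from the diff; `toUnpinned` is a hypothetical helper): strict and lazy ByteString chunks live in pinned memory, while toShort copies the bytes into an unpinned ShortByteString.

```haskell
import Data.ByteString.Lazy qualified as BSL
import Data.ByteString.Short qualified as SBS

-- Copy an annotation slice out of the (pinned) lazy bytestring into
-- unpinned memory: an O(n) copy up front, but long-lived annotations
-- then cannot fragment the pinned part of the heap.
toUnpinned :: BSL.ByteString -> SBS.ShortByteString
toUnpinned = SBS.toShort . BSL.toStrict
```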

Comment on lines 221 to 175
runAnnotatedPeer
:: forall ps (st :: ps) pr failure bytes m a .
( MonadThrow m
, Monoid bytes
, Show failure
, forall (st' :: ps). Show (ClientHasAgency st')
, forall (st' :: ps). Show (ServerHasAgency st')
, ShowProxy ps
)
   => Tracer m (TraceSendRecv ps)
-  -> AnnotatedCodec ps failure m bytes
+  -> Codec ps failure m annotator bytes
   -> Channel m bytes
+  -> (forall st. annotator st -> bytes -> SomeMessage st)
   -> Peer ps pr st m a
   -> m (a, Maybe bytes)
-runAnnotatedPeer tracer codec channel peer =
+runPeer tracer codec channel runAnnotator peer =
     runPeerWithDriver driver peer (startDState driver)
   where
-    driver = annotatedSimpleDriver tracer codec channel
+    driver = mkSimpleDriver runAnnotator tracer codec channel
Contributor

I thought that exposing the original runPeer makes sense if we think about integration: all protocols but one don't need an annotator right now.

Contributor Author

I see now.

)
=> (ctx -> ( Tracer m (TraceSendRecv ps)
-           , Codec ps failure m bytes
+           , Codec ps failure m annotator bytes
+           , forall st. annotator st -> bytes -> SomeMessage st
Contributor

Ugh, it needs to show up in a lot of places now. Is it worth it? It exposes internal details of how the peer is run. Maybe we've gone too far.

Contributor Author

Yes, I noticed that when I was done 😅, but for presentation purposes I decided to show it as is.

-  -> Codec (LocalTxSubmission tx reject) CBOR.DeserialiseFailure m ByteString
+  -> Codec (LocalTxSubmission tx reject) CBOR.DeserialiseFailure m SomeMessage ByteString
Contributor

This is also a bad consequence: we need to modify all our codecs for the purpose of adding a feature to just one of them.

Comment on lines +213 to +218
SomeMessage . MsgReplyTxs $
txs <&> \(tx, start, end) ->
let slice = BSL.take (end - start) $ BSL.drop start bytes
in WithBytes {
wbValue = tx,
unannotate = toShort . BSL.toStrict $ slice })
Contributor

Remember that we decided to pass a tx -> SizeInBytes function to the inbound side of tx-submission, so we don't really need to change things here.

And if we did accept this codec change, then I'd use WithBytes tx everywhere; then we wouldn't need to modify the Protocol instance and pollute it with unrelated changes (e.g. WithBytes tx appearing in just a single place). As a side effect we could take advantage of the unannotate destructor as an encoder.

Note that if we stick with Lazy.ByteString then we won't need toShort . BSL.toStrict here at all. All chunks of bytes are evaluated on the heap anyway; maybe just the spine of the lazy bytestring bytes isn't, since we concatenate it from multiple fully evaluated pieces.
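The Lazy.ByteString variant suggested here can be sketched like this (hypothetical helper; offsets assumed to be Int64 as in the surrounding hunk): take/drop only rebuild the small chunk spine, while the chunk payloads are shared with the original input.

```haskell
import Data.ByteString.Lazy qualified as BSL
import Data.Int (Int64)

-- Slice [start, end) out of the received bytes without copying:
-- BSL.drop and BSL.take share the underlying chunks with `bytes`,
-- so no toShort . BSL.toStrict round-trip is needed.
sliceLazy :: Int64 -> Int64 -> BSL.ByteString -> BSL.ByteString
sliceLazy start end bytes = BSL.take (end - start) (BSL.drop start bytes)
```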

Contributor Author

This was written for demo purposes, so certain compromises were made in terms of WithBytes tx, but yes, we will use the tx -> SizeInBytes function for the actual implementation.

@crocodile-dentist crocodile-dentist force-pushed the mwojtowicz/annotated-decoder branch 3 times, most recently from 4a0b1b5 to 2b90a63 Compare September 6, 2024 15:29
@coot coot changed the title WIP WIP: annotated decoder Sep 12, 2024