'Data.Map.fromList' throws error #180

Open
suesslin opened this issue Apr 7, 2018 · 7 comments
suesslin commented Apr 7, 2018

I've tried to install tensorflow v0.1.0.2, but it throws the following errors.
I'm on macOS High Sierra and have installed all the tools mentioned in the script.

Any ideas?

Warning: The package list for 'hackage.haskell.org' is 56 days old.
Run 'cabal update' to get the latest list of available packages.
Resolving dependencies...
Configuring tensorflow-proto-0.1.0.0...
Building tensorflow-proto-0.1.0.0...
Failed to install tensorflow-proto-0.1.0.0
Build log ( /Users/lukas/.cabal/logs/ghc-8.2.2/tensorflow-proto-0.1.0.0-D7saAu502llB6QqUq2V7pm.log ):
cabal: Entering directory '/var/folders/2n/wsfq37nn2dv23rx8wcsn9d6m0000gn/T/cabal-tmp-49665/tensorflow-proto-0.1.0.0'
[1 of 1] Compiling Main             ( /var/folders/2n/wsfq37nn2dv23rx8wcsn9d6m0000gn/T/cabal-tmp-49665/tensorflow-proto-0.1.0.0/dist/setup/setup.hs, /var/folders/2n/wsfq37nn2dv23rx8wcsn9d6m0000gn/T/cabal-tmp-49665/tensorflow-proto-0.1.0.0/dist/setup/Main.o )
Linking /var/folders/2n/wsfq37nn2dv23rx8wcsn9d6m0000gn/T/cabal-tmp-49665/tensorflow-proto-0.1.0.0/dist/setup/setup ...
Configuring tensorflow-proto-0.1.0.0...
Preprocessing library for tensorflow-proto-0.1.0.0..
Building library for tensorflow-proto-0.1.0.0..
[ 1 of 18] Compiling Proto.Tensorflow.Core.Framework.AllocationDescription ( dist/build/autogen/Proto/Tensorflow/Core/Framework/AllocationDescription.hs, dist/build/Proto/Tensorflow/Core/Framework/AllocationDescription.o )

dist/build/autogen/Proto/Tensorflow/Core/Framework/AllocationDescription.hs:159:15: error:
    • Couldn't match expected type ‘Data.ProtoLens.MessageDescriptor
                                      AllocationDescription’
                  with actual type ‘Data.Map.Map
                                      Prelude.String
                                      (Data.ProtoLens.FieldDescriptor AllocationDescription)
                                    -> Data.ProtoLens.MessageDescriptor AllocationDescription’
    • Probable cause: ‘Data.ProtoLens.MessageDescriptor’ is applied to too few arguments
      In the expression:
        Data.ProtoLens.MessageDescriptor
          (Data.Map.fromList
             [(Data.ProtoLens.Tag 1, requestedBytes__field_descriptor),
              (Data.ProtoLens.Tag 2, allocatedBytes__field_descriptor),
              (Data.ProtoLens.Tag 3, allocatorName__field_descriptor),
              (Data.ProtoLens.Tag 4, allocationId__field_descriptor), ....])
          (Data.Map.fromList
             [("requested_bytes", requestedBytes__field_descriptor),
              ("allocated_bytes", allocatedBytes__field_descriptor),
              ("allocator_name", allocatorName__field_descriptor),
              ("allocation_id", allocationId__field_descriptor), ....])
      In the expression:
        let
          requestedBytes__field_descriptor
            = Data.ProtoLens.FieldDescriptor
                "requested_bytes"
                (Data.ProtoLens.Int64Field ::
                   Data.ProtoLens.FieldTypeDescriptor Data.Int.Int64)
                (Data.ProtoLens.PlainField
                   Data.ProtoLens.Optional requestedBytes) ::
                Data.ProtoLens.FieldDescriptor AllocationDescription
          allocatedBytes__field_descriptor
            = Data.ProtoLens.FieldDescriptor
                "allocated_bytes"
                (Data.ProtoLens.Int64Field ::
                   Data.ProtoLens.FieldTypeDescriptor Data.Int.Int64)
                (Data.ProtoLens.PlainField
                   Data.ProtoLens.Optional allocatedBytes) ::
                Data.ProtoLens.FieldDescriptor AllocationDescription
          allocatorName__field_descriptor
            = Data.ProtoLens.FieldDescriptor
                "allocator_name"
                (Data.ProtoLens.StringField ::
                   Data.ProtoLens.FieldTypeDescriptor Data.Text.Text)
                (Data.ProtoLens.PlainField
                   Data.ProtoLens.Optional allocatorName) ::
                Data.ProtoLens.FieldDescriptor AllocationDescription
          ....
        in
          Data.ProtoLens.MessageDescriptor
            (Data.Map.fromList
               [(Data.ProtoLens.Tag 1, requestedBytes__field_descriptor),
                (Data.ProtoLens.Tag 2, allocatedBytes__field_descriptor),
                (Data.ProtoLens.Tag 3, allocatorName__field_descriptor), ....])
            (Data.Map.fromList
               [("requested_bytes", requestedBytes__field_descriptor),
                ("allocated_bytes", allocatedBytes__field_descriptor),
                ("allocator_name", allocatorName__field_descriptor), ....])
      In an equation for ‘Data.ProtoLens.descriptor’:
          Data.ProtoLens.descriptor
            = let
                requestedBytes__field_descriptor = ...
                allocatedBytes__field_descriptor = ...
                ....
              in
                Data.ProtoLens.MessageDescriptor
                  (Data.Map.fromList
                     [(Data.ProtoLens.Tag 1, requestedBytes__field_descriptor),
                      (Data.ProtoLens.Tag 2, allocatedBytes__field_descriptor), ....])
                  (Data.Map.fromList
                     [("requested_bytes", requestedBytes__field_descriptor),
                      ("allocated_bytes", allocatedBytes__field_descriptor), ....])
    |
159 |               Data.ProtoLens.MessageDescriptor
    |               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^...

dist/build/autogen/Proto/Tensorflow/Core/Framework/AllocationDescription.hs:160:18: error:
    • Couldn't match expected type ‘Data.Text.Text’
                  with actual type ‘Data.Map.Map
                                      Data.ProtoLens.Tag
                                      (Data.ProtoLens.FieldDescriptor AllocationDescription)’
    • In the first argument of ‘Data.ProtoLens.MessageDescriptor’, namely
        ‘(Data.Map.fromList
            [(Data.ProtoLens.Tag 1, requestedBytes__field_descriptor),
             (Data.ProtoLens.Tag 2, allocatedBytes__field_descriptor),
             (Data.ProtoLens.Tag 3, allocatorName__field_descriptor),
             (Data.ProtoLens.Tag 4, allocationId__field_descriptor), ....])’
      In the expression:
        Data.ProtoLens.MessageDescriptor
          (Data.Map.fromList
             [(Data.ProtoLens.Tag 1, requestedBytes__field_descriptor),
              (Data.ProtoLens.Tag 2, allocatedBytes__field_descriptor),
              (Data.ProtoLens.Tag 3, allocatorName__field_descriptor),
              (Data.ProtoLens.Tag 4, allocationId__field_descriptor), ....])
          (Data.Map.fromList
             [("requested_bytes", requestedBytes__field_descriptor),
              ("allocated_bytes", allocatedBytes__field_descriptor),
              ("allocator_name", allocatorName__field_descriptor),
              ("allocation_id", allocationId__field_descriptor), ....])
      In the expression:
        let
          requestedBytes__field_descriptor
            = Data.ProtoLens.FieldDescriptor
                "requested_bytes"
                (Data.ProtoLens.Int64Field ::
                   Data.ProtoLens.FieldTypeDescriptor Data.Int.Int64)
                (Data.ProtoLens.PlainField
                   Data.ProtoLens.Optional requestedBytes) ::
                Data.ProtoLens.FieldDescriptor AllocationDescription
          allocatedBytes__field_descriptor
            = Data.ProtoLens.FieldDescriptor
                "allocated_bytes"
                (Data.ProtoLens.Int64Field ::
                   Data.ProtoLens.FieldTypeDescriptor Data.Int.Int64)
                (Data.ProtoLens.PlainField
                   Data.ProtoLens.Optional allocatedBytes) ::
                Data.ProtoLens.FieldDescriptor AllocationDescription
          allocatorName__field_descriptor
            = Data.ProtoLens.FieldDescriptor
                "allocator_name"
                (Data.ProtoLens.StringField ::
                   Data.ProtoLens.FieldTypeDescriptor Data.Text.Text)
                (Data.ProtoLens.PlainField
                   Data.ProtoLens.Optional allocatorName) ::
                Data.ProtoLens.FieldDescriptor AllocationDescription
          ....
        in
          Data.ProtoLens.MessageDescriptor
            (Data.Map.fromList
               [(Data.ProtoLens.Tag 1, requestedBytes__field_descriptor),
                (Data.ProtoLens.Tag 2, allocatedBytes__field_descriptor),
                (Data.ProtoLens.Tag 3, allocatorName__field_descriptor), ....])
            (Data.Map.fromList
               [("requested_bytes", requestedBytes__field_descriptor),
                ("allocated_bytes", allocatedBytes__field_descriptor),
                ("allocator_name", allocatorName__field_descriptor), ....])
    |
160 |                 (Data.Map.fromList
    |                  ^^^^^^^^^^^^^^^^^...

dist/build/autogen/Proto/Tensorflow/Core/Framework/AllocationDescription.hs:167:18: error:
    • Couldn't match type ‘[Prelude.Char]’ with ‘Data.ProtoLens.Tag’
      Expected type: Data.Map.Map
                       Data.ProtoLens.Tag
                       (Data.ProtoLens.FieldDescriptor AllocationDescription)
        Actual type: Data.Map.Map
                       [Prelude.Char]
                       (Data.ProtoLens.FieldDescriptor AllocationDescription)
    • In the second argument of ‘Data.ProtoLens.MessageDescriptor’, namely
        ‘(Data.Map.fromList
            [("requested_bytes", requestedBytes__field_descriptor),
             ("allocated_bytes", allocatedBytes__field_descriptor),
             ("allocator_name", allocatorName__field_descriptor),
             ("allocation_id", allocationId__field_descriptor), ....])’
      In the expression:
        Data.ProtoLens.MessageDescriptor
          (Data.Map.fromList
             [(Data.ProtoLens.Tag 1, requestedBytes__field_descriptor),
              (Data.ProtoLens.Tag 2, allocatedBytes__field_descriptor),
              (Data.ProtoLens.Tag 3, allocatorName__field_descriptor),
              (Data.ProtoLens.Tag 4, allocationId__field_descriptor), ....])
          (Data.Map.fromList
             [("requested_bytes", requestedBytes__field_descriptor),
              ("allocated_bytes", allocatedBytes__field_descriptor),
              ("allocator_name", allocatorName__field_descriptor),
              ("allocation_id", allocationId__field_descriptor), ....])
      In the expression:
        let
          requestedBytes__field_descriptor
            = Data.ProtoLens.FieldDescriptor
                "requested_bytes"
                (Data.ProtoLens.Int64Field ::
                   Data.ProtoLens.FieldTypeDescriptor Data.Int.Int64)
                (Data.ProtoLens.PlainField
                   Data.ProtoLens.Optional requestedBytes) ::
                Data.ProtoLens.FieldDescriptor AllocationDescription
          allocatedBytes__field_descriptor
            = Data.ProtoLens.FieldDescriptor
                "allocated_bytes"
                (Data.ProtoLens.Int64Field ::
                   Data.ProtoLens.FieldTypeDescriptor Data.Int.Int64)
                (Data.ProtoLens.PlainField
                   Data.ProtoLens.Optional allocatedBytes) ::
                Data.ProtoLens.FieldDescriptor AllocationDescription
          allocatorName__field_descriptor
            = Data.ProtoLens.FieldDescriptor
                "allocator_name"
                (Data.ProtoLens.StringField ::
                   Data.ProtoLens.FieldTypeDescriptor Data.Text.Text)
                (Data.ProtoLens.PlainField
                   Data.ProtoLens.Optional allocatorName) ::
                Data.ProtoLens.FieldDescriptor AllocationDescription
          ....
        in
          Data.ProtoLens.MessageDescriptor
            (Data.Map.fromList
               [(Data.ProtoLens.Tag 1, requestedBytes__field_descriptor),
                (Data.ProtoLens.Tag 2, allocatedBytes__field_descriptor),
                (Data.ProtoLens.Tag 3, allocatorName__field_descriptor), ....])
            (Data.Map.fromList
               [("requested_bytes", requestedBytes__field_descriptor),
                ("allocated_bytes", allocatedBytes__field_descriptor),
                ("allocator_name", allocatorName__field_descriptor), ....])
    |
167 |                 (Data.Map.fromList
    |                  ^^^^^^^^^^^^^^^^^...
cabal: Leaving directory '/var/folders/2n/wsfq37nn2dv23rx8wcsn9d6m0000gn/T/cabal-tmp-49665/tensorflow-proto-0.1.0.0'
cabal: Error: some packages failed to install:
hs-ml-0.1.0.0-363V9CZ7RT7C3io8MyqwFB depends on hs-ml-0.1.0.0 which failed to
install.
tensorflow-0.1.0.2-7XYTRIMIoHSHYi9V7yt0q0 depends on tensorflow-0.1.0.2 which
failed to install.
tensorflow-proto-0.1.0.0-D7saAu502llB6QqUq2V7pm failed during the building
phase. The exception was:
ExitFailure 1
Contributor

fkm3 commented Apr 9, 2018

I see you are using cabal. What version of proto-lens do you have installed? We are pegged at version proto-lens-0.2.2.0 and don't support the newest version yet, so I suspect that is the issue.


jyp commented May 15, 2018

@fkm3 I don't think that your proposed explanation is correct. I have the same issue when trying to build with NixOS 18.03, and it pegs proto-lens-0.2.2.0 too.

Contributor

fkm3 commented May 16, 2018

Are you getting this same error? Or, are you getting the error reported in #190?

Contributor

awpr commented May 18, 2018

This is also being discussed on haskell-cafe: https://mail.haskell.org/pipermail/haskell-cafe/2018-May/129103.html

I'm not sure what exactly to blame, but a bunch of weird things are coinciding to bring you this error:

  • The uploaded tarball tensorflow-proto-0.1.0.0 contains the auto-generated .hs sources for the contained protos, as generated by (seemingly) proto-lens-protoc <0.2.2
  • proto-lens changed some internal types before the proto-lens-0.2.2.0 release, which breaks compatibility with the generated code.
  • Cabal is trying to compile the generated sources from the tarball rather than regenerating them with proto-lens-protoc, so it fails to build when proto-lens-0.2.2.* is selected.
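The mismatch in the second bullet can be sketched as follows. The constructor shapes below are inferred from the GHC errors in the build log above, using placeholder stand-ins for the real proto-lens types; this is an illustration of the incompatibility, not the library's actual definitions:

```haskell
import qualified Data.Map as Map
import Data.Text (Text)

-- Placeholder stand-ins for the real proto-lens types, just to make the
-- argument shapes concrete:
newtype Tag = Tag Int deriving (Eq, Ord, Show)
data FieldDescriptor msg = FieldDescriptor

-- The shape the pre-generated code was written against: a constructor
-- taking two maps.
data MessageDescriptorOld msg = MessageDescriptorOld
  (Map.Map Tag    (FieldDescriptor msg))  -- fields keyed by wire tag
  (Map.Map String (FieldDescriptor msg))  -- fields keyed by proto field name

-- The shape proto-lens-0.2.2.0 apparently provides, judging by the errors:
-- a Text argument was prepended and the by-name map switched to Text keys,
-- so the old generated code applies the constructor to too few, and wrongly
-- typed, arguments -- exactly the three errors in the log.
data MessageDescriptorNew msg = MessageDescriptorNew
  Text
  (Map.Map Tag  (FieldDescriptor msg))
  (Map.Map Text (FieldDescriptor msg))
```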

If I 'cabal get' the package, delete the generated Haskell modules, and repackage it (with tar --format=ustar, since everything else makes cabal explode), it does actually start working again (Cabal-2.0.1.1, proto-lens-protoc-0.2.2.3, proto-lens-0.2.2.0), because the chosen version of proto-lens-protoc is used to generate the modules.

Unfortunately cabal and stack become very unhappy if you try to 'sdist' packages that mention Haskell files that don't exist, so the release process for packages that include modules generated by proto-lens is to generate the code, then 'sdist' them, which means the generated code is included in the uploads.

It might be appropriate to update the version bounds on the affected packages to require a version of proto-lens that's compatible with the included generated code (in this case, proto-lens <0.2.2) -- but if older versions of cabal can still compile against other versions, this would be unnecessarily restrictive.

It might also make sense to work around this by releasing a new tensorflow-proto version that's had the generated modules removed, and mark 0.1.0.0 as deprecated.

judah added a commit to judah/proto-lens that referenced this issue May 20, 2018
This makes it possible to generate tarballs without generated modules
(which would be regenerated anyway when the package is built).
See tensorflow/haskell#180 for an example of the issues that this causes.
Contributor

judah commented May 21, 2018

@awpr thanks for pinpointing the problem with the sdist file. I've created google/proto-lens#185 to use the autogen-modules field from Cabal-2.0, which lets us avoid including the generated files in the release tarball.

For tensorflow-proto, given that it's still using proto-lens-0.2.*, in the short term it might make sense to do a new manual release (as you suggested) with the generated files removed, using autogen-modules, and requiring cabal-version: >=2.0.
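As a hedged sketch of that suggestion, such a release's .cabal file might look like the fragment below (version number, module list, and dependency bounds are illustrative, not the actual package description):

```cabal
cabal-version:  >=2.0
name:           tensorflow-proto
build-type:     Custom

library
  -- Modules generated by proto-lens-protoc at build time. Listing them
  -- under autogen-modules (Cabal 2.0+) tells sdist not to expect them on
  -- disk, so the tarball ships without stale generated sources.
  exposed-modules:  Proto.Tensorflow.Core.Framework.AllocationDescription
  autogen-modules:  Proto.Tensorflow.Core.Framework.AllocationDescription
  build-depends:    base, proto-lens == 0.2.2.*
```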

judah added a commit to google/proto-lens that referenced this issue May 22, 2018
* Fix #185: Support and use autogen-modules.

This makes it possible to generate tarballs without generated modules
(which would be regenerated anyway when the package is built).
See tensorflow/haskell#180 for an example of the issues that this causes.

For `Cabal-1.*`, this continues the behavior as before.

Unfortunately, `hpack` requires `cabal-version: >=2.0` when you use
its `generated-modules` or `generated-other-modules` fields.
Our current set of LTSes that we support still includes `Cabal-1.*`
(which I think is correct).  Luckily we could work around that
using `hpack`'s `verbatim` field to accomplish the same thing a little
more verbosely.  Additionally, I mitigated the situation a little by changing
the Cabal test script to not `sdist` packages that we're not releasing
(`proto-lens-{tests/benchmarks}`).

This change required bumping `stack` to `1.7.1` in order to get new enough
versions of `Cabal` and `hpack`.  Happily, it greatly simplifies
the steps for releasing our packages.

* More README updates

PI-Victor commented Jun 14, 2018

I've tried all sorts of workarounds for this on macOS and Fedora. I assume this is still an issue and not fixed yet? Or do I have to pull in some specific protoc/proto-lens versions? Unfortunately I'm a beginner in the Haskell ecosystem and don't understand how to make it compile.
It's a shame; I was really looking forward to using this with Jupyter + IHaskell.

Contributor

fkm3 commented Aug 3, 2018

I've released a new version to Hackage, so the generated files should align with the newest proto-lens version that we claim to support. Does that fix the issue you're experiencing?

I'd have done what @awpr and @judah suggested, but I'm not too familiar with the intricacies of cabal, so I was afraid I'd do more harm than good.

avdv pushed a commit to avdv/proto-lens that referenced this issue Aug 9, 2023