Integrate with the new go-ipld-format decoder framework#14

Merged
whyrusleeping merged 3 commits into ipfs:master from Stebalien:feat/ipld-format-decoder
Jun 20, 2017

Conversation

@Stebalien
Member

Also, pre-compute the CID. We're probably going to need it anyway. The alternatives are:

  1. Compute it every time we need it (slow).
  2. Cache it when we first compute it (requires synchronization for thread safety).

@coveralls

coveralls commented Jun 19, 2017

Coverage Status

Coverage increased (+0.07%) to 57.215% when pulling 3388947 on Stebalien:feat/ipld-format-decoder into a8c7e8c on ipfs:master.

@Stebalien Stebalien force-pushed the feat/ipld-format-decoder branch from 3388947 to 4b04798 on June 19, 2017 at 22:37
@coveralls

coveralls commented Jun 19, 2017

Coverage Status

Coverage decreased (-0.05%) to 63.291% when pulling 4b04798 on Stebalien:feat/ipld-format-decoder into c13297e on ipfs:master.

@Stebalien Stebalien force-pushed the feat/ipld-format-decoder branch from 4b04798 to e0e7215 on June 20, 2017 at 00:35
@coveralls

coveralls commented Jun 20, 2017

Coverage Status

Coverage decreased (-0.05%) to 63.291% when pulling e0e7215 on Stebalien:feat/ipld-format-decoder into c13297e on ipfs:master.

@coveralls

coveralls commented Jun 20, 2017

Coverage Status

Coverage decreased (-0.05%) to 63.291% when pulling 9fd3ab9 on Stebalien:feat/ipld-format-decoder into c13297e on ipfs:master.

Member

@whyrusleeping whyrusleeping left a comment


🚢

@whyrusleeping whyrusleeping merged commit ec62d7b into ipfs:master Jun 20, 2017
@whyrusleeping
Member

Ah, this breaks our tests that assert that CBOR values get normalized to their canonical form when you decode a block from them.

@Stebalien what do you think?

@magik6k
Member

magik6k commented Jun 29, 2017

@Stebalien
Member Author

IMO, it's more important to avoid changing the CID than to normalize. I'd remove the tests (but I'm probably missing prior discussion on this topic).

@whyrusleeping
Member

We can either normalize, or throw an error. Otherwise we can have two objects that are equivalent by value comparison, but have different 'hashes', which is an issue.

@Stebalien
Member Author

Stebalien commented Jun 29, 2017

My primary concern is that block and DecodeBlock(block).(Block) should be interchangeable. Otherwise, I can ask a DAGService for a node with one CID and get back a node with a different CID (because the encoding was normalized under the hood).

However, looking back at this commit, I believe the problem is that Decode takes data that doesn't have a CID attached to it, produces a Block, and then turns that into a Node. Given that the data doesn't already have a CID, there's no problem in normalizing it first.

However, once something has a CID, I think we either need to keep it as-is or throw an error. I'm all for strict validation that fails early but I'm worried that if we always re-encode before generating the CID, we'll never be able to upgrade our CBOR library (also, efficiency).


As an aside, IMO, all comparison of blocks/nodes should compare CIDs and only CIDs.

@whyrusleeping
Member

> However, once something has a CID, I think we either need to keep it as-is or throw an error.

Agreed with this. I think any place where we take in raw data (not a block) should do normalization validation.

@Stebalien
Member Author

> Agreed with this. I think any place where we take in raw data (not a block) should do normalization validation.

Although remember, we may be taking in raw data through a user interface where the user will want us to normalize, not validate (e.g., JSON). Your comment on IRC about having a strict mode versus a non-strict mode is on-point.

@Stebalien Stebalien mentioned this pull request Jun 30, 2017



4 participants