feat: Entity registration HTTP API #3230
Conversation
@@ -1,4 +1,6 @@
//TODO Rename this module to tedge_server
I would prefer http_server, stressing this is about HTTP while tedge is obvious in the context of tedge-agent.
Will be renamed as suggested, right before merging this PR.
//TODO Rename this module to tedge_server
pub mod actor;
pub mod error;
mod file_transfer_controller;
Why not simply file_transfer?
I've renamed it as you suggested.
It actually started as entity_store_controller, since I didn't want to name it entity_store, to differentiate it from the original entity store, and hence ended up adding the controller suffix. I did the same for file_transfer as well.
But now I'm wondering if naming them file_transfer_service and entity_store_service would have been clearer/better.
didier-wenzek left a comment
I suggest moving the responsibility of managing the entity store from the file_transfer_service to the entity_manager, forwarding HTTP entity registration requests to the latter.
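The suggested split could look something like the following sketch: the entity manager is the sole owner of the entity store, and the HTTP layer only validates and forwards registration requests. All type and function names here are illustrative assumptions, not the actual thin-edge.io API.

```rust
use std::collections::HashMap;

// Hypothetical registration payload; real thin-edge.io types differ.
#[derive(Debug, Clone, PartialEq)]
struct EntityRegistration {
    topic_id: String,
    external_id: String,
}

// The entity manager exclusively owns the entity store.
struct EntityManager {
    store: HashMap<String, EntityRegistration>,
}

impl EntityManager {
    fn new() -> Self {
        Self { store: HashMap::new() }
    }

    // The single place where registrations mutate the store.
    // Returns true if the entity was newly registered.
    fn register(&mut self, reg: EntityRegistration) -> bool {
        self.store.insert(reg.topic_id.clone(), reg).is_none()
    }
}

// The HTTP layer forwards to the manager instead of touching the
// entity store itself; returns an HTTP-like status code.
fn handle_http_registration(
    manager: &mut EntityManager,
    topic_id: &str,
    external_id: &str,
) -> u16 {
    let reg = EntityRegistration {
        topic_id: topic_id.to_string(),
        external_id: external_id.to_string(),
    };
    if manager.register(reg) { 201 } else { 409 }
}
```

With this shape, only the entity manager needs to change when persistence or validation rules evolve.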
/// Creates an entity metadata for the main device.
pub fn main_device(device_id: String) -> Self {
    Self {
        topic_id: EntityTopicId::default_main_device(),
This is out-of-scope of this PR, but we must really find a way to support non default topic-id schema.
| } | ||
|
|
||
| #[derive(Debug, Clone, PartialEq, Eq)] | ||
| pub struct EntityMetadata { |
There are now two slightly different definitions of struct EntityMetadata; the former definition has been kept in entity_store.rs.
What's the plan?
As mentioned in #3230 (comment), the entity store and the entity cache need to maintain their own versions of the entity metadata, as the latter is much lighter, with only minimal metadata maintained at the mapper level. Ideally, the EntityMetadata maintained by the mapper would have been as minimal as the XID. But I was forced to store a bit more info, like the entity type, display name and display type, just because of the way we map service health status messages: the service creation message itself (which requires all that metadata) is published for status updates as well.

One way to eliminate caching this additional metadata at the mapper level is to make the mapper query the entity store contents from the agent whenever it needs that metadata. But that extra HTTP call during the mapping complicates the conversion logic, so I chose caching the extra metadata over that added complexity. I'm open to revisiting that decision in a follow-up PR dedicated to that change, as the unit test impact would be quite big.
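To make the trade-off concrete, a minimal sketch of what such a mapper-side cache entry might look like, and why it holds more than the XID. Field and method names are assumptions for illustration, not the actual thin-edge.io types.

```rust
// Hypothetical mapper-side cache entry: more than just the external ID
// (XID), because service health status mapping re-publishes the service
// creation message, which needs the type and display fields.
#[derive(Debug, Clone, PartialEq)]
struct CachedEntityMetadata {
    external_id: String,          // the XID: the minimum the mapper needs
    entity_type: String,          // e.g. "service"
    display_name: Option<String>, // used when rebuilding the creation message
    display_type: Option<String>,
}

impl CachedEntityMetadata {
    // Rebuild a (simplified) service creation payload for health status
    // mapping; this is why the extra fields end up cached at all.
    fn creation_payload(&self) -> String {
        format!(
            "{{\"@type\":\"{}\",\"@id\":\"{}\",\"name\":\"{}\"}}",
            self.entity_type,
            self.external_id,
            self.display_name.clone().unwrap_or_default()
        )
    }
}
```

The alternative discussed above would drop everything but `external_id` and fetch the rest from the agent over HTTP at mapping time.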
But that extra HTTP call during the mapping complicates the conversion logic and I just chose caching extra metadata instead of that added complexity.
Agree, caching registration messages is simpler than late-requests to the entity store.
But could we share the type of cached data? Say raw registration messages? Even if some fields are not currently used in the mapper.
But could we share the type of cached data? Say raw registration messages? Even if some fields are not currently used in the mapper.
Yeah, that is an option. I introduced a different struct to clearly represent the minimal set of things that a mapper needs to do the mapping. There were also minor differences, like the external id (@id) being an Option in the actual EntityRegistrationMessage, while being a mandatory field in the entity cache. But yeah, if we want to avoid having too many "metadata representations" across the crates, I can reuse some of the existing ones, with implicit assumptions about fields like @id.
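The @id mismatch mentioned above could be handled with a fallible conversion that makes the implicit assumption explicit, along these lines. The types are illustrative, not the actual thin-edge.io definitions.

```rust
// In the raw registration message, "@id" may be omitted.
struct EntityRegistrationMessage {
    topic_id: String,
    external_id: Option<String>,
}

// In the mapper's cache, the external id is mandatory.
struct CachedEntity {
    topic_id: String,
    external_id: String,
}

// A TryFrom impl surfaces the "@id must be present" assumption as an
// explicit error instead of an implicit unwrap.
impl TryFrom<EntityRegistrationMessage> for CachedEntity {
    type Error = String;

    fn try_from(msg: EntityRegistrationMessage) -> Result<Self, Self::Error> {
        let external_id = msg
            .external_id
            .ok_or_else(|| format!("@id missing for {}", msg.topic_id))?;
        Ok(CachedEntity {
            topic_id: msg.topic_id,
            external_id,
        })
    }
}
```

Sharing the raw message type everywhere would avoid the second struct, at the cost of scattering such `Option` checks across the mapper.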
We really have to reduce the number of similar but different structs.
Resolved by 8751a76. Even though a single struct is easier to maintain, I'm still tempted to introduce a dedicated EntityMetadataDto struct to capture the message payloads, without overloading this internal EntityMetadata representation with serde aspects. By using the same struct for everything, I'm a bit worried that future changes to this internal metadata representation will be harder, as they'd affect the external representation as well.
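The DTO idea floated here could be sketched as below: keep wire concerns on a dedicated type and convert into the internal one, so the internal representation can evolve without changing the payload format. Names are hypothetical; in real code the serde derives and renaming attributes would sit on the DTO only.

```rust
// Internal representation, free of serde concerns.
#[derive(Debug, Clone, PartialEq)]
struct EntityMetadata {
    topic_id: String,
    external_id: String,
}

// Wire-facing DTO; this is where #[derive(Serialize, Deserialize)] and
// #[serde(rename = "...")] attributes would live. Field names can differ
// from the internal ones without leaking into the rest of the code.
struct EntityMetadataDto {
    topic: String,
    id: String,
}

// A single conversion point isolates internal refactorings from the
// external payload format.
impl From<EntityMetadataDto> for EntityMetadata {
    fn from(dto: EntityMetadataDto) -> Self {
        EntityMetadata {
            topic_id: dto.topic,
            external_id: dto.id,
        }
    }
}
```

The counter-argument from the review below is that the serde overhead on a single shared struct is minimal and mostly about naming.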
I don't fully get your concerns.
- The extra lines for serde are minimal and only related to naming.
- Is this really using the same struct for everything? These metadata are exchanged over MQTT and stored in-memory. What's wrong?
The implementation of
Fixed.
Fixed by properly handling the MQTT "clear" messages published by the agent, which are re-delivered to the agent and the mapper after a successful deletion over HTTP.
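The fix relies on standard MQTT retained-message semantics: publishing an empty retained payload on the entity's registration topic removes the retained message from the broker and is delivered to all subscribers (here, agent and mapper) as the deletion signal. A minimal std-only sketch, modeling the broker's retained store as a map; all names are illustrative.

```rust
use std::collections::HashMap;

// Toy model of a broker's retained-message store.
struct RetainedStore {
    retained: HashMap<String, String>,
}

impl RetainedStore {
    fn new() -> Self {
        Self { retained: HashMap::new() }
    }

    // Per MQTT semantics, a retained publish with an empty payload
    // clears the retained message for that topic; a non-empty payload
    // replaces it.
    fn publish_retained(&mut self, topic: &str, payload: &str) {
        if payload.is_empty() {
            self.retained.remove(topic);
        } else {
            self.retained.insert(topic.to_string(), payload.to_string());
        }
    }
}
```

After a successful HTTP deletion, the agent would issue the empty retained publish, so late-joining subscribers never see the deleted entity's registration message either.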
This PR can be merged once the file_transfer_server module is renamed.
Thank you for the sustained effort!
Required follow-up tasks:
- PUT and PATCH implementation
- Query over registered entities
- Publishing entity properties as twin metadata
- Propagating entity deletions to the cloud, controlled by a config setting
Proposed changes
- entity_store usage from the mappers, extracting only the external ID aspects from it
- file_transfer_server module into tedge_server

Types of changes
Paste Link to the issue
Checklist
- cargo fmt as mentioned in CODING_GUIDELINES
- cargo clippy as mentioned in CODING_GUIDELINES

Further comments