Reuse object/data memory for replication workers #2316

@roman-khimov

Description

Is your feature request related to a problem? Please describe.

I'm always frustrated when I'm looking at the replication worker code. It takes an object from the storage, decompresses it, unmarshals it, and then pushes the result to some other node. It allocates for the raw data, it allocates for the decompressed data, and it allocates for the object and all of its fields. For big objects this means a heck of a lot of allocations.

Describe the solution you'd like

Have one per-replicator buffer for raw data, one for decompressed data, and one reusable object. Reuse them across replication tasks. This is likely not supported by our APIs at the moment, but that can probably be changed.

Additional context

#2300/#2178


    Labels

    I4: No visible changes
    S2: Regular significance
    U4: Nothing urgent
    enhancement: Improving existing functionality
    neofs-storage: Storage node application issues
    performance: More of something per second
