Closed
Labels
- feature request: Issues that request new features to be added to Node.js.
- memory: Issues and PRs related to the memory management or memory footprint.
- module: Issues and PRs related to the module subsystem.
- stale
Description
- Version: 6.5.0
- Platform: FreeBSD 11.0 BETA1 x64
- Subsystem: module, require
I have a function that uncaches a module and its children:

```js
function uncacheTree(root) {
  // Start from the resolved id of the root module.
  let uncache = [require.resolve(root)];
  do {
    let newuncache = [];
    for (let i = 0; i < uncache.length; ++i) {
      if (require.cache[uncache[i]]) {
        // Queue this module's children for removal, skipping native addons.
        newuncache.push.apply(newuncache,
          require.cache[uncache[i]].children
            .filter(cachedModule => !cachedModule.id.endsWith('.node'))
            .map(cachedModule => cachedModule.id)
        );
        delete require.cache[uncache[i]];
      }
    }
    uncache = newuncache;
  } while (uncache.length > 0);
}
```

After running it, I require the uncached module again. My server consistently ran out of memory within a few days, and after taking a core dump I found (according to mdb_v8) that the exported objects from every module I had hotpatched were still on the heap. What's causing this leak, and is there any way to fix it? I'm not sure whether the problem is in Node itself or in my own code holding references to all the zombie exports.