Thanks for your question!

> Does anyone have experience running thousands of nodes?

Yes! Shadow is routinely used for simulations with many thousands of nodes. However, it's usually done on machines with many more resources than the one you've posted.
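Before sizing a large simulation, it's worth confirming what the machine actually has. On Linux, a quick check looks like this:

```shell
# Check total RAM and CPU count before sizing a large simulation
grep MemTotal /proc/meminfo
nproc
```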

> and the shared memory probably cannot be swapped out?

I believe Linux kernel memory can't be paged out [0], and simulating many processes consumes a noticeable amount of kernel memory per process (a `task_struct` kernel object alone is roughly 1-2 kilobytes per process, and other per-process kernel objects add to that). You may also need to tune your machine's virtual-memory parameters before you get the swapping behavior you want [1].
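For example, the relevant knobs live under `vm.*`. Inspecting them (and, as root, adjusting them) looks roughly like this; `vm.swappiness` and `vm.overcommit_memory` are real sysctls, but the right values depend entirely on your workload:

```shell
# Inspect current virtual-memory tuning (Linux sysctls)
cat /proc/sys/vm/swappiness          # how willingly the kernel swaps out anonymous pages
cat /proc/sys/vm/overcommit_memory   # 0/1/2: heuristic / always overcommit / strict accounting
# To change one as root, e.g.: sysctl -w vm.swappiness=60
```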

> I created some swap file of 230G
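For reference, a swap file of that size is typically set up along these lines with standard util-linux tooling (run as root; `/swapfile` is just an example path):

```shell
# Allocate, format, and enable a 230G swap file (run as root)
fallocate -l 230G /swapfile   # or: dd if=/dev/zero of=/swapfile bs=1G count=230
chmod 600 /swapfile           # swap files must not be world-readable
mkswap /swapfile              # write the swap signature
swapon /swapfile              # enable it
swapon --show                 # confirm it's active
```

To keep it enabled across reboots, add `/swapfile none swap sw 0 0` to `/etc/fstab`.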

Answer selected by robgjansen