Thanks to @ZedThree, I can now build the docs for the library I work on. However, in my testing I was always setting `graph: false` because the graphs took a while. But, now that v6.1.12 is out, I can look to the fancier bits of FORD.
So, my first test was hardware-based. I mainly develop on my MacBook Pro (8-core, Coffee Lake, 32 GB RAM), but I also can build on a node of the cluster I work on (48-core, Cascade Lake, 192 GB RAM). I also decided to try out generating graphs on GitHub Actions (2-core, 7 GB RAM per this page). (ETA: I added an M1 MacBook I have access to. Still quite slow, but 2x faster than my Intel MacBook. So odd.)
Between these machines (for the "Graph" column, `graph_maxdepth: 4` and `graph_maxnodes: 32`):
| Machine        | No Graph | Graph    |
|----------------|----------|----------|
| M1 MacBook     | 00:01:06 | 00:43:20 |
| Intel MacBook  | 00:01:44 | 01:55:58 |
| Cluster        | 00:03:48 | 00:16:42 |
| GitHub         | 00:01:43 | 00:06:34 |
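For context, these graph settings are just options in the FORD project file's front matter; a minimal sketch of the relevant lines (option names per the FORD documentation, values as in the runs above):

```
graph: true
graph_maxdepth: 4
graph_maxnodes: 32
```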
Yes. Almost 2 hours on my laptop with graphs on. That is...weird. I mean, yes, my laptop isn't as good as a compute node of a cluster, but that much worse? And if it were number of cores or memory, you'd think GitHub Actions would lose. (Also a bit surprised about the slow no-graph time on the cluster, but it uses a shared disk, so file generation can sometimes be slower there.)
Second, looking at the docs, it seems like `graph_maxnodes` and `graph_maxdepth` might be tuning knobs to try and make graph generation cheaper. So, since the cluster is doing "okay", I did some tests on it:
| `graph_maxdepth` | `graph_maxnodes` | Time  |
|------------------|------------------|-------|
| 2                | 16               | 15:57 |
| 4                | 32               | 16:20 |
| 8                | 64               | 16:23 |
| 16               | 128              | 17:15 |
| 1024             | 8192             | 28:18 |
So, for now at least, if I don't go crazy, these two knobs don't do too much.
So yeah...any idea why my MacBook is so amazingly much slower than the cluster or a GitHub Actions VM when graphs are turned on?