Description
I want to move some of the discussion from this ticket over here.
I've been poking around the Synfig source code and have found a few spots where instructions can be reordered and rewritten to better take advantage of SIMD optimizations. To verify that this is actually the case, I wrote two Python scripts.
The first does a few render passes on all of the .sif files from the synfig-tests repo, then dumps the results into a CSV file. The second script reads two CSV files and charts them (using pyplot) for a basic comparison (e.g. mean run time, error variance, etc.). The idea is that you first do a run that acts as the "reference run", then make some code changes and do a comparison run.
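The core of the measurement script can be sketched roughly like this. This is a minimal illustration, not the actual script: the `measure` helper, the column names, and the idea of passing the render step in as a callable are all my own assumptions here.

```python
import time


def measure(files, render, passes=3):
    """Time several render passes per file and return CSV-ready rows.

    `render` is a callable that renders one file (e.g. a wrapper around
    invoking the synfig CLI on a .sif file); timing it repeatedly lets
    you average out run-to-run noise.
    """
    rows = []
    for f in files:
        for n in range(passes):
            start = time.perf_counter()
            render(f)
            # One row per (file, pass); these rows are what would be
            # written out with csv.DictWriter in the real script.
            rows.append({
                "file": f,
                "pass": n,
                "seconds": time.perf_counter() - start,
            })
    return rows
```

In the real script the rows would then be dumped to a CSV file so that a later comparison run has a stable reference to diff against.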
Would you like me to submit a PR for these scripts or do you just want a gist of them? I think they could be helpful for anyone else who wants to do perf profiling. If you want to merge them, where should they be placed?
Right now the "measurement" script needs to be run from the build directory. And the synfig-tests repo needs to be cloned into that directory as well. The "comparison graph" script just needs to be supplied two CSV output files from the measurement script.
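The comparison side could look something like the sketch below: load each CSV, compute per-file mean and variance, and derive a speedup ratio per file (the charting itself would sit on top of this, e.g. with pyplot). The column names `file` and `seconds` and the function names are hypothetical; the real scripts may be laid out differently.

```python
import csv
import statistics


def summarize(csv_path):
    """Per-file (mean, variance) of render times from a measurement CSV.

    Assumes the CSV has "file" and "seconds" columns (hypothetical layout).
    """
    times = {}
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            times.setdefault(row["file"], []).append(float(row["seconds"]))
    return {f: (statistics.mean(t), statistics.pvariance(t))
            for f, t in times.items()}


def compare(ref_csv, test_csv):
    """Speedup ratio per file: > 1.0 means the test run was faster."""
    ref, test = summarize(ref_csv), summarize(test_csv)
    return {f: ref[f][0] / test[f][0] for f in ref if f in test}
```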
There are more changes I want to add to the scripts, but I'd like to get them in as an initial version.
I also made one (very) minuscule perf adjustment that I'd like to submit. I'll put that in as its own PR. Do you need me to provide any proof (such as a chart) showing the performance measurement?