Description
Derived from discussion in #1541
Describe the bug
The output field WALL_TIME is supposed to hold the "Current average wall-clock time for one iteration", as per the documentation and dry-run output.
However, the screen and history outputs of this field do not actually seem to represent this value. The screen output at the 0th inner iteration of each time-iteration appears to show the wall-clock time accumulated since the last output (or restart file) was written. What the screen and history outputs of the other inner iterations within a time step mean is not obvious; see the attached discussion for more details.
Expected behaviour
During transient simulations, the WALL_TIME history output should be either the average WCT for one time-iteration or the accumulated WCT since the start of the simulation.
The screen output of WALL_TIME should be either the average WCT for a time-iteration (and thus the same for all inner iterations) or, better, the correctly computed average WCT of one inner iteration.
To Reproduce
See the attached mesh and config, but any transient simulation with WALL_TIME enabled and implicit time integration should do.
MeshAndConfig.zip
Desktop (please complete the following information):
- OS: SLES 15
- C++ compiler and version: icpc (intel 19.0.5.281)
- MPI implementation and version: intelMPI 2019
- SU2 Version: 7.3.0 (master/develop)
Originally posted by ChristianBauerEng February 8, 2022
Hi All!
I'm currently running transient, 2D axisymmetric simulations of an oscillating resonator. For now I'm still in the validation and testing phase, but the results look promising!
However, I've noticed that solver performance (as in time per iteration) seems to degrade over the runtime, so I wanted to measure it by including WALL_TIME in the screen and history output. I still have trouble understanding the outputs, though.
The screen output looks like this:
+-------------------------------------
| Time_Iter| Inner_Iter| Time(sec)
+-------------------------------------
| 45330| 0| 6.3412e+02
| 45330| 5| 1.0572e+02
| 45330| 10| 5.7680e+01
+-------------------------------------
| Time_Iter| Inner_Iter| Time(sec)
+-------------------------------------
| 45340| 0| 6.3783e+02
| 45340| 5| 1.0633e+02
| 45340| 9| 6.3815e+01
The Time(sec) values at the 0th inner iteration seem to increase monotonically and, I assume, represent the accumulated wall-clock time since the start of the simulation. But what do the Time(sec) values for the other inner iterations mean?
In the history output, the Time(sec) field looks different, as can be seen in the upper part of the attached figure.
In the history file, only the last Time(sec) value of each time-iteration is printed. It also tends to increase, but not monotonically, and it appears to be reset to 0 each time output is written (see the attached plot).
What does this value in the history output actually mean? Is it time/iteration? I've noticed a performance degradation over time, but not to this extent (I've monitored the actual runtime only sporadically, though).
If the performance degradation is in fact so severe, what could be the cause?
I'd really be happy if one of the veterans could chip in their ideas here!
