Dear OpenFAST team,
I’m running a set of simulations in OpenFAST to understand how the global time step (DT) influences numerical stability.
My simulation runs stably with DT=0.005 s. Since my goal is a smoother controller response, I increased it:
- With DT=0.006 s, the simulation still runs fine. Above that value, the simulation stops: it doesn’t crash with an explicit error, it simply stops progressing.
When this happened, I searched the forum for a similar problem, which led me to this thread: PTDM implementation - #15 by Emanuel.Rergis . After reading it, I understood two things:
- The standard rule for an optimal DT is DT<1/(10*f_max)
- Increasing NumCrctn to 1 or more may allow larger time steps. However, even when I set NumCrctn=1 (keeping DT>0.006), the simulation still stops (the DT=0.005 and DT=0.006 runs completed successfully with NumCrctn=0).
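For reference, here is a minimal sketch of how I am interpreting the DT < 1/(10*f_max) rule numerically. The 20 Hz value is purely hypothetical (I do not know my system's actual f_max, which is part of my question), but it would be consistent with 0.005 s working and larger steps failing:

```python
def max_recommended_dt(f_max_hz: float, safety_factor: float = 10.0) -> float:
    """Upper bound on DT from the rule of thumb DT < 1/(safety_factor * f_max).

    f_max_hz: highest frequency to resolve, in Hz (hypothetical here).
    """
    return 1.0 / (safety_factor * f_max_hz)

# If the highest frequency in the model were ~20 Hz (an assumption, not a
# measured value), the rule would give DT < 0.005 s, matching what I observe:
print(max_recommended_dt(20.0))  # 0.005
```

Is this the right way to read the rule, i.e. should I identify f_max first and derive DT from it rather than tuning DT by trial and error?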
In parallel, I noticed that the log prints this warning whenever DT is higher than 0.006:
BEMT_UpdateStates: UA_UpdateStates: Divergent states in UA HGM model
So, I would like to ask:
- When you refer to f_max, do you mean the highest frequency of the entire system, regardless of the component?
- Could you please clarify how DT and NumCrctn interact numerically? Are there recommended or practical ranges for NumCrctn when increasing DT?
- Could this warning message be linked to an inadequate DT/NumCrctn combination?
Any recommendation would be appreciated.
Kind regards,
Kepa