OpenFAST Simulation Time Ratio (Sim/CPU)

Hi,

Does anyone have recommendations for achieving faster simulation times? I’ve currently compiled everything inside an Ubuntu-based Docker container. Is speed compiler-dependent? At the moment I get a Sim/CPU ratio of around 0.333–0.5. Can one sacrifice accuracy at small temporal/spatial scales but still be confident in broader interpretations of the results, such as the standard deviation of macro signals like foundation moments? I’m principally using ElastoDyn, AeroDyn, and ServoDyn (with the ROSCO controller) at the moment, although eventually I’ll also need SubDyn and HydroDyn.

Can one adjust NumCrctn and/or DT in such a way as to significantly speed up simulations whilst maintaining accuracy and stability?

Finally, for batch processing of simulations, what typical Sim/CPU values can one expect to obtain?

Thanks,
Sam

Hi Sam,

Certainly the computational speed of OpenFAST will depend on the compiler and optimization settings you’ve used. I would normally recommend using an Intel compiler with -O2 (maximize speed) optimization.

Independent of that, the computational expense will be impacted by the features you’ve enabled and by the time step (DT) and number of corrections (NumCrctn) you’ve selected in the glue-code input file.
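If you want to sweep DT and NumCrctn across a batch of cases, it can be convenient to edit the glue-code (.fst) input text programmatically rather than by hand. Below is a minimal sketch of such a helper; `set_fst_param` is a hypothetical function of my own, not part of any OpenFAST tooling, and it simply relies on the value-then-name layout of OpenFAST input lines.

```python
import re

def set_fst_param(fst_text: str, name: str, value: str) -> str:
    """Replace the value of a named parameter in OpenFAST input text.

    OpenFAST input lines put the value first, then the parameter name, e.g.:
        0.0250   DT   - Recommended module time step (s)
    This is a hypothetical convenience helper, not an official OpenFAST API.
    """
    # Match leading whitespace, the current value token, then the name.
    pattern = re.compile(rf"^(\s*)\S+(\s+{name}\b)", re.MULTILINE)
    return pattern.sub(rf"\g<1>{value}\g<2>", fst_text)

# Example: halve the time step and add one correction iteration.
text = (
    "   0.0250   DT        - Integration time step (s)\n"
    "   0        NumCrctn  - Number of correction iterations\n"
)
text = set_fst_param(text, "DT", "0.0125")
text = set_fst_param(text, "NumCrctn", "1")
```

Note that halving DT roughly doubles the number of integration steps, and each correction adds another pass through the coupled modules, so both changes trade speed for accuracy/stability directly.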

This topic has been discussed in prior forum topics, e.g. see: http://forums.nrel.gov/t/fast-computation-time-and-improvement/1235/10.
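On the batch-processing question: independent simulations parallelize trivially, so on a multi-core machine the aggregate throughput scales with the number of concurrent runs even if each individual run stays below real time. A minimal sketch, assuming an `openfast` executable is on your PATH and each case has its own .fst file:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_batch(commands, max_workers=4):
    """Run independent simulation commands in parallel.

    Each entry in `commands` is an argv list, e.g. ["openfast", "case01.fst"]
    (executable name assumed). Returns exit codes in input order. Threads
    suffice here because each job is an external process, not Python work.
    """
    def run(cmd):
        return subprocess.run(cmd, capture_output=True).returncode

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run, commands))

# Hypothetical usage:
# codes = run_batch([["openfast", f"case{i:02d}.fst"] for i in range(8)])
```

With, say, 8 cases running concurrently at an individual Sim/CPU of 0.4, the batch as a whole effectively processes about 3.2 seconds of simulation per wall-clock second.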

Best regards,