I have a dilemma: I am trying to automate running FAST through several thousand iterations, but my options are limited because the computing time for my model is very long. Is there any information on how each individual module (ElastoDyn, AeroDyn, HydroDyn, etc.) contributes to the total computing time? Assuming these modules are the reason for the slow computation, what can be done in general to reduce computing time without turning them off?
My current FAST simulation has a very poor time ratio (sim/CPU less than 1.0).
It appears that the computing time depends strongly on the compiler used. Our experience shows differences between Linux gfortran and Intel Fortran, and even different versions of Intel Fortran differ in speed. It is worth trying.
Try increasing the time step of the solver if possible.
For a very large number of load cases, we use HTCondor, an open-source tool that distributes the individual load cases across several workstations on our network. This really boosts the computation of full load sets.
Dear Kan Ito,
FAST v8 does not currently write out information regarding how much each module contributes to the total simulation time. However, you can find such information through the use of a profiler.
In addition to the compiler and time-step changes recommended by Florian, you can also play with the structural discretization, including the:
- Number of blade and tower structural-analysis nodes and enabled degrees-of-freedom (DOFs) in ElastoDyn
- Number of structural analysis nodes and retained Craig-Bampton modes in SubDyn
- Number of blade and tower aerodynamic analysis nodes in AeroDyn
- Number of hydrodynamic analysis nodes in the strip-theory solution of HydroDyn
With the number of nodes, there is always a trade-off between accuracy and computational speed.
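When sweeping the node counts to find that trade-off, it helps to script the edits to the input files rather than change them by hand. FAST-style input files place the value before the parameter name on each line, so a small regex substitution is enough. This is a sketch; `BldNodes` is the ElastoDyn blade-node count, but double-check the parameter names against your own input files.

```python
import re

def set_param(text, name, value):
    """Replace the value of a FAST-style 'value  Name' input line.

    FAST input files put the value first, then the parameter name,
    so we substitute the leading token on the first matching line.
    """
    pattern = re.compile(rf"^\s*\S+(\s+{name}\b)", re.MULTILINE)
    return pattern.sub(rf"{value}\1", text, count=1)

# Example: halve the blade discretization on an ElastoDyn input line
line = "  20   BldNodes    - Number of blade nodes (per blade) used for analysis (-)\n"
print(set_param(line, "BldNodes", 10))
```

Looping this over a range of node counts and comparing output statistics against the finest discretization gives a simple convergence study: pick the coarsest setting whose loads still agree within your accuracy tolerance.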
As already mentioned, this depends on your turbine configuration and your accuracy demands. However, if you are modeling multi-member support structures, HydroDyn is the module that slows FAST down the most. You should perform a preliminary study to determine whether it is possible to neglect wave loads (often possible for smaller jackets and fatigue load cases). If so, you can speed up a single simulation by a factor of 10.
I can see why HydroDyn would account for a large share of the computation. I am simulating the NREL OC3 5-MW monopile turbine, so taking out the HydroDyn module is not an option because I am interested in the coupling effects of the water and wind. I am running the standalone HydroDyn driver to pinpoint which part of this module is so computationally heavy. However, when I run HydroDyn standalone, the simulation time is very small compared to running the entire FAST code. Do you know why this is so?
The problem is that FAST has to map the HydroDyn mesh to the SubDyn mesh. I guess this takes most of the time. However, a monopile foundation is not as bad as a jacket concerning the wave loads.
Another way to deal with your problem is to use a multi-core CPU and run multiple simulations simultaneously…
I have compiled a Linux version of OpenFAST using gfortran and am running tests on the OC3 Hywind spar. I am experiencing extremely low sim/CPU ratios of roughly 0.4. I aim to run roughly 1700 simulations of 1 hour each, so I need to improve this sim/CPU ratio substantially.
On my Windows system, these tests run at a sim/CPU ratio of about 2.5. Has anyone attempted building OpenFAST with different GNU compiler versions on Linux? Does using a different (newer) compiler version make a considerable difference to the computation times?
Is there anything that can be done differently during compilation, i.e., different compilation flags, that can affect the computation speed?
I don’t have personal experience with gfortran, but from what I’ve heard, the Intel Fortran compiler (while not free) compiles much more optimized code, which will run many times faster than the same code compiled with gfortran. If you haven’t already, I would also compile in single precision rather than double.
Thanks for your helpful reply. I followed your advice and compiled using the Intel 19.0.1 Parallel Studio Fortran compiler. I also compiled in single precision. This doubled my computation speed, but it is still quite slow at a sim/CPU ratio of 0.8.
If you, or anyone else, would have any other pointers on improving computation speed they would be greatly appreciated!
As discussed above, reducing the number of analysis nodes in the various FAST modules and/or disabling features and/or structural degrees of freedom (DOFs) are ways to speed up the calculation. When you disable features and/or structural DOFs, you can likely increase the simulation time step(s), which will further increase the speed.
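On choosing that larger time step: a common rule of thumb is to size the step so the highest natural frequency among the enabled DOFs is still resolved with about ten steps per period. A tiny sketch of that rule (the 4-Hz example frequency is illustrative, not from any particular model):

```python
def recommended_dt(f_max_hz, steps_per_period=10):
    """Rule-of-thumb maximum time step: resolve the highest enabled
    natural frequency with ~10 steps per period."""
    return 1.0 / (steps_per_period * f_max_hz)

# If the highest enabled full-system mode is near 4 Hz:
print(recommended_dt(4.0))  # 0.025 s
```

Disabling the highest-frequency DOFs lowers `f_max_hz`, which is exactly why it permits a larger step; always verify the chosen step against a converged reference run before committing to a full load set.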