When I reached step 5c and typed the command in cmd (python manualRegressionTest.py …\build\bin\openfast_x64_Double.exe Windows Intel 1e-5), I got the following error and could not execute the regression test.
The regression testing system was recently changed so that there is no dependency on the operating system or compiler type. The error message that you’ve copied from the script shows the correct suggested syntax:
I have tried the revised command and got the following result: most of the tests failed. May I know whether that’s related to the location of OpenFAST_x64.exe or to the baseline files?
The regression tests are run with a Python script, manualRegressionTest.py, like you’ve already seen. In the output you’ve posted, there’s a Python error at the bottom that should give you an indication of how to proceed. You can reference the continuous integration script for how to install the dependencies, see here.
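For illustration, a rough way to check which Python dependencies are missing before running the script. The module names below are purely illustrative; the authoritative dependency list is in the CI script mentioned above.

```python
import importlib.util

def missing_modules(names):
    """Return the subset of module names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Illustrative names only -- consult the repository's CI script for the
# actual dependencies used by manualRegressionTest.py.
candidates = ["numpy", "some_module_you_have_not_installed"]
print(missing_modules(candidates))
```

Anything printed by this sketch would need to be installed (e.g. with pip) before the regression script can run.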
Thanks for the instructions. I have fixed the dependency problem. Although I can now execute the script, all of the test cases still fail. Please refer to the log below:
I wonder if that’s related to the OpenFAST_x64.exe that I built with MS Visual Studio Professional 2019.
It could be. We’ll need more information to find the problem.
A good start is to simplify the problem by running only a single test case. You can find the syntax for this by passing the -h flag to the regression test script: python manualRegressionTest.py -h.
After you’ve constructed the command to run a single case, then turn on verbose output. The flag for this is also described in the help prompt.
Please update here with the commands you’ve used and the results.
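As a sketch of what that command construction might look like, here is a small Python wrapper. Note that every path, the -case flag, and the -v flag below are assumptions for illustration only; confirm the real positional arguments and flag names from the script’s -h output.

```python
import subprocess
from pathlib import Path

# Everything below is hypothetical -- confirm the actual flag names and
# positional arguments with `python manualRegressionTest.py -h`.
script = Path("reg_tests/manualRegressionTest.py")
executable = Path("build/bin/openfast_x64_Double.exe")
case = "5MW_OC4Jckt_ExtPtfm"   # example case name; pick one from your output

cmd = ["python", str(script), str(executable),
       "-case", case,          # assumed flag for selecting a single case
       "-v"]                   # assumed verbose flag; check -h
print(" ".join(cmd))

# Launch only if the script is actually present in this working directory.
if script.exists():
    subprocess.run(cmd, check=False)
```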
I have just figured out the reason for the failed tests: it was related to the path of the openfast_x64_Double.exe. I changed the path, executed the script again, and got the following result:
Thanks for following up, and it’s great to see you’ve made progress. Just so you know, there’s an option to generate html-based plots via the -p flag. I recommend using that to evaluate your results.
For the failing 5MW cases, I think the next step is to inspect the output from the cases themselves. Find the directory containing the input files and that’s where the output files will be. Specifically, look at the .log and .out files. Let me know if you find anything of interest there.
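To make that inspection quicker across several cases, here is a minimal sketch that collects suspicious lines from the .log and .out files in a case directory. The keyword list is just a guess at useful search terms, not anything the OpenFAST tooling defines.

```python
from pathlib import Path

def scan_logs(case_dir, keywords=("Error", "Aborting", "improperly")):
    """Collect lines from .log and .out files in case_dir that mention a keyword."""
    hits = {}
    files = sorted(Path(case_dir).glob("*.log")) + sorted(Path(case_dir).glob("*.out"))
    for f in files:
        flagged = [line.strip()
                   for line in f.read_text(errors="ignore").splitlines()
                   if any(k in line for k in keywords)]
        if flagged:
            hits[f.name] = flagged
    return hits
```

Point it at the directory containing the case’s .fst file and it returns a dict mapping file names to the matching lines.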
I have checked the .log file of the test. It says ‘ElastoDyn input file not found or improperly formatted’. Please refer to the complete log below for your reference:
**************************************************************************************************
OpenFAST
Copyright (C) 2021 National Renewable Energy Laboratory
Copyright (C) 2021 Envision Energy USA LTD
This program is licensed under Apache License Version 2.0 and comes with ABSOLUTELY NO WARRANTY.
See the "LICENSE" file distributed with this software for details.
**************************************************************************************************
OpenFAST-v2.5.0-10-gb4fd3acc
Compile Info:
- Compiler: GCC version 10.2.0
- Architecture: 64 bit
- Precision: double
- Date: Jan 14 2021
- Time: 17:19:34
Execution Info:
- Date: 01/14/2021
- Time: 17:42:54-0600
OpenFAST input file heading:
MonopileOnly
Running ElastoDyn.
Nodal outputs section of ElastoDyn input file not found or improperly formatted.
Running ExtPtfm_MCKF.
Time: 0 of 25 seconds.
Time: 1 of 25 seconds. Estimated final completion at 17:43:22.
Time: 2 of 25 seconds. Estimated final completion at 17:43:21.
Time: 3 of 25 seconds. Estimated final completion at 17:43:21.
Time: 4 of 25 seconds. Estimated final completion at 17:43:22.
Time: 5 of 25 seconds. Estimated final completion at 17:43:20.
Time: 6 of 25 seconds. Estimated final completion at 17:43:21.
Time: 7 of 25 seconds. Estimated final completion at 17:43:22.
Time: 8 of 25 seconds. Estimated final completion at 17:43:22.
Time: 9 of 25 seconds. Estimated final completion at 17:43:22.
Time: 10 of 25 seconds. Estimated final completion at 17:43:21.
Time: 11 of 25 seconds. Estimated final completion at 17:43:22.
Time: 12 of 25 seconds. Estimated final completion at 17:43:21.
Time: 13 of 25 seconds. Estimated final completion at 17:43:22.
Time: 14 of 25 seconds. Estimated final completion at 17:43:21.
Time: 15 of 25 seconds. Estimated final completion at 17:43:21.
Time: 16 of 25 seconds. Estimated final completion at 17:43:22.
Time: 17 of 25 seconds. Estimated final completion at 17:43:22.
Time: 18 of 25 seconds. Estimated final completion at 17:43:22.
Time: 19 of 25 seconds. Estimated final completion at 17:43:22.
Time: 20 of 25 seconds. Estimated final completion at 17:43:22.
Time: 21 of 25 seconds. Estimated final completion at 17:43:22.
Time: 22 of 25 seconds. Estimated final completion at 17:43:22.
Time: 23 of 25 seconds. Estimated final completion at 17:43:22.
Time: 24 of 25 seconds. Estimated final completion at 17:43:22.
Time: 25 of 25 seconds. Estimated final completion at 17:43:22.
Total Real Time: 27.702 seconds
Total CPU Time: 27.453 seconds
Simulation CPU Time: 27.375 seconds
Simulated Time: 25 seconds
Time Ratio (Sim/CPU): 0.91324
OpenFAST terminated normally.
Given that the ElastoDyn input file was probably missing, is that related to my installation issue?
Yes, you can always ignore warnings about “nodal output”. These simply indicate that the nodal outputs section is not included in your input file(s); this section is optional anyway.
I’m not sure I understand your question about “test failed”.
@Jacky.Cheung The regression tests run the suite of test cases and check that the results haven’t changed - that’s the “regression” in “regression test”. So in your case, the “COMPLETE with code 0” part indicates that the simulation completed successfully. After that, the results comparison runs, and it looks like that comparison is what failed. I mentioned generating the plots earlier; were you able to do that?
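Conceptually, that comparison checks the new output channels against baseline values within a tolerance. The following is a minimal sketch of the idea only, not the actual algorithm the OpenFAST pass/fail scripts use; consult those scripts for the real criterion.

```python
def within_tolerance(new, baseline, rtol=1e-5, atol=1e-12):
    """True if every paired value agrees within a mixed abs/rel tolerance."""
    return all(abs(a - b) <= atol + rtol * abs(b)
               for a, b in zip(new, baseline))

# Loosening rtol lets marginal differences pass, which is why a case can
# fail at a tight tolerance and pass at a looser one.
print(within_tolerance([1.000001, 2.0], [1.0, 2.0], rtol=1e-5))  # True
print(within_tolerance([1.01, 2.0], [1.0, 2.0], rtol=1e-5))      # False
```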
Also, since you’ve already run the test case, you can do the post processing (i.e. visualization of results) without rerunning the simulation by including the -n flag in the Python script.
With -p, the script creates a .html file for each case that completes successfully. So, in the same directory as the .fst file, look for a .html file with the same name as the test case itself. You can open the html file with any web browser.
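If you want to check for that file programmatically, a small sketch (the function name and layout assumption are mine, based only on the description above that the .html file sits next to the .fst file and shares the case name):

```python
from pathlib import Path

def find_plot(case_dir, case_name):
    """Return the <case_name>.html path next to the case's .fst file, or None."""
    candidate = Path(case_dir) / f"{case_name}.html"
    return candidate if candidate.exists() else None
```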
I have added -p (or -plot) to the command, but no .html file was generated under the test case directory (the screenshot above shows the directory …\openfast-3.4.0\reg_tests\r-test\glue-codes\openfast\5MW_OC4Jckt_ExtPtfm). May I know if there was anything wrong with the command?
Apart from the -p issue, I have run the test case again with higher tolerance values (1e-5 and 0.5e-5), and the test passed. Can you advise a reasonable tolerance level for the test?