Hello, I downloaded Crunch (I don't have the necessary toolboxes for MCrunch), ran CertTest, and got differences in test03b.pek (the original file has more data). I would upload the files, but the forum does not accept *.out or *.pek attachments. Would you please let me know whether I should be concerned about these differences?
Were the differences in the last decimal place? If so, I wouldn’t worry about it.
Did you recompile the code? If so, I would expect to see small differences, because calculations may take place in a different order. That is, is a = b*c/d evaluated as a = (b*c)/d or as a = b*(c/d)?
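As an aside, this reordering effect is easy to demonstrate in any language, because floating-point arithmetic is not associative. Here is a small Python illustration (the particular values are arbitrary, chosen only because the rounding difference is well known; the same principle applies to b*c/d):

```python
# Floating-point arithmetic is not associative: grouping the same
# operations differently can change the last bits of the result.
left = (0.1 + 0.2) + 0.3   # 0.1 + 0.2 rounds to 0.30000000000000004
right = 0.1 + (0.2 + 0.3)  # 0.2 + 0.3 rounds to exactly 0.5
print(left == right)       # False: same math, different rounding
print(left - right)        # a one-ulp discrepancy near 1e-16
```

This is why a recompile (or a different compiler optimization level) can legitimately change the last decimal place of otherwise identical output.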
Is there some reason you are not using the compiled version of MCrunch? I spent $5000 on a compiler just for you.
You can email me files if you like.
I did not recompile Crunch. The difference is that my run is missing some data points. I will email you the file.
Sorry that I am not using the compiled version of MCrunch; I was not aware (or could not remember) that you made one. I will use it on my next project.
Thanks for sending the file. Yes, it looks as if your CertTest run generated fewer peaks than I got on my PC when I built the archive. I downloaded the Crunch archive off the web server, exploded the archive in a temporary location, and reran CertTest and it did not detect any of the problems you saw.
I’m not using the same PC as I did years ago. My current PC is a 64-bit Windows 7 box, while the old one was likely a 32-bit XP box.
As for whether or not to be worried about this, it does concern me that you are getting different results than I am. Unfortunately, I have no idea how to resolve the issue, as I cannot reproduce it here. You are also the first person to report the problem and the program was released three years ago, so it sounds as if your PC doesn’t like you very much. Have you hugged it recently? Are you running Windows on a Mac or something?
I’m kind of at a loss…
It’s a Gateway (?!?) Windows XP. I’m not doing any peak finding with my current analysis, just regular statistics.
You might find that GenStats (http://wind.nrel.gov/designcodes/postprocessors/genstats/) is a more efficient tool if all you want is statistics. It holds only one file in memory at a time, and it also has a much smaller input file.
Yes, but I’m Crunching 60 files at once.
I’m not sure what you mean by this.
GenStats can process a virtually unlimited number of files in a single job, far more than Crunch can, because Crunch holds the data for all the files in memory at once and therefore needs enough RAM to store all of them concurrently. GenStats reads the files one at a time and adds their statistics to accumulators, so it needs only enough memory to hold the largest file. After processing all the files, it can output statistics for each individual file in addition to the aggregate statistics.
Unlike Crunch, GenStats doesn’t require that all files be the same length. They must all contain the same data channels, but it would not be very useful to process files of different types together anyway.
For basic statistics, GenStats is more versatile and more efficient than any of our other codes. It can do everything Crunch can do statistics-wise, and more.
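To make the memory argument concrete, here is a minimal Python sketch of the one-file-at-a-time accumulator scheme described above. The class and function names are my own invention for illustration, not GenStats internals; GenStats itself is a compiled program:

```python
# Sketch of streaming statistics: each file is reduced to running sums
# with bounded memory, then folded into an aggregate accumulator.
import math

class StatsAccumulator:
    """Running count, sum, sum of squares, min, and max for one channel."""
    def __init__(self):
        self.n, self.s, self.ss = 0, 0.0, 0.0
        self.mn, self.mx = math.inf, -math.inf

    def add(self, x):
        self.n += 1
        self.s += x
        self.ss += x * x
        self.mn = min(self.mn, x)
        self.mx = max(self.mx, x)

    def merge(self, other):
        # Fold one file's accumulator into this (aggregate) one.
        self.n += other.n
        self.s += other.s
        self.ss += other.ss
        self.mn = min(self.mn, other.mn)
        self.mx = max(self.mx, other.mx)

    @property
    def mean(self):
        return self.s / self.n

    @property
    def stdev(self):
        # Population standard deviation recovered from the running sums.
        return math.sqrt(max(self.ss / self.n - self.mean ** 2, 0.0))

def process_file(samples):
    """Accumulate one file's samples (here just a list of floats)."""
    acc = StatsAccumulator()
    for x in samples:
        acc.add(x)
    return acc

# Files of different lengths are fine: only their sums are retained,
# and per-file stats remain available alongside the aggregate.
aggregate = StatsAccumulator()
for samples in ([1.0, 2.0, 3.0, 4.0], [5.0, 6.0]):
    aggregate.merge(process_file(samples))
```

Because each file is collapsed to a handful of numbers before the next one is read, peak memory is set by the largest single file, not by the total size of the job.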
Thank you for the information. I was confusing GenStats with GPP…I will check out GenStats.
Is it still the case that Crunch can’t process files of different lengths in an aggregate analysis? What about MCrunch, MLife, and MExtremes? Are these tools able to do this?
Thank you for your support.
Edit: A test with Crunch and two input files of different lengths produced no error message.
I’m not an expert on these post-processors, but the Crunch, MCrunch, and MLife documentation do not mention that an aggregate analysis is limited to files of a fixed length (although for aggregate rainflow cycle counting to work, all files must have the same time step). And MExtremes does not process aggregate files.
Just FYI: Marshall Buhl is now retired from NREL.
Thank you very much for your answer.