These postings were split off from the topic:
- How does each analytical technique compare with the full-scale tests?
How do your results for CRASH3 compare with the results reported in the literature?
- To be sure your MATLAB implementation is performing properly, you should probably try to duplicate the results reported in the literature. For a presentation and discussion of the original CRASH reconstructions of the RICSAC tests, see the Jones report:
Jones, I.S., Baum, A.S., "Research Input for Computer Simulation of Automobile Collisions – Volume IV: Staged Collision Reconstructions", Calspan Report ZQ-6057-V-6, Contract DOT-HS-7-01511, December 1978
Some differences in correlation may be due to Monk & Guenther, "Update of CRASH2 Computer Model Damage Tables", DOT-HS-806446, March 1983 (note: the file is about 5 MB)
- “(4) The test data upon which the CRASH3 empirical fits of the Monk & Guenther report Update of CRASH2 Computer Model Damage Tables are based should be carefully re-examined. In the development of those fits, it has been assumed that common crush properties exist within each size category of vehicle, regardless of differences in the basic layouts of components and in overhang dimensions. The total numbers of included vehicles are limited, and substantial adjustments have been made in the results. A fresh look, with the CRASH4 data needs in mind, may define more proper categories on the basis of stiffness and restitution. It may also eliminate any need for adjustment of the results”.
And one would expect that using custom-fitted stiffness coefficients for the individual vehicles, rather than the generic size-category values, should also help the CRASH results correlate better with the full-scale tests.
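For anyone checking a MATLAB implementation against the literature, the core of the CRASH3 damage algorithm is small enough to sketch. Below is a minimal Python illustration of the standard piecewise-linear crush-energy integral (A/B stiffness model with the G = A²/2B offset term) and the delta-V that follows from it for a central two-vehicle impact. All numeric values here are hypothetical, and the units are metric for simplicity; the published tables are in inch/pound units, so coefficients must be converted before comparing against reported results.

```python
import math

def crush_energy(C, L, A, B):
    """Energy absorbed by crush, CRASH3 piecewise-linear stiffness model.

    C : crush depths measured at equal spacing across the damage width (m)
    L : total damage width (m)
    A : stiffness coefficient, force per unit width at zero crush (N/m)
    B : stiffness coefficient, slope of force-crush line (N/m^2)
    """
    n = len(C)
    G = A * A / (2.0 * B)   # energy offset term, G = A^2 / (2B)
    w = L / (n - 1)         # width of each trapezoidal crush segment
    E = 0.0
    for c1, c2 in zip(C, C[1:]):
        E += w * (A * (c1 + c2) / 2.0
                  + B * (c1 * c1 + c1 * c2 + c2 * c2) / 6.0
                  + G)
    return E

def delta_v(E_total, m1, m2):
    """Delta-V of vehicle 1 from total absorbed energy, central impact.

    Follows from conservation of momentum: delta-V splits between the
    two vehicles in inverse proportion to their masses.
    """
    return math.sqrt(2.0 * E_total * m2 / (m1 * (m1 + m2)))

# Hypothetical example: six crush measurements on vehicle 1, a made-up
# absorbed-energy figure for vehicle 2, and assumed masses.
C = [0.10, 0.18, 0.25, 0.24, 0.17, 0.09]    # crush depths (m)
E1 = crush_energy(C, L=1.5, A=35000.0, B=400000.0)
E2 = 25000.0                                 # assumed energy, vehicle 2 (J)
dv = delta_v(E1 + E2, m1=1400.0, m2=1600.0)  # masses in kg
print(f"delta-V (vehicle 1) = {dv:.2f} m/s")
```

Note this sketch omits the PDOF angle correction and restitution, both of which matter when comparing against the RICSAC tests; the point is only that a reference implementation of the energy integral is easy to cross-check segment by segment.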
One last important note: a couple of things to be sure to include in any comparison of analytical techniques:
- How each individual analytical technique compares with reality (i.e., full-scale test results)
- A comparison of the input requirements for each technique: are all inputs directly measurable, or are some subjective?
If something isn't measurable, how does an analyst objectively determine the input? Many comparisons of analytical techniques contain bias because the 'answers are known' during the comparison: subjective inputs can be arbitrarily adjusted for better correlation. The true test of an analytical technique is applying it blindly, as is the case in actual field applications. What are the guidelines for a user to estimate the subjective inputs? And what is the range of possible variation of those inputs?
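One way to make the "range of possible variation" question concrete is a simple sensitivity sweep: vary a subjective input over its plausible band and report the resulting delta-V spread. The sketch below (Python, with entirely hypothetical nominal values) treats the absorbed crush energy as the uncertain quantity, since it inherits the uncertainty of a subjective stiffness-category choice, and sweeps it over an assumed ±30% band.

```python
import math

def delta_v(E, m1, m2):
    """Delta-V of vehicle 1 given total absorbed crush energy E (J)."""
    return math.sqrt(2.0 * E * m2 / (m1 * (m1 + m2)))

# Hypothetical nominal values; the +/-30% band on E stands in for the
# analyst's uncertainty in a subjective stiffness-category choice.
E_nominal = 60000.0       # J (assumed)
m1, m2 = 1400.0, 1600.0   # kg (assumed)

for scale in (0.7, 1.0, 1.3):
    dv = delta_v(scale * E_nominal, m1, m2)
    print(f"E at {scale:.0%} of nominal -> delta-V = {dv:.2f} m/s")
```

Because delta-V scales with the square root of energy, a ±30% energy band compresses to roughly -16%/+14% in delta-V; a blind comparison of techniques should report that spread rather than a single tuned value.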