A product speaks to engineers during testing far more than it does during mathematical analysis and simulation. In testing, engineers see the expected, defined measurements and quantities, as well as unexpected information that builds a broader understanding of a product’s performance.
Testing replicates real-world loading, mechanical actions, the influences of heat and wear, corrosion, chemical interactions, metallurgical effects and a host of other variables, all at the same time. Once testing is complete, it is possible to examine the physical specimen in detail, take it apart and discover additional information buried inside the device. We look at the expected, planned effects of the testing and keep our minds and eyes open for any unexpected details that provide a more comprehensive understanding of the product’s performance and issues.
Mathematical analysis and simulations, while extremely powerful, are inherently limited by the assumptions made, and they deliver narrowly focused results that still need to be interpreted. Simulations don’t tell you what you did not ask; testing does.
A product can be conceived, designed, built and tested/developed all the way to a successful market introduction with minimal understanding of the underlying physics and minimal analysis. Witness the practical development of ships, airplanes and road vehicles in their early years. All were achieved with some basic design analysis (crunching numbers by hand) serving as a support mechanism to the design, test and development activity. These advances in transportation depended more on continuous development and testing than on any comprehensive analysis.
In truth, these days design analysis should lead design by answering the question “How big should it be?”, rather than following the design and answering the question “Is it big enough?”. Analysis that follows design guarantees at least two design iterations: one before the analytical activity, and the design revisions that follow the analysis.
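As a minimal worked illustration of analysis leading design (the part and the numbers here are hypothetical, not drawn from any particular program): to size a solid shaft that must carry a torque $T$ without exceeding an allowable shear stress $\tau_{\text{allow}}$, analysis can compute the diameter directly rather than check a guessed one afterwards:

\[
\tau_{\max} = \frac{16\,T}{\pi d^{3}}
\quad\Longrightarrow\quad
d = \left( \frac{16\,T}{\pi\,\tau_{\text{allow}}} \right)^{1/3}
\]

With, say, $T = 200\ \mathrm{N\,m}$ and $\tau_{\text{allow}} = 60\ \mathrm{MPa}$, this gives $d \approx 25.7\ \mathrm{mm}$: the size falls out of the requirement, instead of being guessed first and checked second.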
Good, solid, valid design analysis is absolutely critical to designing a product that has a likelihood of performing its basic functions as expected. However, design analysis or simulation cannot assure engineers of ultimate success, regardless of management and analytical visionaries promising that simulation can, will or should replace testing because it is more cost-effective and shortens program timing. Too many simplifying assumptions are made in analysis, regardless of the complexity of the model, to be able to say that the analysis is a full validation of performance.
A program that is to succeed must have an iterative process that includes feedback steps and improvement loops within the conceive, analyze, design, build, test, launch sequence. Testing simply must be an integral step of the project, because managing testing out of a project will ultimately prove to be its downfall.
Many times I have observed projects that focused on the engineering and build phases with a minimum of testing, expecting that the product was finished and ready for the customer, only to crash and burn upon delivery. I’ve seen it at Formula SAE, where I have been a design judge for many years; in OEM vehicle projects; in defense projects; and in race vehicle projects. They have all suffered from the hubris of “It’s built, it’s finished, time to deliver/race, no time for testing, that’s why we hire such great engineers”, only to subsequently discover simple, fundamental issues that were missed in analysis and simulation but would have been caught in testing.
The mechanical engineering education system is geared toward teaching students analytical mathematics, physics, chemistry, thermodynamics, dynamics, electronics, computer science and a host of other important knowledge. This same education system also needs to provide good, solid training in the realm of testing and data analysis, including test planning and the performance of measurements.
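As a small sketch of the kind of exercise such training might include (the readings and the t-value below are purely illustrative), even reducing a handful of repeated measurements to a mean and a confidence interval forces a student to confront scatter, sample size and what the instrument is actually telling them:

import math
import statistics

# Hypothetical repeated measurements of one quantity, e.g. a spring rate in N/mm.
readings = [182.4, 181.9, 183.1, 182.6, 182.2, 183.0, 181.7, 182.8]

n = len(readings)
mean = statistics.mean(readings)
s = statistics.stdev(readings)        # sample standard deviation
sem = s / math.sqrt(n)                # standard error of the mean

# Approximate 95% interval using t = 2.365 for n - 1 = 7 degrees of freedom.
t_95 = 2.365
half_width = t_95 * sem

print(f"mean = {mean:.2f}, s = {s:.2f}, n = {n}")
print(f"95% CI on the mean: {mean - half_width:.2f} .. {mean + half_width:.2f}")

A student who works through this sees immediately that one reading proves very little, and that the honest answer from a test is a range, not a single number.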
But you don’t know what you don’t know. The less secure and experienced our knowledge base is, or the more a design deviates from previous practice, the more testing is needed to ensure nothing is missed. Analytics and testing are both keys to the build-up of knowledge, leading to significant performance improvements and refinements of a product.
When pursuing success, comprehensive testing and experimentation can minimize the need for analytics. Extensive analytics and simulations alone cannot eliminate testing. So let us use both wisely. Let’s perform analysis and simulation to understand, predict and refine a product, and let’s test to be sure that our analysis is correct and that we did not miss something important. Let the product speak.