I am not thinking about the gap between the results of simulation and test. We all know simulation is consistently improving and becoming more reliable, and it is now a vital tool in vehicle development. The gap that is growing is in the detail. Modern high-performance computing means we are running simulations in ever greater detail, with models of many hundreds of thousands of elements. Yet the world of physical test engineering is not keeping up. Test engineers are constantly looking for the minimum possible detail: ‘Can I take another channel out of this setup? Can I test one vehicle instead of two?’
Obviously cost is a huge driver in this; reducing physical tests can have an immediate effect on development costs. Is it a real saving though? When we reduce a physical test to its bare minimum, are we missing the opportunity to remove future tests from the program? How often are test engineers repeating whole tests because they didn’t quite capture all the detail the first time round? The buzzwords in many industries are big data, data mining and analytics: pulling together large amounts of information from many sources, and slicing and dicing it in numerous ways to extract information and grow understanding. This approach is not taken in the automotive test engineering world, where small data is still the modus operandi.
My background is in NVH, and I constantly see, for example, engineers conducting a test with 40 or 50 channels to understand the global modes of a body-in-white (BIW). They then book time on the vehicle again for another 10- or 20-channel measurement to understand a problem associated with the powertrain mounting and the front sub-frame. Then another test for road noise at the rear. Then another, and another.
Modern test data acquisition systems mean high channel counts are not always cost-prohibitive, and there are even new tools available that deliver highly detailed test data in vastly reduced timescales.
When an asset is available for test early in the program, shouldn’t we be looking to ensure all tests are done in one sitting? Then, when issues come along later in the build, there is no need to reschedule testing; the information about the front sub-frame is already there, ready to be analyzed. This is the approach already taken in other industries, where test assets are of such high value that all testing has to be completed in one window of opportunity; second chances are not an option.
Furthermore, if NVH engineers take highly detailed measurements when they have access to the vehicle, the data could also prove useful to chassis engineers, and vice versa. If test data from all disciplines is pooled into one big data set, the sum of the parts becomes more useful to the whole. Back to NVH engineers: we scatter accelerometers over vehicle bodies and powertrains to engineer and develop the vehicle. Yet at the same time, other teams, such as brakes and chassis, use accelerometers for their own tests, all potentially acquiring the same data multiple times.
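To make the idea of pooling concrete, here is a minimal, purely illustrative sketch in Python using pandas. The channel IDs, locations, file names and metadata fields are hypothetical and not taken from any particular acquisition system; the point is simply that channels measured once, tagged with where and why they were recorded, can be interrogated later by a different discipline without booking the vehicle again.

# Hypothetical sketch: a shared index of test channels tagged with metadata,
# queried later by another discipline instead of re-running the test.
import pandas as pd

# Each record describes one measurement channel captured during a single
# high-channel-count test session (all names and fields are illustrative).
channel_index = pd.DataFrame([
    {"channel_id": "ACC_001", "location": "front_subframe_lh", "discipline": "NVH",
     "quantity": "acceleration", "axis": "Z", "file": "session_01.h5"},
    {"channel_id": "ACC_102", "location": "rear_suspension_rh", "discipline": "NVH",
     "quantity": "acceleration", "axis": "X", "file": "session_01.h5"},
    {"channel_id": "ACC_214", "location": "brake_caliper_fl", "discipline": "chassis",
     "quantity": "acceleration", "axis": "Z", "file": "session_01.h5"},
])

# Months later, an engineer investigating a front sub-frame issue can query
# the pooled index rather than scheduling another test on the vehicle.
front_subframe = channel_index[
    channel_index["location"].str.contains("front_subframe")
]
print(front_subframe[["channel_id", "axis", "file"]])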
Why limit the detail in test data and operate in discipline silos, increasing the amount of physical testing as a result? A collaborative big data approach, one that captures as much information as possible early in the program and can be interrogated when issues occur, will ultimately deliver the cost and time reductions we are all looking for.
Currently the general manager of the Advanced Structural Dynamics Evaluation Collaborative Research Centre, an autonomous business unit within the University of Leicester, Tim Stubbs has over 15 years’ experience in acoustics and vibration testing at companies including MIRA, Jaguar Land Rover, LMS and Alstom Transport. He has been involved in many projects covering rail, automotive and aerospace applications, as well as providing training in the field.
September 7, 2016