“It’s all about the Data” - Test is only a means to an end
A complex product needs a great deal of testing to gather the data that proves it will behave as required and expected. In a highly regulated environment such as aviation, safety is paramount. The regulators prescribe a series of tests, including failure tests, that must be conducted to demonstrate acceptable levels of safety. The failure tests demonstrate that, in the unlikely event of a component failure, there will be no unacceptable threat to the safety of flight.
In the world of jet engines, no test is more “spectacular” than a fan blade off (FBO) test. An FBO test is conducted on an engine running at maximum fan speed by weakening a blade and blowing it off with explosive charges. To pass an FBO certification test (as prescribed by regulation EASA CS-E 840), the engine tested must not:
- Catch fire (i.e. no flammable fluid leaks allowed)
- Release high energy debris through the Engine's casing or result in a hazardous Failure of the Engine's casing (i.e. no un-contained high energy debris that could damage the aircraft)
- Generate loads greater than those ultimate loads for which the Engine’s mountings have been designed (i.e. the engine will not fall off due to mount failure)
- Lose the capability of being shut down (i.e. when commanded, the fuel must be able to be shut off).
In addition to the above, the airframe manufacturer will need to know that the vibration loading (frequency and amplitude) from continued rotation of the now out-of-balance rotors (which will be “windmilling” due to forward flight speed) will not damage the aircraft structure or affect the pilot’s ability to control the aircraft.
A link to a video of the Trent 900 FBO test is here. This short clip is an excerpt from a Discovery TV programme about the engineering of the Airbus A380 (if you don’t want to watch the whole video, the event starts at around 4:40). This test costs millions of pounds to conduct, but the real value comes from the data gathered during the test, and in particular from knowing in great detail what happens during the few revolutions immediately after the blade is released.
By the time the manufacturer is willing to conduct this test, it already needs to be as certain as possible of passing. Failing a test this important can lead to loss of confidence in the product and, as such, is a share-price-sensitive event. Reaching that level of confidence requires a great deal of analysis, preliminary component and subsystem tests, and an understanding of the material properties of the components used for the final test. In other words, it relies totally on capturing key data by analysis and test, and on ensuring that the modelling methodology and test data can be reconciled to prove the test requirements are met. From a test perspective, this means fitting the test engine with a myriad of sensors, in particular hundreds of strain gauges and accelerometers whose signals are sampled at rates of up to 100 kHz or so. In addition, high-speed video is captured to allow slow-motion visualisation and analysis of the sequence of events. These different data sources can be evaluated in concert to prove that all the requirements are met. Note that the data capture rates are huge!
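To put some rough numbers on that, here is a back-of-envelope sketch in Python. The channel count, sample width and recording window are illustrative assumptions, not figures from the actual test; only the ~100 kHz sample rate comes from the text above.

```python
# Back-of-envelope estimate of raw acquisition rate for an FBO-class test.
# All figures below are illustrative assumptions, not actual test parameters.

channels = 500             # assumed: hundreds of strain gauges and accelerometers
sample_rate_hz = 100_000   # up to ~100 kHz per channel, as noted above
bytes_per_sample = 3       # assumed: 24-bit ADC words

rate_bytes_per_s = channels * sample_rate_hz * bytes_per_sample
print(f"Raw rate: {rate_bytes_per_s / 1e6:.0f} MB/s")   # ~150 MB/s

# Even a short recording window around the blade release adds up quickly,
# before any high-speed video is counted.
duration_s = 60
total_gb = rate_bytes_per_s * duration_s / 1e9
print(f"{duration_s} s capture: {total_gb:.0f} GB of sensor data")
```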
The data will take the form of engineering values, recorded against a common time stamp and derived from analogue sensing of physical conditions (such as pressures, temperatures and strains), alongside physical observations and measurements taken from viewing and inspecting the condition of components before, during and after the test activity. This is the data that will inform our knowledge of how the product functions, performs and deteriorates, and let us know whether or not we have a suitable design.
It is crucial that there is a whole data “ecosystem” in place to capture that data and ensure that data files are identified with the right metadata. This ensures the data can be mined and used for any foreseen or unforeseen reason during the event, immediately after the event, and in the distant future if, for instance, it should be needed to help understand some in-service product anomaly. Good-quality data sets are associated with knowing the context and uncertainty of the data obtained from conducting the tests. The ecosystem must be able to capture it all.
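As a concrete illustration, here is a minimal sketch of the kind of metadata that might travel with each data file. The field names and values are hypothetical, not from any real schema; the point is that context and uncertainty are captured alongside the data itself so files remain searchable decades later.

```python
# A minimal sketch of per-file metadata. All field names and values are
# hypothetical; a real ecosystem would follow an agreed, versioned schema.

from dataclasses import dataclass, field

@dataclass
class FileMetadata:
    test_id: str            # which test campaign / build the file belongs to
    channel_map: dict       # sensor id -> location, type, calibration reference
    sample_rate_hz: float   # acquisition rate for the channels in this file
    units: dict             # sensor id -> engineering unit (e.g. "kPa", "ustrain")
    uncertainty: dict       # sensor id -> calibrated measurement uncertainty
    notes: list = field(default_factory=list)  # observations, anomalies, context

manifest = FileMetadata(
    test_id="FBO-rig-041",  # hypothetical identifier
    channel_map={"SG-017": {"location": "fan case, 90 deg", "type": "strain gauge"}},
    sample_rate_hz=100_000.0,
    units={"SG-017": "ustrain"},
    uncertainty={"SG-017": "+/-1.5% of reading"},
    notes=["Blade release at t=0; first few revolutions are the critical window."],
)
```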
By far the best simplified description I have seen of a data ecosystem comes courtesy of NI, published in a white paper termed “The analogue big data system”. The key diagram from that paper is reproduced below. What differentiates this description from others I have seen is the data flow life states along the bottom of the chart.
Test and measurement engineers really focus on having the right sensors (left-hand box on the chart) and signal conditioning hardware (middle box) to produce time-stamped engineering-unit outputs that the analytical engineers can then use to compare with and validate the analytical methods. This is where analogue-to-digital conversion takes place. They also want to ensure real-time monitoring of certain parameters, and even use some of those in control loops and for on-the-fly (in motion) derivation of key parameters (e.g. thrust from multiple force measurements). Once they have assured the veracity of the data (i.e. that it is an accurate representation of the physical attribute sensed), their job is done.
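To make those two jobs concrete, here is a minimal sketch assuming a simple linear calibration: raw ADC counts are converted to engineering units, and a key parameter (thrust) is derived on the fly from several load-cell channels. The gains, offsets and channel counts are invented for illustration.

```python
import numpy as np

# Sketch of the two jobs described above, with assumed linear calibrations:
# 1) convert raw ADC counts into engineering units,
# 2) derive a key parameter on the fly (here, thrust from several load cells).

def to_engineering_units(raw_counts, gain, offset):
    """Linear calibration: counts -> engineering units (slope/offset assumed)."""
    return raw_counts * gain + offset

# Assumed: three thrust-frame load cells, one column per cell, one row per sample.
load_cell_counts = np.array([[10210, 10190, 10230],
                             [10250, 10170, 10260]])
gain_kn_per_count = 0.01   # hypothetical calibration slope
offset_kn = -50.0          # hypothetical calibration offset

forces_kn = to_engineering_units(load_cell_counts, gain_kn_per_count, offset_kn)
thrust_kn = forces_kn.sum(axis=1)  # total thrust = sum of the cell readings
print(thrust_kn)
```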
Typically, sensors and conditioning hardware are quite specialised and niche. They are not necessarily all from one source, as the diagram might suggest. There are a large number of SMEs that can provide some incredible sensing technology. On a complex test it is vital that many different types of measurement can be made and are time synchronised. The conglomeration of this hardware, along with data display capability, is the “real time” system. The data acquisition system (DAS) element of the ecosystem comprises the “real time” system, a time stamp, and the software that instructs what needs to be recorded, when and how. It also provides the context data that allows calibration and conversion to engineering units. Finally, it provides the “telecoms” solution (e.g. Ethernet or CAN bus) to transmit the data, in raw or post-processed condition, into a confederated (early life) file form to a data repository (at rest). By using a good, open-standard telecoms solution, the DAS should be hardware agnostic, allowing new measurement techniques and hardware to be adopted easily. Complex DAS systems have typically been monolithic in the past, but are increasingly modular and flexible, enabled by technology like Data Distribution Service (DDS) protocols.
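To illustrate why that style of protocol decouples the DAS from particular hardware, here is a toy in-process sketch of topic-based publish/subscribe. It deliberately does not use the real DDS API; it only shows the shape of the idea, where publishers and subscribers agree on a topic name and data shape rather than on each other.

```python
# A toy, in-process illustration of the publish/subscribe pattern that DDS
# standardises. This is NOT the DDS API; it only shows why the pattern makes
# a DAS hardware-agnostic: modules share topics, not knowledge of each other.

from collections import defaultdict

class Bus:
    def __init__(self):
        self._subs = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, sample):
        for cb in self._subs[topic]:
            cb(sample)

bus = Bus()
# A recorder and a live display both subscribe to the same topic; a new
# acquisition module only needs to publish to that topic to join the system.
bus.subscribe("strain/fan_case", lambda s: print("record:", s))
bus.subscribe("strain/fan_case", lambda s: print("display:", s))
bus.publish("strain/fan_case", {"t_s": 0.001, "ustrain": 412.0})
```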
The software suite used should be capable of feeding configuration and operating instruction files to the signal conditioning hardware, as well as enabling metadata input, characteristic searching of at-rest data files, and data visualisation. These solutions used to be very sector specific but are becoming more and more cross-industry capable. Good examples are WERUM Hypertest and the NI suite including TestStand, VeriStand and DIAdem. In the case of the fan blade off test, it is paramount that the data sets from all the preliminary rig tests can be related to those from the whole-engine test.
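As an illustration, a configuration and operating instruction file of the kind mentioned above might look something like the hypothetical sketch below. The schema, names and values are invented; the principle is that channel settings, calibration references and recording instructions travel together as data.

```python
import json

# Hypothetical sketch of a configuration/operating-instruction file that a
# software suite might push to signal conditioning hardware. The schema and
# all values are invented for illustration.

channel_config = {
    "test_id": "FBO-rig-041",   # ties this run to the wider test campaign
    "channels": [
        {
            "id": "SG-017",
            "type": "strain_gauge",
            "sample_rate_hz": 100_000,
            "filter": {"type": "low_pass", "cutoff_hz": 40_000},
            "calibration_ref": "CAL-2023-118",   # traceable calibration record
        },
    ],
    "record": {"pre_trigger_s": 5.0, "post_trigger_s": 55.0},
}

print(json.dumps(channel_config, indent=2))
```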
Once the data is at rest, it has crossed what is termed “the edge”. That is, it has moved from the measurement and engineering world into the care of corporate IT. It is now being managed as a digital asset, the same as any other piece of corporate data. Here the important attributes are data security, backup and access. By the time it gets there, the data needs to have everything “attached” (the context and quality assurance) to allow it to be used as and when required, possibly decades in the future. An in-service event may need to be compared to the original test data. Fleet data may need to be analysed on an ongoing basis to identify trends or outliers. The ability to get the best value from the data depends hugely on the foresight applied to the way the data is packaged before it is archived.
In the end, what remains of that critical test is a “pond” of data in which you need to know where and how to fish for the right parts. If you need any help evaluating your data ecosystem requirements, particularly at a strategic level, please get in touch at info@nforconsulting.com.
An off-the-wall but subject-related book recommendation this time. The way data is presented is key to how the end user can understand what the data is “saying”. I really like a book called “Information is Beautiful”, which is available here (as well as elsewhere…). It really made me think about the way I present data, to allow the audience to understand the message clearly. I hope you enjoyed this latest blog instalment. The next will be “A day isn’t wasted as long as you’ve learned something new” - reaching your potential.