Brentwood, TN, August 28, 2013: A recent TV commercial on medical implants caught my attention. While the ad touted the benefits of extensive laboratory testing, the fine print said that “…results of the testing have not been proven to predict clinical wear performance…” How true. Laboratory testing is rarely indicative of true wear and does not predict actual product reliability in the medical device industry.
Testing is a necessary and vital element in the development of emerging device designs. However, laboratory testing alone is not adequate to guarantee the reliability of a device. Designs that perform brilliantly in laboratory testing have proven disastrous once deployed. A critical issue in certifying device reliability is that in-patient failures often arise from non-typical damage conditions. One cannot test one’s way to high reliability. A failure rate as low as 1 in 1,000 can force a manufacturer to recall a device, and at these rates failures are driven by the tails of the statistical distributions of loads, geometry and material properties. One simply cannot test enough samples to understand what will cause failure across the patient population. One can test for “worst case” or accelerated failure conditions, but it is difficult to know whether “worst case” corresponds to a 1/100, 1/1,000 or 1/10,000 failure rate, so testing alone cannot quantify device reliability. Developmental testing at the specimen or sub-component level is still required: these tests identify gross design flaws, and their results must be used to calibrate or validate full-scale design models under actual usage conditions and to identify important quality control parameters. But they cannot, by themselves, predict reliability.
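To make the scale of the problem concrete, here is a minimal sketch of the standard zero-failure (“success run”) sample-size calculation. The reliability targets and 95% confidence level are illustrative assumptions, not figures from any device program, but they show why a bench test campaign cannot demonstrate very low failure rates on its own.

```python
# Zero-failure ("success run") sample-size sketch: how many tests with no
# failures are needed to demonstrate a reliability target at a given
# one-sided statistical confidence. Targets below are illustrative only.
import math

def tests_required(reliability: float, confidence: float = 0.95) -> int:
    """Number of zero-failure tests needed to claim `reliability`
    at the stated one-sided confidence level."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

for target in (0.99, 0.999, 0.9999):   # 1/100, 1/1,000, 1/10,000 failure rates
    print(f"Reliability {target}: {tests_required(target)} zero-failure tests")
# Roughly 300, 3,000 and 30,000 tests -- far beyond a typical bench program.
```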
The medical device industry may have some catching up to do when it comes to using additional tools to improve reliability and reduce recalls. Reliability improvements in other industries have been driven by the use of computational models as a complement to physical testing and quality control. Computational models with probabilistic methods have been used in aerospace, automotive, civil structures and other industries to predict reliability and to identify the most probable sets of conditions that will produce unacceptable failure rates. Computer-aided design (CAD), finite element analysis (FEA), computational fluid dynamics (CFD), and material and manufacturing specifications are combined to create a digital representation of the device, such as the “Virtual Twin®” used in VEXTEC’s Virtual Life Management® (VLM®). The inputs to the model are statistical distributions with estimated uncertainties. Automotive engineers use these models to computationally “drive the fleet”: variation in manufacturing, usage, maintenance and repair is simulated to predict the incidence of failure for each of thousands of components. If a supplier delivers a lot of 200 parts that do not meet a material specification, the model can simulate the risk of accepting those parts into production long before tests could be completed. Worse, if the 200 parts slip through quality control, the models are ready to simulate the risk and determine whether a recall is required.
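A hedged illustration of the “drive the fleet” idea: propagate statistical distributions of load and material strength through a simple limit-state check and count simulated failures across a virtual fleet. The normal and lognormal distributions and their parameters below are invented for illustration and stand in for the calibrated CAD/FEA-based models a real analysis would use.

```python
# Monte Carlo "drive the fleet" sketch: sample input distributions, apply a
# toy stress-vs-strength limit state, and count failures. All numbers are
# assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_fleet = 1_000_000                     # simulated devices in the "fleet"

# Input uncertainty: peak in-service stress and material strength (MPa)
stress = rng.normal(loc=400.0, scale=40.0, size=n_fleet)
strength = rng.lognormal(mean=np.log(600.0), sigma=0.08, size=n_fleet)

failures = np.count_nonzero(stress > strength)
print(f"Simulated baseline failure rate: {failures / n_fleet:.1e}")

# Same model, re-run for a hypothetical off-spec supplier lot
# (assume ~5% lower median strength)
strength_lot = rng.lognormal(mean=np.log(570.0), sigma=0.08, size=n_fleet)
failures_lot = np.count_nonzero(stress > strength_lot)
print(f"Simulated off-spec lot failure rate: {failures_lot / n_fleet:.1e}")
```

Because the model already exists, the off-spec scenario is answered by a re-run with shifted inputs rather than a new test campaign.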
VLM recognizes the critical role of the random nature of damage accumulation across a population of patients. It provides a better means of using and assessing the results of the relatively few laboratory, animal and human tests, which by themselves cannot characterize the randomness that is critical to population-wide damage tolerance and risk assessment. VLM provides a technique for assessing the scatter in clinical damage behavior rather than relying on purely statistical safety factors for all operations. Empirical scatter factors do not differentiate between sources of scatter such as patient type, patient activity level, damage type and location, material lots and production methods. Today’s safety factors rely solely on the acquisition of large amounts of empirical field data, combining all of these sources into a single, undifferentiated life factor. The empirical approach means that a minimum-life prediction capability often follows a critical recall rather than anticipating it.
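A minimal sketch of why differentiating scatter sources matters, using two hypothetical patient activity levels with different mean damage rates. The lognormal lives and parameters are assumptions for illustration; the point is that a single pooled scatter factor hides where the risk actually sits.

```python
# Compare a pooled 1-in-1,000 life against subgroup-specific values.
# All distributions and parameters are assumed for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

# Hypothetical cycles-to-failure (millions), lognormal within each subgroup
low_activity = rng.lognormal(mean=np.log(60.0), sigma=0.30, size=n)
high_activity = rng.lognormal(mean=np.log(25.0), sigma=0.30, size=n)
pooled = np.concatenate([low_activity, high_activity])

for name, cycles in [("pooled", pooled),
                     ("low activity", low_activity),
                     ("high activity", high_activity)]:
    # 0.1st percentile ~ the life by which 1 in 1,000 devices has failed
    print(f"{name:>13}: 1/1,000 life = {np.percentile(cycles, 0.1):.1f} M cycles")
# The pooled value is driven almost entirely by the high-activity subgroup,
# so an undifferentiated factor both hides that risk and penalizes everyone else.
```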
Last November, Wired magazine ran a feature article on product failure entitled “Why Things Fail”. The article discussed recalls, warranties and reliability across industries and what engineering does to try to avoid failures, including computational simulation. But warranty is not just an engineering problem: as the article makes clear, poor reliability and recalls reverberate throughout a company and even entire industries.
Although computational simulation is not as widespread in the medical device industry, the FDA would like to move the community in that direction. The FDA has hosted meetings on computational modeling; at the most recent one, a featured speaker from NASA described how NASA requires probabilistic computational analysis as standard practice, a requirement that grew out of its very public failures. The FDA is also sponsoring the first annual Frontiers in Medical Devices conference, focused on computational modeling (http://www.asmeconferences.org/FMD2013/).
The US Air Force, Navy, Army and NASA are taking this concept a step further by developing an airframe “Digital Twin”: a digital representation of an individual airframe, identified by tail number, that includes all of the engineering orders, repairs and missions that make each tail number unique. Uncertainty and errors associated with manufacture, assembly, usage, record keeping and the computational models themselves are all considered in order to “bound the uncertainty” on the health of the airframe. There is an obvious corollary in a future “Digital Patient”: the patient’s history, genetics and lifestyle could be used to create a model that simulates the risk of “failure” of a procedure or device.
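A purely hypothetical sketch of the kind of per-unit record a “Digital Twin” (or a future “Digital Patient”) implies: one digital object per tail number or patient, carrying the history that makes that individual unique along with its model uncertainties. The field names and values are assumptions for illustration, not any program’s actual schema.

```python
# Hypothetical per-unit record for a "Digital Twin"; not an actual schema.
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    unit_id: str                                        # e.g., airframe tail number
    engineering_orders: list[str] = field(default_factory=list)
    repairs: list[str] = field(default_factory=list)
    usage_history: list[str] = field(default_factory=list)   # missions flown
    model_uncertainty: dict[str, float] = field(default_factory=dict)

# Hypothetical tail number and entries, for illustration only
tail = DigitalTwin(unit_id="AF-4021")
tail.repairs.append("2012-07: wing-root fastener replacement")
tail.usage_history.append("2013-03: training sortie, high-g profile")
tail.model_uncertainty["crack_growth_rate"] = 0.15      # assumed 15% CoV
```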
Simulation-based design analysis is fundamentally about making decisions under uncertainty, and the computational methods we advocate are for predicting reliability and managing that uncertainty. VLM is a computational methodology that estimates how sensitive the final output is to uncertainty in the input variables and to modeling approximations. In the current age of large multidisciplinary virtual simulation, this is useful in deciding how best to allocate computational and testing resources to arrive at the most robust predictions of device reliability. For implantable medical devices, for example, one wants high statistical confidence that the device is reliable before beginning patient trials. Yet too few samples are tested, at too few conditions, to identify the subtle design issues that affect the reliability of the device once it is on the market. This is understandable; one simply cannot test enough samples at enough conditions to cover all possibilities. It is also true that one cannot substitute modeling for testing, quality control or good engineering. But computational models should be an additional tool in the engineer’s toolbox, one that drives up reliability and decreases the chance of a recall in the medical device industry.
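A hedged sketch of the kind of sensitivity ranking described above: run a Monte Carlo over uncertain inputs and ask which input’s scatter drives the scatter in the predicted life. The toy life model and input distributions are assumptions for illustration, not VEXTEC’s proprietary implementation; a real analysis would use the calibrated device model.

```python
# Rank uncertain inputs by how much of the output scatter they explain,
# using squared correlation as a simple (approximately linear) measure.
# Toy model and distributions are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Uncertain inputs: load amplitude, material fatigue quality, initial flaw size
inputs = {
    "load":      rng.normal(1.00, 0.10, n),
    "material":  rng.normal(1.00, 0.05, n),
    "flaw_size": rng.lognormal(0.0, 0.20, n),
}

# Toy life model: life falls with load and flaw size, rises with material quality
life = inputs["material"] ** 3 / (inputs["load"] ** 4 * inputs["flaw_size"])

# Approximate share of output variance attributable to each input
for name, x in sorted(inputs.items(),
                      key=lambda kv: -np.corrcoef(kv[1], life)[0, 1] ** 2):
    r2 = np.corrcoef(x, life)[0, 1] ** 2
    print(f"{name:>9}: ~{100 * r2:.0f}% of output variance")
# The top-ranked input is where additional testing or tighter quality control
# buys the largest reduction in prediction uncertainty.
```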