When developing embedded software, you almost never implement every line of code from scratch. Instead, various tools are available to make the firmware designer more productive; these range from simple drivers to network stacks, operating systems, and even code generation tools. Although these components are generally well tested individually, no real-world software is bug-free. With so many possible combinations of tools and libraries, the likelihood that you are using components together in a novel way is relatively high. For this reason, the FDA mandates that for all off-the-shelf software used in a medical device, you validate that the software stack works for your specific use case. What does that mean? Say you are using a signal processing library that contains a fixed-point fast Fourier transform (FFT), and you are detecting the presence of a certain frequency component. You do not need to validate that the FFT returns the correct answer for all possible inputs; rather, you need to validate that it returns what you expect for all valid inputs according to your specifications. For example, if you are detecting only human-audible tones, there is no reason to test that the function returns correct values for inputs above 20 kHz. Unfortunately, as we have seen, software components that seem independent are not necessarily so. Therefore, if you are using that library alongside an SPI driver and a real-time operating system (RTOS), you need to validate all of these components together to have confidence in the FFT. If the FFT passes a valid output to the SPI driver but the SPI driver crashes, then obviously there is a problem. If you then decide to modify the SPI driver, you need to validate the entire software stack again. This can become very cumbersome, and the delays can compound and cause your schedule to slip.

In the case of an FPGA, there is still the concept of external IP (commonly called IP cores), and your use of this IP needs to be validated just like software IP. However, once you have validated all of your use cases, you can have confidence that the IP will behave as expected when integrated with other components. Consider the FFT example again. If you used an FPGA, you would acquire or generate an FFT IP core and validate its numerical correctness for your use case, just as with the software. However, the risk of intermittent failure decreases drastically because the middleware has been removed. There is no longer an RTOS, and the SPI driver is its own IP core whose operation does not directly affect the FFT. Furthermore, if you modify the SPI driver implementation, there is no need to revalidate the unaffected areas of the system.

Figure 1: The FDA does not validate source code. Rather, it validates the process that you use to develop the code. This shifts the burden of safety to the maker of the device.
Most of us know about buffer overflow through cryptic hacker exploits and the subsequent Microsoft patches, but it is also a common error when developing embedded devices. A buffer overflow occurs when a program tries to store data past the end of the memory allocated for that storage, overwriting adjacent data that it shouldn't. This can be a nasty bug to diagnose, since the overwritten memory could be accessed at any time in the future and may or may not cause obvious errors. One of the more common buffer overflows in embedded design results from high-speed communication of some sort, perhaps from a network, a disk, or an A/D converter. When servicing of these communications is interrupted for too long, their buffers can overflow, and this must be accounted for to avoid crashes.

An FPGA can help here in two ways. In the first, the FPGA manages a circular or double-buffered transfer, offloading that burden from the processor. In this case, the FPGA serves as a coprocessor that reduces the interrupt load on the processor; this is a common configuration, especially with high-speed A/D converters. In the second, the FPGA acts as a safety layer, with all of the patient-facing I/O routed through the FPGA before it reaches the processor. You can then add safety logic to the FPGA so that the outputs can be placed in a known, safe state in the event of a software crash on the processor. Here the FPGA serves as a watchdog, and correctly implemented logic ensures that patient risk is lowered despite a software failure. With the architectural decision to place an FPGA in the primary signal chain, these two methods can be combined to guard against buffer overflow and to better handle it if it does occur (Figure 2).

In the end, we're really discussing the differences between

RTC MAGAZINE OCTOBER 2009