WEALTHTECH & TRADING: PROGRAMMING

Today’s traders need smart options if they are to react to the market in real time. We asked three low-latency experts from Cisco how two technologies, used in combination, can help

The network interface card, aka NIC, has been around for a while; but it’s the technology’s more intelligent ‘brother’, SmartNIC, who has been turning heads more recently. This capable beefcake goes by several aliases – probably the dullest acronym is data processing unit (DPU) – but, at heart, he’s a networking adapter card with a programmable processor.

Many say SmartNIC’s full enterprise potential hasn’t yet been realised, partly because of the investment needed. But in environments where cost, time and adaptability are all finely balanced, as in the case of financial services and, particularly, in high-frequency trading
(HFT), the numbers soon start to add up.

When a SmartNIC carries an integrated circuit that can be programmed in the field – a field programmable gate array (FPGA) – it allows all manner of computational add-ons to be accessed by a customer, using open-source tools. Combined with the processing power of a SmartNIC – which can offload work from a central processing unit (CPU) and shift network packets 40 times faster than traditional high-performance NICs – this double act can change the nature of trading and literally buy time. It’s been said that the return on investment (ROI) can be measured in fractions of a second.

So, we asked three ultra-low latency experts at Cisco, one of the companies leading developments in this area for financial services, to help us get better acquainted with SmartNIC and FPGAs. Dan Brown is a technical solutions architect, responsible for ‘anything that’s related to the nanosecond or even lower’, who works alongside fellow ultra-low latency specialist Mike Skory, and Bob DiPietro, an ultra-low latency technical solutions architect with particular experience in toolchain development for finance markets. We began by asking them to give us a history lesson.

THE FINTECH MAGAZINE: Before programmable NICs, what did the industry have to settle for? And when did things really start to speed up, and why?
DAN BROWN: The latency race started in 2007. Ever since then, people have been building technology infrastructure to basically reduce the length of time it takes to trade on exchanges.

BOB DIPIETRO: There’s been a whole evolution. It was originally based on NICs, with everything going back to a processor via the kernel stack, and all logic based in the host. Then, software-based stacks and kernel bypass arrived. Next, with FPGAs, some or most of the work that was done in the host could be offloaded and the latency reduced. You can now program in the FPGA and avoid the whole chain up to the host and back.

For more complex problems, you can also use the host in conjunction with the FPGA. For example, you can create a hybrid stack, where you don’t have to build an entire transmission control protocol (TCP) engine inside the FPGA and yet still be able to send TCP in the FPGA for ultra-low latency. This avoids the limited functionality, performance and resource constraints that come with putting a TCP engine in an FPGA.

DB: At Cisco, we have a few different categories of NICs – the X25 and X100, which we’d classify as our drop-in NICs; they can be used to accelerate communications to the network. You can apply our kernel-bypass stack to that and get into the sub-microsecond range easily, for near-enough any application. The primary use case for those NICs has been in financial services, for trading, but they do have applications outside of that.
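The evolution DiPietro describes – kernel stack, then kernel bypass, then FPGA offload – is, in effect, a shrinking latency budget: each step removes stages from the wire-to-wire path. The toy model below makes that shape concrete. All stage names and nanosecond figures are hypothetical round numbers chosen purely for illustration; they are not Cisco measurements.

```python
# Illustrative latency-budget model for the three paths described above.
# Stage names and nanosecond figures are invented round numbers, NOT vendor
# data -- they only show the relative shape of each path.

PATHS = {
    # Classic NIC: packet -> kernel stack -> user-space app -> back out.
    "kernel_stack": {
        "nic_rx": 800, "kernel_rx": 3000, "app_logic": 1000,
        "kernel_tx": 3000, "nic_tx": 800,
    },
    # Kernel bypass: the app talks to the NIC directly from user space,
    # replacing the kernel's stack with a leaner user-space one.
    "kernel_bypass": {
        "nic_rx": 800, "userspace_rx": 700, "app_logic": 1000,
        "userspace_tx": 700, "nic_tx": 800,
    },
    # FPGA offload: decision logic runs on the card itself, so the
    # host round trip disappears from the critical path entirely.
    "fpga_offload": {
        "fpga_parse": 100, "fpga_logic": 150, "fpga_tx": 100,
    },
}

def total_ns(path: str) -> int:
    """Sum the stage latencies (in nanoseconds) for one path."""
    return sum(PATHS[path].values())

for name in PATHS:
    print(f"{name:14s} {total_ns(name):>6d} ns")
```

The hybrid stack DiPietro mentions sits between the last two rows: the host keeps the complex, stateful parts of TCP, while the FPGA handles only the latency-critical transmit, so the fast path looks like `fpga_offload` even though a full TCP engine never has to fit on the card.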
The NIC of time
ffnews.com
Issue 23 | TheFintechMagazine