How data fusion will transform tomorrow’s operating rooms

Imagine driving to an appointment at night without the benefit of streetlights, signs, people who can help, or even a windshield. All you have is two displays to the side of the steering wheel: One shows a street map; the other shows where you are. Wouldn’t things be a lot simpler if the images could be combined?

This is roughly the challenge confronted by cardiologists when performing what are known as “interventional” procedures, such as implantation of a stent or valve by means of a remotely controlled catheter. During such procedures, a nearby monitor typically displays a high-resolution pre-operative computed tomography (CT) image of the vascular anatomy, while a separate X-ray fluoroscopy image produced in the interventional suite itself shows the real-time location of the catheter tip.

“Surgeons are skilled in putting these images together in their minds,” says Daphne Yu, who heads the Image Visualization Lab at Siemens Corporate Technology in Princeton, New Jersey. “But by using advanced visualization, we can put the pictures together for them.” The big picture, however, is much broader than that. Indeed, what Yu and her colleagues at Corporate Technology and at Siemens’ vast Healthcare Sector have in mind is nothing less than a vision of tomorrow’s operating and interventional environments in which all modalities are ergonomically integrated.

Such modalities include, for instance, live endoscopic images, ultrasound, real-time CT, fluoroscopy, electrophysiology (used in neutralizing cardiac tissues responsible for arrhythmias), and, above all, 3-D pre-operative CT or magnetic resonance (MR) image sets. The latter are particularly important because they can provide the navigational landscape into which all other modalities will eventually be integrated.

A Roadmap Takes Shape. With this vision of tomorrow’s integrated treatment environment in mind, researchers at Siemens Corporate Technology have developed learning-based software that can identify and segment (separate from its surroundings) any organ in any digital medical image, regardless of occlusions, angle of view, imaging modality, or pathology.

An example of this capability is heart model segmentation software that automatically separates the heart from a 3-D CT or MR image set. When combined with live fluoroscopy, segmented heart models can be used, for instance, to locate the exact areas on the heart’s surface to be ablated in order to neutralize arrhythmia-causing tissues.
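
To make the idea concrete, the following sketch (in Python, with invented numbers) projects a few landmark points from a segmented 3-D heart model onto the 2-D fluoroscopy image plane. The 3 x 4 projection matrix stands in for a calibrated C-arm geometry; none of the values describe Siemens’ actual implementation.

```python
import numpy as np

def project_to_fluoro(points_3d, P):
    """Project 3-D model points (N x 3, patient coordinates in mm) onto the
    2-D fluoroscopy image plane using a 3 x 4 projection matrix P."""
    homogeneous = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    uvw = (P @ homogeneous.T).T
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide

# Hypothetical C-arm projection matrix and three landmarks taken from a
# segmented heart model (for example, points ringing an ablation target).
P = np.array([[1000.0,    0.0, 256.0,   0.0],
              [   0.0, 1000.0, 256.0,   0.0],
              [   0.0,    0.0,   1.0, 900.0]])
landmarks = np.array([[ 10.0, -20.0, 150.0],
                      [  0.0,   5.0, 160.0],
                      [-15.0,  12.0, 145.0]])
print(project_to_fluoro(landmarks, P))   # pixel positions for the overlay
```

In practice the projection changes whenever the C-arm moves, so such an overlay has to be recomputed for every frame.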

In addition, at the U.S. National Institutes of Health (NIH) in Bethesda, Maryland, live image-model fusion software developed by Siemens Corporate Technology in cooperation with Siemens Healthcare has been used experimentally to help guide an artificial valve to its target in a pig’s heart. “This fusion of heart models and live images provides the landmarks that help physicians identify exactly where a catheter is located in real time,” says Yu. “It is a promising example of the power of image fusion in the interventional suite and operating room.”

Working along similar lines, Razvan Ionasec, PhD, a specialist in machine learning applications for medical imaging at Siemens Imaging & Therapy Systems Division in Forchheim, Germany, is combining pre-operative 3-D CT images with 2-D X-ray video images generated in the operating room itself by a Siemens “C-arm” CT scanner. “What typically happens,” he explains, “is that before an operation you have a lot of high-resolution equipment and time to produce images. But what you want is to make this pre-op information available in the operating room, where time is short and imaging power is limited. To bridge this gap, the pre-op information is mapped to the fluoroscopy data. As a result, all of a sudden, you have real-time motion information — something you would never be able to get from fluoroscopy alone.”
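
The mapping Ionasec describes amounts to finding the transform that makes the pre-op model line up with the live fluoroscopy frame. The sketch below reduces this to its simplest form: recovering a translation-only misalignment from a handful of corresponding landmark points. Real 2-D/3-D registration estimates a full rigid or deformable transform, usually from image content rather than hand-picked points, and every number here is hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def project(points_3d, P):
    """Project 3-D points onto the 2-D image plane with a 3 x 4 matrix P."""
    h = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    uvw = (P @ h.T).T
    return uvw[:, :2] / uvw[:, 2:3]

def residuals(t, ct_points, detected_2d, P):
    """2-D reprojection error after shifting the CT model by translation t."""
    return (project(ct_points + t, P) - detected_2d).ravel()

# Hypothetical projection matrix, CT landmarks, and the same landmarks as
# "detected" in the live fluoroscopy frame (synthesised here from a known shift).
P = np.array([[1000.0, 0.0, 256.0, 0.0],
              [0.0, 1000.0, 256.0, 0.0],
              [0.0, 0.0, 1.0, 900.0]])
ct = np.array([[10.0, -20.0, 150.0], [0.0, 5.0, 160.0], [-15.0, 12.0, 145.0]])
true_shift = np.array([3.0, -2.0, 5.0])
detected = project(ct + true_shift, P)

fit = least_squares(residuals, x0=np.zeros(3), args=(ct, detected, P))
print(np.round(fit.x, 2))   # recovered shift, close to (3, -2, 5)
```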

The integration of modalities is already paying off. A technology for the interventional placement of aortic valves has recently been bolstered by the addition of pre-operative CT data. The resulting product, syngo.CT Valve Pilot™, not only automatically segments the aortic valve and related structures from a CT scan, but also provides measurements, such as the radius of the valve, that are essential for planning and conducting an intervention.
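
How might such a measurement be obtained? One simple possibility, shown below with invented data, is to fit a circle to the segmented annulus contour and read off its radius. This is an illustration only, not a description of the method used in syngo.CT Valve Pilot.

```python
import numpy as np

def fit_circle(points_2d):
    """Least-squares (Kasa) circle fit: returns the centre and radius of the
    circle that best matches a set of 2-D contour points."""
    x, y = points_2d[:, 0], points_2d[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    return (cx, cy), np.sqrt(c + cx ** 2 + cy ** 2)

# Hypothetical annulus contour: points on a 13 mm-radius circle plus noise,
# standing in for a segmented aortic annulus projected onto its own plane.
rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
contour = np.column_stack([13.0 * np.cos(theta), 13.0 * np.sin(theta)])
contour += rng.normal(scale=0.3, size=contour.shape)
centre, radius = fit_circle(contour)
print(f"estimated annulus radius: {radius:.1f} mm")   # close to 13 mm
```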

Meanwhile, another technology, known as “eSieFusion™ imaging”, overlays live ultrasound images on previously acquired 3-D CT and MR image sets. The technology, which is now available on Siemens’ ACUSON S3000™ ultrasound systems, is used to guide a biopsy needle to its target with enhanced confidence. Ultrasound will eventually also be integrated with CT and X-ray images to support the placement of aortic valves, says Ionasec.
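
Conceptually, guiding a needle with fused images means mapping positions from the live ultrasound frame into the coordinate system of the previously acquired volume. The sketch below applies that mapping to a single tracked needle-tip point, with an invented rigid transform standing in for the output of the registration step.

```python
import numpy as np

def us_to_ct(points_us, R, t):
    """Map points from the ultrasound probe's coordinate frame into the CT
    volume's frame using the rigid transform (R, t) found at registration."""
    return points_us @ R.T + t

# Hypothetical registration result and a tracked needle-tip position (mm).
R = np.eye(3)                                  # assume the axes already align
t = np.array([12.0, -4.5, 30.0])               # probe origin inside the CT volume
needle_tip_us = np.array([[0.0, 0.0, 55.0]])   # 55 mm along the beam axis
print(us_to_ct(needle_tip_us, R, t))           # where to draw the tip on the CT
```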

Data Fusion Goes Mobile. In addition to the integration of multiple clinical modalities, researchers at Siemens Corporate Technology have their sights set on making such images available wherever they are needed in real time. “Rather than having a huge screen with separate views of the area to be treated,” says Yu, “we have come to the conclusion that it is more practical and comfortable to have a single, integrated image that is portable.” Such an image could be available on a stand-mounted tablet or could even appear in a head-mounted device. The latter would support the integration of visual and mental activity with hand-eye coordination and might even be used in an augmented reality context, thus allowing a surgeon to superimpose diagnostic information on his or her actual field of view.

To realize this vision, Siemens researchers are developing techniques to promote extremely fast visualization. For instance, a team led by Dr. Andreas Hutter at Siemens Corporate Technology is focusing on ways to tailor streaming and video compression to medical applications, while others are working with chip manufacturers to minimize the computing and power demands needed to process images. “These efforts are starting to pay off,” says Yu. “They have made it possible for us to stream real-time images to a tablet using standard Ethernet technology.”
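
A back-of-the-envelope calculation suggests why compression is the key to streaming such images to a tablet. All of the figures below are assumptions chosen for illustration, not Siemens specifications.

```python
# Back-of-the-envelope data rates for one video stream (all values assumed,
# not Siemens specifications).
frame_w, frame_h = 1024, 1024   # pixels per fluoroscopy frame
bit_depth = 16                  # bits per pixel from the detector
frame_rate = 30                 # frames per second
compression_ratio = 20          # assumed for a tailored medical codec

raw_mbps = frame_w * frame_h * bit_depth * frame_rate / 1e6
compressed_mbps = raw_mbps / compression_ratio
print(f"raw: {raw_mbps:.0f} Mbit/s, compressed: {compressed_mbps:.0f} Mbit/s")
# Roughly 503 Mbit/s raw vs 25 Mbit/s compressed: a single raw stream already
# claims half of a gigabit Ethernet link, while the compressed stream leaves
# room for several fused modalities and even wireless, tablet-class links.
```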

The need for a virtually imperceptible delay is clear. “If you are pushing a needle or a catheter through a patient’s anatomy you need to have instant feedback,” says Yu. “For instance, if you are doing a procedure in which angiography is involved, our scanner works super fast to produce each image and encode it. The images must then be streamed to the viewing device, decoded and rendered.” Naturally, processing demands are even higher as additional imaging modalities are added and fused. Nevertheless, whatever delay this adds will probably not be noticeable. With eSieFusion imaging, for instance, initial registration of CT and ultrasound images requires three seconds, after which any two images can be fused in real time.
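
To see how the individual stages add up, consider a toy latency budget for one fused frame. Every value is an assumption; the point is simply that each stage must stay in the low milliseconds for the overall delay to remain imperceptible.

```python
# Toy end-to-end latency budget for one fused frame (all values assumed).
stages_ms = {
    "acquire + encode": 12,   # scanner produces and compresses the image
    "stream to tablet": 3,    # wired Ethernet hop
    "decode": 4,
    "fuse + render": 10,      # overlay the pre-op model and draw the frame
}
total_ms = sum(stages_ms.values())
print(f"end-to-end delay: {total_ms} ms "
      f"({'below' if total_ms < 100 else 'above'} the ~100 ms threshold "
      "commonly cited for 'instant' visual feedback)")
```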

Adding Expert Systems to the Picture. Nor will multimodality data fusion in tomorrow’s operating rooms and interventional suites be limited to images. “Our vision is that all the information that’s needed will be available when and where it is needed,” says Yu. “In addition to pre-op and real-time image fusion from multiple sources, we will have live patient monitoring, such as heart rate and blood pressure.” Further down the road, demographic data and expert systems built on thousands of similar cases could be brought to bear on individual procedures, thus opening the door to virtual consultation functions and the analysis of alternatives.

On-the-spot simulation functions might, for instance, provide advice as to the best location to clip an aneurysm based on real-time computational fluid dynamics. Virtual angiography, individualized anesthesia, and drug dose interactions — all could be simulated during a procedure and then tracked as they are administered in order to refine the underlying algorithms.

Last but not least, data fusion can be expected to save money. “It will provide a method for automatically recording procedures,” says Yu. “This will support effective systems for reimbursement, and can be exploited by learning systems to further refine treatments.”

For all its potential, multimodality data fusion will need to overcome many challenges. Software from different systems will need to become far more interoperable, standards for everything from image quality to transmission speed will need to be developed, and a virtually unlimited appetite for bandwidth will demand ever-increasing processing power and energy efficiency. “It is still early days for real-time data fusion,” says Yu, “but when you add up everything that is happening in this field, you see that we are in the process of creating an ecosystem that will transform the way we plan, perform, document, and learn from a vast range of treatments.”