The use of medical data to enhance Virtual Reality medical simulators

If you are a medical student or a practising doctor, you are probably interested in improving your professional skills every day. I am not talking about practising on patients or reading books about human anatomy. Those are all good, but there are other techniques out there.

Imagination is a powerful tool. I am neither a doctor nor a medical student; I am an innovator, so I will focus on unlocking the possibilities at the edge of advanced technology. In the very near future you will be able to practise differently and prepare for surgery anytime, anywhere.

Computing power is increasing daily, and many say the growth is exponential. At CES 2018 Intel showcased its new 8th-generation Core processors, many times faster than the previous generation. Even more exciting, Intel announced a 49-qubit quantum chip, with CEO Brian Krzanich calling it a major breakthrough in quantum computing and the next step toward "quantum supremacy".

Most of you have heard about medical imaging techniques such as X-ray, MRI and CT; these are the most popular ones. Your physician might have asked you to take a scan of your head or arm to check for an injury or a tumor. If the scan reveals something, the doctor uses it for planning and decision making. Without medical imaging, diagnosis would be like searching for a needle in a haystack.

The tech startup community is so huge and the competition so fierce that it is almost impossible to know what everyone is doing. Innovation is flourishing. I have seen some great examples built on HoloLens, like this one: a tool where you can interact with a 3D representation of a human and, by slicing it or moving over the patient's body, learn and prepare. It was developed by an amazing company called VSI (Virtual Surgery Intelligence). Or this virtual patient developed by MediSim. Recently, while visiting Europe's biggest tech conference, Slush, I was lucky enough to meet an amazing company called Holoeyes. These guys are from Japan and have built an impressive medical image segmentation algorithm and a unique color labeling system; the resulting 3D anatomical structures are loaded into the HoloLens. Cool stuff. It is still not fully accurate, but it shows the possibilities.

SemanticMD is another very cool company. What do they do? They train AI to make sense of medical imaging data. For example, if you want to teach a computer to find a tumor in a brain scan, you can use their tool to annotate scans and build training data. That training data can then be used to train a model that supports decision making or labels new data. Simple as that: now you have a robot radiologist that finds the tumor in a brain scan. Of course, you might say it won't be very precise and that humans can do it better. Not true, and in the next section I will explain why.
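To make the annotate-train-predict loop concrete, here is a minimal sketch in PyTorch. The random tensors stand in for annotated scan patches; the model, shapes and labels are my illustrative assumptions, not SemanticMD's actual tools or formats.

```python
import torch
import torch.nn as nn

# 1. "Annotated" training data: 100 single-channel 64x64 patches, each
#    labeled 1 (tumor) or 0 (healthy). Random noise as a stand-in.
images = torch.randn(100, 1, 64, 64)
labels = torch.randint(0, 2, (100,)).float()

# 2. A tiny classifier: one conv layer, global pooling, a linear head.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# 3. Train on the annotated data.
for epoch in range(5):
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()

# 4. Label a new, unseen patch: the "robot radiologist" step.
new_patch = torch.randn(1, 1, 64, 64)
prob = torch.sigmoid(model(new_patch)).item()
print(f"estimated tumor probability: {prob:.2f}")
```

A real system would of course use thousands of expert-annotated scans and a much deeper network, but the loop is the same: annotate, train, then label new data.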

If you are interested in deep learning for medical image processing, its challenges and its future, I recommend reading a recently published article by researchers Muhammad Imran Razzak, Saeeda Naz and Ahmad Zaib. They propose the following:

“Deep learning technology applied to medical imaging may become the most disruptive technology radiology has seen since the advent of digital imaging. Most researchers believe that within next 15 years, deep learning based applications will take over human and not only most of the diagnosis will be performed by intelligent machines but will also help to predict disease, prescribe medicine and guide in treatment.”

Another article, written by researchers, radiologists and engineers at the New York University School of Medicine, explains that 3D CNN algorithms outperform traditional 2D ones. No surprise: a 3D network sees the whole volume rather than one slice at a time, and precision is all about the quality and amount of the training data.
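Here is a minimal sketch of the difference in PyTorch (the shapes are mine, purely illustrative): a 2D convolution processes a single slice, while a 3D convolution processes a whole stack of slices and can use the context between them.

```python
import torch
import torch.nn as nn

slice2d = torch.randn(1, 1, 256, 256)       # (batch, channels, H, W)
volume3d = torch.randn(1, 1, 64, 256, 256)  # (batch, channels, D, H, W)

conv2d = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
conv3d = nn.Conv3d(in_channels=1, out_channels=8, kernel_size=3, padding=1)

# The 2D kernel slides over one slice; the 3D kernel also slides
# across the depth axis, mixing information between slices.
print(conv2d(slice2d).shape)   # torch.Size([1, 8, 256, 256])
print(conv3d(volume3d).shape)  # torch.Size([1, 8, 64, 256, 256])
```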

So how do you make a surgical simulator in virtual reality where you can see the patient's anatomy in 3D, so precisely that you can trust it? Here is my recipe:

  1. Convert the biomedical data into a point cloud
  2. Segment the point cloud into a mesh (steps 1 and 2 are sketched in code after this list)
  3. Recognize the structures using deep learning
  4. Morph the mesh using an ideal mesh template developed by Anatomy Next
  5. Label and interconnect the independent structures via the FMA (Foundational Model of Anatomy), developed by the University of Washington
  6. Add the secret ingredient
  7. Place the transformed, morphed and labeled mesh into a virtual reality simulator
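As promised, here is a minimal sketch of the first two steps, assuming a CT scan stored as a NIfTI volume. The file name and the Hounsfield threshold are my assumptions, not specifics from the recipe, and in practice the mesh is usually extracted from the thresholded volume directly, here via marching cubes.

```python
import numpy as np
import nibabel as nib                       # reads NIfTI medical volumes
from skimage.measure import marching_cubes

BONE_HU = 300                               # rough bone threshold (assumption)

img = nib.load("ct_scan.nii.gz")            # hypothetical input scan
volume = img.get_fdata()                    # 3D array of intensities
spacing = img.header.get_zooms()[:3]        # voxel size in millimetres

# Step 1: point cloud. Take every voxel above the threshold and convert
# its index to millimetre coordinates, giving an (N, 3) array of points.
points = np.argwhere(volume > BONE_HU) * np.asarray(spacing)
print(f"point cloud: {len(points)} points")

# Step 2: surface mesh. Marching cubes extracts the bone isosurface as
# vertices and triangular faces, ready for structure recognition and
# morphing in the later steps.
verts, faces, normals, values = marching_cubes(
    volume, level=BONE_HU, spacing=spacing
)
print(f"mesh: {len(verts)} vertices, {len(faces)} faces")
```

The later steps, deep-learning recognition, template morphing and FMA labeling, each build on this mesh, which is why the quality of the initial segmentation matters so much.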

Several years ago, processing the steps above would have taken months. Now, I argue, it would take just minutes or maybe even seconds. If so, you could get a training model from a real patient almost instantly and start practising your skills.

In the very near future, you will see medical students put on a virtual reality headset and run any surgical case they can imagine. It will look so real that you won't be able to tell the difference between a real patient and a virtual one. The software will talk to you and give you daily tasks like a personal coach. However, the human factor will stay very important. I am not suggesting we exclude humans; I am suggesting we make something new and interesting.

