What does the future hold for AI and 3D printing?
In most patient cases, a Computed Tomography (CT) or Magnetic Resonance Imaging (MRI) scan gives a clinician enough insight to confidently define and proceed with a surgical plan. However, for the more than eight million complex procedures taking place each year, a 2D scan is not always sufficient for planning surgery and communicating the proposed course of action to the patient.
So, what is the solution?
A 3D anatomical model removes unnecessary variability between one surgeon’s anatomical interpretation and another’s, standardising how the patient’s anatomical detail is interpreted. The models can be held in the surgeon’s hands and fully scrutinised, allowing them to define and simulate a surgical plan before they set foot in the operating theatre - reducing the risk to the patient.
The use of 3D printing to create patient-specific models for pre-operative planning is still in its infancy. In fact, a Gartner study shows that only around three percent of hospitals and research institutions have 3D printing capabilities on site, with more hospitals adopting the technology each year. The recently installed 3D printing lab at Newcastle’s RVI is just one example; even so, it is clear there is work to be done in addressing this gap.
One of the reasons the technology has not been more widely adopted - and often the largest bottleneck in producing 3D printed anatomical models - is the limited availability of radiologists or biomedical engineers to segment the 2D images. Segmentation is the partitioning of an image into multiple labelled regions in order to locate objects and areas of interest. It can be an extremely time-consuming process, taking clinicians away from treating patients for hours at a time.
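To make the idea of segmentation concrete, here is a deliberately simplified sketch: a toy 2D "scan" is partitioned into labelled regions by thresholding the pixel intensities and flood-filling each connected bright area. Real medical segmentation operates on full DICOM volumes with far more sophisticated methods; the function and data here are purely illustrative.

```python
# Illustrative sketch only: threshold-plus-flood-fill segmentation of a toy
# 2D image. Each connected region of bright pixels receives its own label.

def segment(image, threshold):
    """Label connected regions of pixels whose intensity exceeds threshold."""
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]  # 0 = background
    current = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] > threshold and labels[r][c] == 0:
                current += 1                    # start a new region
                stack = [(r, c)]                # flood-fill it
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < rows and 0 <= x < cols
                            and image[y][x] > threshold and labels[y][x] == 0):
                        labels[y][x] = current
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return labels

scan = [
    [0, 9, 9, 0, 0],
    [0, 9, 0, 0, 8],
    [0, 0, 0, 8, 8],
]
labels = segment(scan, threshold=5)
# The two separate bright areas end up with distinct region labels.
```

Doing this by hand, slice by slice, across hundreds of images is what makes manual segmentation so labour-intensive.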
Automation is essential
If 3D printing is to become a go-to pre-surgical routine in healthcare, then automation is essential. Producing a 3D printable model from 2D images currently takes anywhere from four to ten hours per printed model. Axial3D is reducing this by building and using machine learning algorithms, allowing us to deliver near-instantaneous results and removing the main bottlenecks associated with medical 3D printing.
We have developed an online ordering portal that allows surgeons to quickly and easily place an order for a 3D printed model. The anonymised data is given a unique identifier code, then uploaded to and managed on the AWS Cloud, allowing us to handle large volumes of medical images quickly and securely. All of this speeds up the process of producing and shipping the patient-specific 3D anatomical models to meet our delivery guarantee of 48 hours.
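The identifier step described above might look something like the following sketch. The function and key format are hypothetical, not Axial3D's actual system: anonymised scan data receives an opaque code with no patient-identifying information, and that code becomes the object name used for cloud storage.

```python
# Hypothetical sketch of assigning a unique, patient-free identifier to an
# anonymised case before upload. Names and key layout are illustrative only.

import uuid

def make_case_key(hospital_code: str) -> str:
    """Build an opaque storage key for an uploaded, anonymised scan."""
    case_id = uuid.uuid4().hex  # random code; carries no patient information
    return f"{hospital_code}/cases/{case_id}.zip"

key = make_case_key("hospital-001")
# e.g. "hospital-001/cases/3f2a….zip" - an AWS SDK upload call would then
# use this key as the object name in cloud storage.
```

Because the identifier is random, two cases from the same hospital can never collide, and nothing about the patient can be recovered from the key itself.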
By applying machine learning to medical image segmentation, we have reduced our processing time to a few minutes. We are able to quickly deploy new models as they become available, facilitating rapid testing of new architectures and benchmarking performance of algorithms over time. By leveraging the power of AWS, we have the capacity to process thousands of images simultaneously while creating cost and efficiency savings.
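The fan-out pattern implied above - many images processed at once rather than one after another - can be sketched in a few lines. The worker function here is a stand-in for a real trained segmentation model, and the names are illustrative, not a description of Axial3D's pipeline.

```python
# Illustrative only: dispatching many segmentation jobs in parallel, as one
# might on cloud infrastructure. segment_study stands in for a real model.

from concurrent.futures import ThreadPoolExecutor

def segment_study(study_id: str) -> str:
    # Placeholder for running a trained segmentation model on one study.
    return f"{study_id}: segmented"

studies = [f"study-{n:03d}" for n in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(segment_study, studies))
```

On cloud infrastructure the same pattern scales out across machines, which is what makes processing thousands of images simultaneously practical.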
The use of machine learning has enabled us to provide a rapid ‘DICOM to model’ service to clinicians wherever they are in the world, 24 hours a day, seven days a week, 365 days a year. The effect this has on patient care is game-changing. No longer will radiologists have to spend hours segmenting images to make them 3D-printable. No longer will surgeons need to wait weeks for a 3D printed model to be produced and shipped. No longer will hospitals be expected to pay thousands of dollars for a single 3D printed model.