Dr Gallagher, you are very well known to the academic and clinical world for your extensive research on VR. Tell us a little more about the latest paper you published together with Dr Cates and Prof. Lonn: Prospective, randomised and blinded comparison of proficiency-based progression full-physics virtual reality simulator training versus invasive vascular experience for learning carotid artery angiography by very experienced operators.
There is a relative lack of published studies evaluating the impact of VR simulation training on the actual clinical performance of individual operators on patients. Indeed, none of the prospective, randomised clinical validation studies that have evaluated simulation training have determined its utility for training highly experienced operators learning a new technique or new procedural skill, even in surgery. The aim of this study was to evaluate the utility of VR simulation training, in comparison with one-to-one proctored/mentored in vivo training, for highly experienced interventional cardiologists (ICs) attempting to learn a new procedure, that is, carotid angiography (CA).
Could you tell us a little more about the design and methods you used?
We wanted to conduct rigorous research, so we decided that the study should be prospective, blinded and randomised, which is the highest level of evidence-based research. We selected twelve very experienced interventional cardiologists who were new to carotid artery angiography and then randomised them either to train on virtual reality (VR) simulation to a quantitatively defined level of proficiency or to traditional supervised in vivo patient case training. We also made every effort to avoid bias from age and experience.
Was there any introductory training in CA for the participants?
Yes, they all underwent comprehensive online education, which included instruction on carotid anatomy and aortic arch types, the devices used to perform the procedure and familiarisation with the steps of the procedure. All participants were independently assessed at this stage and demonstrated excellent didactic performance, with passing scores of 100%.
So, can we say that after this introductory training all 12 participants had an equal understanding of carotid artery angiography? And what happened next?
At this point the participants were randomly allocated to either VR training or conventional training. One group trained on the Vascular Interventional Simulation Trainer (VIST) under the guidance of an expert trainer; they were instructed to perform the procedure on the VIST until they reached a metrics-based proficiency level. The other group completed supervised, mentored training during an elective in vivo case, following the traditional mentor-apprenticeship learning model: participants in this group were mentored for one complete CA case by a cardiologist very experienced in CA and stenting procedures.
If I understand correctly, one group trains on the simulator, supervised by an experienced physician, until they demonstrate proficiency, while the other group performs a case on a patient under the strict guidance of an expert colleague. But clearly the first group can repeat the procedure over and over, while the second has only one opportunity to perform it?
Yes, and this is the first obvious advantage of VR training: one can repeat a procedure until demonstrating proficiency at it, at least on the simulator. The other group has to acquire their skills and competence while operating on real patients, albeit under the supervision of their mentor and proctor.
And does that translate into proficiency when those VR-trained physicians operate on a patient?
That was exactly the purpose of our research. After the two groups had trained by their respective methods, all 12 participants completed a separate supervised but un-mentored complete CA case as the primary operator, proctored by an experienced interventional cardiologist who was also an expert in CA but who was blinded to the training status of the subject. The proctor was instructed to behave towards the subject exactly as they would during a normal case.
The operative performance was video-recorded for subsequent analysis by the experienced operators described above, who were blinded to the operator's identity and training status. The video assessments were scored and analysed for unambiguously defined metric errors, attending takeovers, procedure time and fluoroscopy time for both groups.
You mentioned “Metrics and Proficiency levels”. How did you reach a consensus over those?
Proficiency levels were established from experts' mean performance on a specific CA VR case on the VIST simulator, a methodology first reported by Seymour et al. and validated with a large number of trainees learning carotid angiography. We agreed on a set of intraoperative errors that must be avoided, such as severe dragging of the tip of the catheter along the wall of a vessel for a distance >3 mm. A catheter movement error was defined and recorded when the catheter was advanced into the carotid artery without the guide-wire tip inside the catheter, or when the catheter was too close to the lesion. The experts' mean performance defined the proficiency benchmark: fluoroscopy time ≤6.2 min and a total technical error score ≤4.
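The proficiency benchmark described above amounts to a simple pass/fail rule on two measurements. A minimal sketch, assuming hypothetical function and parameter names (not part of the VIST system):

```python
def meets_proficiency(fluoro_time_min: float, error_score: int) -> bool:
    """Check one simulator attempt against the expert-derived benchmark:
    fluoroscopy time <= 6.2 minutes AND total technical error score <= 4.
    Both criteria must be met for the attempt to count as proficient."""
    return fluoro_time_min <= 6.2 and error_score <= 4

# An attempt at 5.8 min of fluoroscopy with 3 errors passes;
# exceeding either threshold fails the attempt.
print(meets_proficiency(5.8, 3))  # True
print(meets_proficiency(6.5, 3))  # False
```

In the study's design, a trainee in the VR arm would repeat the simulated case until an attempt satisfied both criteria.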
Now the big question. How did the two groups perform?
I want to emphasise that the experts who assessed the participants by watching their video-recorded procedures were blinded to their identity and training status. That means they didn't know whether they were watching someone who had been trained on VR or by an in vivo proctored case.
The mean time to perform the carotid artery angiography (CA), fluoroscopy time and intraoperative operator errors were measured, and the differences between the groups were tested for significance. Our results show that experienced interventional cardiologists trained on the VR simulator performed significantly better than their equally experienced controls, with a significantly lower rate of objectively assessed intraoperative errors in CA.
Overall, the VR-trained cardiologists performed the procedure 17% faster, used 21% less fluoroscopy and made 49% fewer intraoperative errors than their conventionally trained colleagues.
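These percentages are relative reductions of the VR group's mean against the control group's mean. A minimal illustration of the arithmetic (the group means below are hypothetical, chosen only to show the calculation, and are not the study's raw data):

```python
def relative_reduction(control_mean: float, vr_mean: float) -> float:
    """Percentage reduction of the VR group's mean relative to the
    control group's mean: 100 * (control - VR) / control."""
    return 100.0 * (control_mean - vr_mean) / control_mean

# Hypothetical procedure-time means (minutes), for illustration only:
# a control mean of 10.0 and a VR mean of 8.3 gives a 17% reduction,
# matching the "17% faster" figure in form, not in actual data.
print(round(relative_reduction(10.0, 8.3), 1))  # 17.0
```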
It sounds like a pretty important piece of evidence in support of an alternative way of training operators?
This is the first prospective, randomized and blinded clinical study to report that VR simulation training transfers improved procedural skills to clinical performance on live patients for experienced interventionists. This study, for the first time, demonstrates that VR simulation offers a powerful, safe and effective platform for training interventional skills for highly experienced interventionists with the greatest impact on procedural error reduction.
Do you expect changes occurring in the way doctors will be trained in the near future?
Training in medicine is currently going through a paradigm shift. We hear from different parties, including, very recently, the Department of Health in the UK, that the first time a procedure is performed, it should not be performed on a patient. Why shouldn't we use technology like simulation-based training for learning purposes? Virtual reality (VR) simulation as an approach to skills training has been validated, and it is particularly suitable in disciplines like interventional cardiology, where the rate of change and evolution of cardiovascular devices is huge and the morbidity associated with some new procedures, such as TAVI or carotid stenting, remains high even for experienced operators.
How do you imagine VR being used on a routine basis?
A significant part of the new procedural training challenge will be helping experienced physicians acquire the appropriate skills to perform new procedures or learn to use new devices safely without putting patients at risk.
Simulation training must be more than simulated experience simply supplanting repeated in vivo practice. Quality VR simulation training affords the trainee the opportunity to engage in deliberate practice, making mistakes and receiving immediate 'proximate' feedback when a mistake is made.
Simulation should be defined as an artificially created or configured learning situation that allows for the practice or rehearsal of all or salient aspects of a procedure including the opportunity to enact both appropriate and inappropriate learner actions (ie, errors). The simulation should also afford the opportunity to perform the procedure in the same order and with the same devices with which the procedure would normally be performed.
That said, simulation-based training will never completely replace the in vivo clinical training experience. Rather, the function of simulation-based training (with the highest fidelity that is reasonably achievable) is to supplant the early part of the learning curve.
The implications of the study reported here are considerable for new procedural skill acquisition as well as for maintenance of skills and competency assessment in procedure-based medicine disciplines.