Why does this matter?
Take radical prostatectomy, a common cancer surgery performed for decades. One of its most significant quality-of-life outcomes, recovery of sexual function, remains unpredictable. Roughly half of patients recover function within a year; half do not. For years, surgeons could not reliably explain why.
Using AI models trained on gesture-level surgical data, Hung’s team can now predict recovery with approximately 85% accuracy. But prediction, he emphasized, is only the beginning.
The real breakthrough lies in identifying clusters of gestures associated with good outcomes. Not a single "fatal flaw," but patterns: subtle combinations of movements that collectively raise or lower the likelihood of recovery. In doing so, AI begins to illuminate what decades of clinical experience could not fully explain.
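The talk did not detail the team's actual pipeline, but the core idea, tallying how combinations of gestures co-occur with outcomes, can be sketched with a toy example. Everything below (the gesture codes, the cases, the outcomes) is invented for illustration, not real surgical data:

```python
from collections import defaultdict

# Toy data: each case is the set of gesture codes observed during a
# procedure, plus whether the patient recovered function. Illustrative only.
cases = [
    ({"retract", "cold_cut", "peel"}, True),
    ({"retract", "cold_cut", "peel"}, True),
    ({"retract", "hot_cut", "spread"}, False),
    ({"retract", "hot_cut", "spread"}, False),
    ({"retract", "cold_cut", "spread"}, True),
    ({"retract", "hot_cut", "peel"}, False),
]

# Tally recovery outcomes for every pair of gestures that co-occur.
pair_stats = defaultdict(lambda: [0, 0])  # pair -> [recovered, total]
for gestures, recovered in cases:
    for pair in {frozenset({a, b}) for a in gestures for b in gestures if a < b}:
        pair_stats[pair][1] += 1
        if recovered:
            pair_stats[pair][0] += 1

# Gesture combinations ranked by observed recovery rate.
ranked = sorted(pair_stats.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
for pair, (rec, total) in ranked:
    print(f"{'+'.join(sorted(pair))}: {rec}/{total} recovered")
```

Even in this miniature form, no single gesture explains the outcome; it is the combinations that separate high-recovery from low-recovery cases, which is the pattern-level insight the talk described.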
Augmented Reality: Giving Surgeons “X-Ray Vision”
Hung then shifted from post-operative analysis to real-time guidance. Before surgery, most patients undergo CT or MRI imaging. Yet once inside the body, surgeons must rely largely on surface anatomy and memory. Tumors, blood vessels, and critical structures are not always visible in the operative field.
Collaborators in Bordeaux have trained AI systems on tens of thousands of surgical images, teaching models to recognize instruments, anatomy, tumors, and vascular structures. The result is a digitally reconstructed “twin” of the patient’s anatomy.
In the operating room, this digital model can be overlaid onto the live surgical view, aligning preoperative imaging with real-time movement. The surgeon sees not just what is visible, but what lies beneath, effectively bringing augmented reality into robotic surgery.
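The overlay step depends on registering the preoperative model to the live view. As a hedged illustration (not the Bordeaux group's actual method), here is the classic closed-form 2D rigid registration between matched landmark points, the kind of alignment any such overlay pipeline must solve:

```python
import math

def rigid_align_2d(src, dst):
    """Least-squares rotation angle and translation mapping matched
    2D landmarks src onto dst (closed form)."""
    n = len(src)
    # Centroids of the two point sets.
    csx = sum(x for x, _ in src) / n; csy = sum(y for _, y in src) / n
    cdx = sum(x for x, _ in dst) / n; cdy = sum(y for _, y in dst) / n
    # Optimal rotation from summed cross/dot terms of centered points.
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy
        bx, by = dx - cdx, dy - cdy
        num += ax * by - ay * bx   # cross products
        den += ax * bx + ay * by   # dot products
    theta = math.atan2(num, den)
    # Translation that carries the rotated source centroid onto dst's.
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)

# Synthetic check: rotate a triangle by 30 degrees and shift it.
theta_true = math.radians(30)
c, s = math.cos(theta_true), math.sin(theta_true)
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
dst = [(c * x - s * y + 5.0, s * x + c * y - 2.0) for x, y in src]
theta, (tx, ty) = rigid_align_2d(src, dst)
print(round(math.degrees(theta), 3), round(tx, 3), round(ty, 3))
```

Real surgical registration is far harder (3D, deformable tissue, noisy landmarks), but the principle is the same: estimate the transform that maps preoperative coordinates into the live camera frame, then render the model through it.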
For decades, matching imaging to live surgery in real time was technically elusive. AI now makes this integration possible, enabling more precise dissection and potentially safer outcomes.
Autonomous Surgery: Closer Than We Thought
Perhaps the most inspiring part of Hung’s presentation came from work in fully autonomous surgical systems.
Unlike autonomous cars, which learn through trial-and-error reinforcement learning, these surgical robots use imitation learning. They observe expert surgeons, studying hand trajectories, instrument positioning, and workflow patterns, and then replicate those behaviors.
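Imitation learning in its simplest form is behavior cloning: log the expert's state-action pairs, then have the learner reproduce the expert's action in similar states. A minimal nearest-neighbor sketch, with toy states and symbolic actions rather than a real surgical controller:

```python
# Behavior cloning in miniature: the "robot" copies whichever expert
# action was recorded in the most similar state. States are toy
# (angle, distance) readings; actions are symbolic moves.
demonstrations = [
    ((0.0, 1.0), "advance"),
    ((0.0, 0.1), "grasp"),
    ((1.5, 1.0), "rotate"),
    ((1.5, 0.1), "grasp"),
    ((3.0, 1.0), "retract"),
]

def cloned_policy(state):
    """Return the expert action recorded for the nearest observed state."""
    def dist2(s):
        return (s[0] - state[0]) ** 2 + (s[1] - state[1]) ** 2
    _, action = min(demonstrations, key=lambda pair: dist2(pair[0]))
    return action

print(cloned_policy((0.1, 0.9)))   # near the first demonstration
print(cloned_policy((1.4, 0.2)))   # near the fourth
```

The systems Hung described use far richer models and thousands of demonstrations, but the contrast with reinforcement learning holds: nothing here is learned by trial and error; the policy is distilled directly from expert behavior.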
After being shown hundreds of examples, robots can now tie surgical knots autonomously with high reliability. In research published in Science Robotics, fully autonomous systems have even performed complete gallbladder removals, replicating human surgical steps with remarkable precision.
While these procedures remain controlled experimental demonstrations, they challenge long-held assumptions about the limits of automation in surgery.
Augmentation, Not Replacement
Despite these advances, Hung’s message was not one of replacement, but of partnership.
AI, he argued, is here to enhance robotic surgery: to assess quality at a level of granularity humans cannot achieve alone, to provide augmented-reality guidance, and to automate repetitive or technically demanding tasks. But clinical judgment (deciding whom to operate on, when, and how) remains fundamentally human.
In Hung’s framing, the future operating room is not surgeon versus machine. It is surgeon plus machine. As healthcare systems worldwide grapple with workforce constraints, rising complexity, and growing expectations for precision, surgical AI represents both a technical and philosophical shift. Data is no longer retrospective; it becomes actionable in the moment. Experience is no longer solely tacit; it becomes codified, measurable, and teachable.
The frontier is no longer whether AI belongs in the operating room.
It is how far, and how thoughtfully, we allow it to go.