This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/

This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TMRB.2021.3110676, IEEE Transactions on Medical Robotics and Bionics.

Modelling of Surgical Procedures using Statecharts for Semi-Autonomous Robotic Surgery

Fabio Falezza 1, Nicola Piccinelli 1, Giacomo De Rossi 1, Andrea Roberti 1, Gernot Kronreif 2, Francesco Setti 1, Paolo Fiorini 1 and Riccardo Muradore 1

Abstract—In this paper we propose a new methodology for modelling surgical procedures that is specifically tailored to semi-autonomous robotic surgery. We propose to use a restricted version of statecharts to merge the bottom-up approach, based on data-driven techniques (e.g. machine learning), with the top-down approach, based on knowledge representation techniques. Medical knowledge about the procedure and sensing of the environment are handled in two concurrent regions of the statechart to facilitate re-usability and adaptability of the modules. Our approach produces a well-defined procedural model that exploits the hierarchical structure of statecharts, while machine learning modules act as soft sensors that trigger state transitions. Integrating data-driven and prior-knowledge techniques provides a robust, modular, flexible and re-configurable methodology for defining a surgical procedure that is comprehensible by both humans and machines. We validate our approach on the three surgical phases of a Robot-Assisted Radical Prostatectomy (RARP) that directly involve the assistant surgeon: bladder mobilization, bladder neck transection, and vesicourethral anastomosis, all performed on synthetic manikins.
Index Terms—surgical robotics, statecharts, supervisory controller, autonomous robotics

I. INTRODUCTION

The research interest in Robotic-assisted Minimally Invasive Surgery (R-MIS) is shifting from teleoperated devices to the development of autonomous support systems for the execution of repetitive surgical steps, such as suturing, ablation and microscopic image scanning. A higher level of autonomy can potentially further improve the quality of an intervention in terms of patient safety and recovery time [1]. Moreover, it can optimize the use of operating rooms, reducing the surgeon's workload and therefore hospital costs. In general, autonomy requires systems with advanced capabilities in perception, reasoning, decision making [2], motion planning [3] and interaction with the physical environment. Nonetheless, for autonomous or semi-autonomous systems, Human-Robot Interaction (HRI) plays a key role in providing both safety of execution and a successful knowledge transfer between users and robots.

This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No. 742671 (ARS) and from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 779813 (SARAS).
1 Fabio Falezza, Nicola Piccinelli, Andrea Roberti, Giacomo De Rossi, Francesco Setti, Paolo Fiorini and Riccardo Muradore are with the Department of Computer Science, University of Verona, Strada le Grazie 15, 37134 Verona, Italy, name.surname@univr.it.
2 Gernot Kronreif is with ACMIT Gmbh, Wiener Neustadt, Austria, gernot.kronreif@acmit.at.
Contact author: Fabio Falezza, fabio.falezza@univr.it

Fig. 1. The proposed methodology from a perspective of knowledge integration (top-down and bottom-up approaches) and required technical skills. This approach is the one followed in the EU-funded SARAS project (www.saras-project.eu).
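To make the architecture outlined above concrete, the following Python fragment sketches a statechart with two orthogonal regions, one encoding procedural knowledge and one encoding sensing, where a machine-learning "soft sensor" (here a trivial stub) emits the events that trigger transitions. All names (the phase sequence, the event labels, the `soft_sensor` function) are illustrative assumptions for this sketch, not the actual SARAS implementation:

```python
class Region:
    """One orthogonal region: a flat FSM with event-driven transitions."""
    def __init__(self, name, initial, transitions):
        self.name = name
        self.state = initial
        self.transitions = transitions  # {(state, event): next_state}

    def dispatch(self, event):
        key = (self.state, event)
        if key in self.transitions:
            self.state = self.transitions[key]


class Statechart:
    """A set of concurrent (orthogonal) regions sharing an event stream."""
    def __init__(self, regions):
        self.regions = regions

    def dispatch(self, event):
        # Statechart semantics: broadcast each event to every region.
        for region in self.regions:
            region.dispatch(event)


def soft_sensor(observation):
    """Stub for an ML classifier acting as a soft sensor: it maps raw
    observations to discrete events; a real system would run a trained
    model here."""
    return "phase_complete" if observation["tool_idle"] else None


# Procedural region: a hypothetical RARP-like phase sequence.
procedure = Region(
    "procedure", "bladder_mobilization",
    {("bladder_mobilization", "phase_complete"): "bladder_neck_transection",
     ("bladder_neck_transection", "phase_complete"): "anastomosis"})

# Sensing region: stays active and keeps observing across phases.
sensing = Region("sensing", "observing",
                 {("observing", "phase_complete"): "observing"})

chart = Statechart([procedure, sensing])

# Feed a short stream of observations through the soft sensor.
for observation in [{"tool_idle": False}, {"tool_idle": True}]:
    event = soft_sensor(observation)
    if event is not None:
        chart.dispatch(event)

print(procedure.state)  # -> bladder_neck_transection
```

The design choice this illustrates is the decoupling emphasized in the paper: the procedural region can be rewritten for a different intervention while the sensing region and its soft sensors are reused unchanged, and vice versa.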
Two different approaches can be adopted to model the medical knowledge of surgeons: a top-down and a bottom-up approach.

The top-down approach is based on encoding prior knowledge into a formal representation understandable by both humans and machines. Several formalisms have been proposed, such as description logic [4], formal ontologies [5], and defeasible reasoning [6]. Statecharts are a graphical specification formalism that allows the nesting of Finite State Machines (FSM hierarchy), their orthogonality (FSM parallelism) and the re-usability of components [7], [8]. The major advantage of FSMs is that they can be formally verified [9] and can therefore be guaranteed to operate according to their design. For this reason, FSMs are widely employed in the representation of mission-critical workflows, as is the case for surgical procedures. The representational power of statecharts has been exploited to build a discrete-event simulation model of the pre-operative process [10]. The progress of individual patients through surgical care is described as a series of asynchronous updates to patients' records; these updates are triggered by events produced by parallel FSMs that represent concurrent clinical and managerial activities.

The bottom-up approach tries to infer a model from raw data through data analysis techniques, such as deep learning, possibly in an unsupervised, end-to-end manner to speed up the process and to avoid labeling bias [11].

In this work, we adopt a safer approach that follows the engineering stack guidelines, in which the formulation of the top-down model is adapted to events based on the available observations of both the environment and the robots [12]. This improves both