Open access peer-reviewed chapter - ONLINE FIRST

Role of Machine and Deep Learning in the Surgical Domain

Written By

Dharmendra Kumar Pipal, Rajendra Kumar Pipal, Vibha Rani Pipal, Prakash Biswas, Vikram Vardhan, Seema Yadav and Himanshu Jatoliya

Submitted: 24 December 2023 Reviewed: 06 May 2024 Published: 17 June 2024

DOI: 10.5772/intechopen.115071


From the Edited Volume

Artificial Intelligence in Medicine and Surgery - An Exploration of Current Trends, Potential Opportunities, and Evolving Threats - Volume 2 [Working Title]

Dr. Stanislaw P. Stawicki


Abstract

In recent times, the application of artificial intelligence (AI) has become increasingly prevalent across various industries, driven by advances in learning techniques such as deep learning and by substantial gains in computational processing speed. In medicine, AI is increasingly used for tasks such as medical image recognition and the analysis of genomic and other omics data. More recently, notable progress has been made in AI applications for videos of minimally invasive surgery, prompting a surge of research exploring and enhancing these applications. The studies selected for this review cover a range of topics: identification of organs and anatomy, instrument identification, recognition of procedures and surgical phases, prediction of surgery time, identification of an appropriate incision line, and surgical education.

Keywords

  • deep learning
  • artificial neural networks
  • support vector machines
  • neurons
  • machine learning
  • frameworks

“Anything that could give rise to smarter-than-human intelligence—in the form of Artificial Intelligence, brain-computer interfaces, or neuroscience-based human intelligence enhancement—wins hands down beyond contest as doing the most to change the world. Nothing else is even in the same league.”

—Eliezer Yudkowsky


1. Introduction

Machine learning (ML) and artificial intelligence (AI) are poised to revolutionise healthcare by enhancing patient access, surgical training, clinical outcomes, and disease diagnostics. These technologies can assess surgeons’ technical proficiency, track their physical activity, eye movements, and cognitive function, detect instrument motion, and identify patterns in video recordings, making them valuable tools in the surgical domain. The field of AI studies algorithms that enable machines to carry out cognitive tasks, including problem-solving, object recognition, and decision-making. Initially considered science fiction, AI has gained popularity and academic interest through practical applications such as IBM’s Watson and Tesla’s Autopilot [1]. Improved robotic surgical training is another benefit of these modalities: a recording and playback system developed for the da Vinci Standard Surgical System gives trainees tactical feedback to improve their operating precision [2]. Faster and better diagnosis and treatment are possible because ML and AI can correctly detect and classify patterns in diagnostic images and diseased tissues. Artificial neural networks in telemedicine allow the remote provision of healthcare by evaluating symptoms, test findings, medical imaging, and examination data to estimate the likelihood of a diagnosis or prognosis [2].

The advent of the Information Age has brought about a significant transformation in workflow and productivity, akin to the impact of the Industrial Revolution. This transformation has had positive implications for the field of surgery. However, excessive promotion and exaggerated claims commonly associated with emerging technologies often overshadow the genuine potential of artificial intelligence (AI). The use of AI and ML technologies in the surgical field faces limitations due to the large data sets required for algorithm programming and the risk of misclassifying data points that deviate from the system’s regular trends.

Machine learning is the process by which computers acquire knowledge from extensive datasets and autonomously construct algorithms and models to carry out tasks such as classification and planning. Phase recognition is a technique that identifies operative steps and phases, and ML aids the analysis and interpretation of the large data volumes crucial for workflow optimisation, surgical training, intraoperative assistance, patient safety, and efficiency [3]. ML-based surgical phase recognition can be highly accurate, depending on the model, data type, and surgical complexity, although manual expert annotation is still required for training. Future ML models may promote standardisation, objectivity, and efficiency in the surgical workflow and thereby enhance patient outcomes. A wide variety of AI technologies (Table 1) are being used by healthcare organisations, including machine learning (ML), computer vision (CV), reinforcement learning (RL), fuzzy logic, robotics, and cybernetics. Using algorithms and pattern recognition, ML can analyse real-world data and produce an estimate [4, 5, 6]. By gaining experience and accumulating additional data, the machine’s algorithm (whether supervised or unsupervised) may learn to differentiate between datasets and provide more precise predictions [5]. ML is already used in a variety of domains, including imaging-based diagnosis, disease severity assessment, and health policy and planning [7].
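To make the supervised-learning idea above concrete, the sketch below implements a minimal k-nearest-neighbour classifier in plain Python. The two-dimensional feature vectors and the "benign"/"malignant" labels are invented purely for illustration, not drawn from any study in this review.

```python
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest labelled examples."""
    # train: list of (feature_vector, label) pairs
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# Toy labelled data: two clusters of 2-D feature vectors (hypothetical values).
train = [((0.0, 0.1), "benign"), ((0.2, 0.0), "benign"), ((0.1, 0.2), "benign"),
         ((1.0, 1.1), "malignant"), ((0.9, 1.0), "malignant"), ((1.1, 0.9), "malignant")]

print(knn_predict(train, (0.1, 0.1)))   # -> benign
print(knn_predict(train, (1.0, 1.0)))   # -> malignant
```

As more labelled pairs are added to `train`, the decision boundary sharpens, which is the sense in which such an algorithm "improves with additional data".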

| Type | Mechanism of working | Usefulness in a real surgical scenario |
| --- | --- | --- |
| Machine learning (ML) | A subset of artificial intelligence that automates analytical modelling and improves results without explicit programming | Making predictions, assessing accuracy levels, and analysing different treatment alternatives and outcomes |
| Artificial neural networks (ANN) | Creates an environment similar to a “biological neuronal synapse” through a network of interconnected computer processors that process data | Subset of ML |
| Reinforcement learning (RL) | Uses trial and error to find a solution that maximises a cumulative reward | Subset of ML |
| Deep learning (DL) | Emulates the “human brain” to progressively extract higher-level characteristics from unprocessed information, with minimal or no human supervision | Subset of ML |
| Natural language processing (NLP) | Enables machines to interpret and transform human language to extract facts and insights | Enhances the efficiency of manual evaluation, particularly in records systems and in identifying quality deficiencies |
| Computer vision (CV) | Emulates the “human visual system” to extract relevant information from digital photos, videos, and other visual stimuli | Obtaining and analysing visual images and videos to support image-guided and virtual surgery |
| Fuzzy logic | Handles ambiguous and imprecise information by assigning degrees of likelihood instead of a clear yes-or-no answer | Facilitates decision-making and the evaluation of healthcare practitioners’ performance |
| Robotics and cybernetics | Transdisciplinary approach to building automatic control systems | Provides training, navigation, and minimally invasive assistance to healthcare providers |

Table 1.

Various categories and uses of artificial intelligence in the field of surgery.

An artificial neural network (ANN) is an advanced ML technique [8]: a network of linked computer processors known as “neurons”, which analyses incoming data and produces an output, much like the human brain [5]. Before passing data to the next level, each node can receive input and retain a small amount of information about it; consequently, knowledge of the information is progressively enriched from one layer to the next [9]. Coupling several layers of neurons to one another contributed to the creation of DL [10]. This subfield of machine learning comprises many intricate algorithms capable of determining the essential characteristics of a model without human intervention [6]. Speech recognition and object identification are two instances of deep learning [11]. To help medical practitioners reach a diagnosis and treatment plan more quickly, it has been suggested that DL could enhance the objectivity of diagnosis in pathology; this has been demonstrated in the identification of prostate cancer and the detection of lymph node metastases in breast cancer patients [12].
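The layer-by-layer processing described here can be illustrated with a minimal forward pass through a fully connected network in NumPy. The layer sizes and random weights below are arbitrary stand-ins; a real network would learn its weights from data.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def forward(x, weights):
    """Pass an input vector through successive layers of 'neurons'.

    Each layer receives the previous layer's output, applies a weighted
    sum and a non-linearity, and hands the result on, so the representation
    is progressively transformed from one layer to the next."""
    for W in weights[:-1]:
        x = relu(W @ x)
    return weights[-1] @ x          # final linear layer (e.g. class scores)

# Three layers with made-up sizes: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs.
weights = [rng.standard_normal((8, 4)),
           rng.standard_normal((8, 8)),
           rng.standard_normal((2, 8))]

scores = forward(rng.standard_normal(4), weights)
print(scores.shape)  # -> (2,)
```

"Deep" learning simply stacks many such layers, which is what gives the model its expressive capacity.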

Further advancements have resulted from applying various branches of AI. For a wide variety of surgical procedures, intraoperative video may be evaluated using CV, which can assist in the accurate identification of surgical phases (steps) and instruments; under some conditions, CV can even detect surgical phases with the same precision as surgeons [13]. The use of natural language processing (NLP) to extract surgical outcomes, especially postoperative complications, from electronic health records is rising across disciplines, and NLP may be preferable to conventional non-NLP techniques in some respects when ruling out the documentation of surgical outcomes [14]. Reinforcement learning has the potential to aid surgical decision-making by proposing actions at specified intervals, since it replicates the human trial-and-error learning process to compute optimal recommendation policies [15]. Fuzzy logic inference, a sensitive approach that applies the concept of contingent probability and detects intermediate logical values, has been used to predict which patients will not improve with surgical intervention [16]. Additionally, cybernetics and robotic surgery are transdisciplinary fields with the potential to enhance the ability to perform certain surgical operations safely [17].


2. Methodology

We performed a comprehensive literature search using Google™ Scholar and PubMed for this narrative review. The primary search term was “surgery,” combined with “artificial intelligence,” “machine learning,” “technology,” and “subspecialty.” Surgical specialisations including laparoscopic surgery, orthopaedics, gynaecological procedures, anaesthesia, and vascular surgery were also included. Pairing the term “surgery” with each keyword in various combinations returned more than 585,000 records. Screening prioritised English full-text sources only, and Letters to the Editor and Brief Communications were excluded, yielding approximately 142,000 secondary results. We then concentrated on original studies and reviews with at least three citations in Google Scholar, which reduced the selection to fewer than 1000 items. A further, tertiary round of filtering identified 96 relevant articles. Secondary materials were added after a thorough assessment of the 70 most relevant articles and their bibliographies. The final reference list contains 38 citations obtained through this method (Figure 1).

Figure 1.

Methodology.


3. Deep learning applications in surgery

Deep learning is a powerful tool used in image recognition, face recognition, and automated driving. It enhances the capabilities of robots by sensing their surroundings and determining actions. AI is also used in the medical field, where images such as X-ray, CT, and ultrasound support disease diagnosis, and patient prognosis can be predicted from medical records, demonstrating its potential across fields from robotics to medicine. The advancement of deep learning has transformed AI: neural networks are mathematical models resembling brain processes, and deep learning is a machine learning technique that stacks numerous layers of neural networks to improve expressiveness and learning capacity [18, 19]. Surgical domains are rapidly incorporating deep learning (DL); its substantial capacity for data-driven problem-solving has produced computational progress across many fields, with medicine and surgery as prominent examples. The multi-layer architecture of connected neural networks in DL makes it possible to recognise patterns and extract features from vast quantities of complex data. DL is being utilised in various surgical specialties to enhance intraoperative performance and preoperative planning, providing surgeons with innovative methods to improve patient outcomes and safety [20].

3.1 Analysis of laparoscopic surgery video using deep learning

Recognising surgical features is a crucial aspect of developing autonomous surgical robots. Deep learning has been utilised in several research areas to improve laparoscopic and robot-assisted surgical procedures [21, 22]. The following sections elaborate on several points:

  • The proper identification of organs and instruments

  • The recognition of procedure and surgical phases

  • The safe navigation of surgical procedures

  • Surgical Education

3.2 Organ and instrument identification

Prior research has described the use of deep learning in the interpretation of laparoscopic images, especially for organ and structure identification. Madad Zadeh et al. [22] manually tagged the uterus and ovaries in 461 gynaecological laparoscopic videos. Using the deep-learning technique Mask Regional Convolutional Neural Network (Mask R-CNN), the uterus, ovaries, and surgical instruments were automatically segmented, and segmentation accuracy was assessed as the overlap percentage between the manually delineated regions and the Mask R-CNN output. The uterus had an accuracy score of 54.5%, the ovary 29.6%, and the surgical instruments 84.5%. Advances in robotics and imaging have significantly expanded robot-assisted surgery, enabling surgeons to perform precise, minimally invasive procedures using vision, haptics, and robot arm motions. Real-time semantic segmentation of robotic tools and tissues is crucial for accurate and efficient surgical scene understanding. Islam et al. [23] developed a lightweight cascaded convolutional neural network to distinguish surgical tools in high-resolution videos from a commercial robotic system. They introduced a multi-resolution feature fusion module and a novel approach to regularising the segmentation model by combining auxiliary and adversarial losses; the model also includes a lightweight spatial pyramid pooling unit for collecting rich contextual data. It outperforms pixel-wise segmentation techniques in both high-resolution video prediction accuracy and segmentation time.
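The overlap-based evaluation used in these segmentation studies can be sketched with two standard mask metrics, intersection-over-union (IoU) and the Dice coefficient. The 4×4 toy masks below are invented for illustration and are far smaller than real laparoscopic frames.

```python
import numpy as np

def overlap_metrics(pred, truth):
    """Intersection-over-union and Dice coefficient for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    total = pred.sum() + truth.sum()
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return iou, dice

# Toy 4x4 masks: stand-ins for a predicted and a hand-annotated organ region.
truth = np.zeros((4, 4), int); truth[1:3, 1:3] = 1   # 4 annotated pixels
pred  = np.zeros((4, 4), int); pred[1:3, 1:4] = 1    # 6 predicted, 4 shared

iou, dice = overlap_metrics(pred, truth)
print(round(iou, 3), round(dice, 3))   # -> 0.667 0.8
```

A perfect segmentation scores 1.0 on both metrics; the per-organ accuracy percentages quoted above are overlap scores of exactly this kind.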

Mascagni et al. [24] developed a deep-learning algorithm to assess safety criteria for laparoscopic cholecystectomy (LC) and to segment the gallbladder and liver autonomously in laparoscopic images. The algorithm assesses the critical view of safety (CVS), which requires three criteria for safe LC: the hepatocystic (Calot’s) triangle is cleared, the lower part of the gallbladder is separated from the cystic plate, and only two structures, the cystic duct and cystic artery, are seen entering the gallbladder. The study used 2854 annotated photos and 402 segmented images from 201 LC recordings. A deep neural network segmentation model was developed, with an average accuracy of 71.9% in predicting CVS criterion achievement; this model has the potential to improve future assessment and ensure safety during LC procedures. In their study, Padovan et al. [25] employed deep learning on laparoscopic surgery videos to generate precise 3D models, superimposing the acquired data on real organ images to verify accuracy; their approach achieved an accuracy rate exceeding 80% across all conducted tests. The study by Koo et al. [26] aims to create an augmented reality system for liver surgery that streamlines the registration of a 3D preoperative CT model with 2D laparoscopic images, which should enhance surgeons’ ability to identify internal anatomy during the procedure. The system automates the generation of a three-dimensional model as well as the detection and delineation of the hepatic limbus in computed tomography (CT) images and two-dimensional laparoscopic video. The integration of laparoscopic video and 3D CT imaging therefore holds promise for preoperative simulation.

The area of laparoscopy requires a serious commitment to education and training, and learning can be enhanced with surgical streaming videos. By automatically identifying surgical equipment in every video frame, the LapTool-Net technology drastically cuts the time and money spent on analysis. The study by Namazi et al. [27] introduced LapTool-Net, an AI model developed for deep-learning analysis of laparoscopic surgical videos. With agreement rates of 80% or higher, their model correctly identifies various surgical instruments in each frame of a laparoscopic video, making the analysis procedure considerably more precise and accurate. Trained on publicly available laparoscopic cholecystectomy datasets, the LapTool-Net model outperformed expectations on the Cholec80 and M2CAI16 benchmarks: the online mode achieved an accuracy of 80.95% and the offline mode 81.84%, with F1 scores of 88.29% and 90.53%, respectively. Despite a smaller training dataset and a less sophisticated architecture, LapTool-Net outperformed existing approaches, demonstrating the value of its context-aware design; unlike other approaches, it does not need expert knowledge and can improve upon current methods. Yamazaki et al. [28] retrospectively studied 100 video recordings of infrapyloric and 33 of D2 suprapancreatic lymphadenectomy during laparoscopic gastrectomy for gastric cancer. The system’s accuracy was assessed by comparing automated and manual use times and the surgical-instrument usage patterns of qualified and non-qualified surgeons. The research demonstrated similar automated and manual detection times for diverse devices. In infrapyloric lymphadenectomy, non-qualified operators used dissector forceps and clip appliers longer than qualified operators, while in suprapancreatic lymphadenectomy the usage-time proportions of energy devices, clip appliers, and grasper forceps differed, with only grasper forceps showing a larger usage-time percentage [28]. A study by Aspart et al. [29] developed a new computer-assisted surgery technique for laparoscopic cholecystectomy that uses real-time feedback to improve instrument visibility during surgery. The researchers presented a dataset of 300 laparoscopic cholecystectomy videos annotated with clipper-tip visibility and introduced ClipAssistNet, a neural-network image classifier that identifies clipper-tip visibility in individual frames. The model classifies the visibility of the clipper tip within an image with an area under the receiver operating characteristic curve (AUROC) of 0.9107, a specificity of 66.15%, and a sensitivity of 95%. The technique can run real-time inference at 16 frames per second (FPS) on an embedded computing device, making it suitable for operating room environments.
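Operating points such as ClipAssistNet's reported 95% sensitivity at 66% specificity follow from the choice of decision threshold applied to the classifier's per-frame scores. A minimal sketch of that calculation, with invented scores and labels, is:

```python
def sensitivity_specificity(scores, labels, threshold):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)
    of a score-based binary classifier at a given decision threshold."""
    tp = sum(s >= threshold and y == 1 for s, y in zip(scores, labels))
    fn = sum(s < threshold and y == 1 for s, y in zip(scores, labels))
    tn = sum(s < threshold and y == 0 for s, y in zip(scores, labels))
    fp = sum(s >= threshold and y == 0 for s, y in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)

# Invented per-frame visibility scores and ground-truth labels (1 = tip visible).
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0]

sens, spec = sensitivity_specificity(scores, labels, threshold=0.35)
print(sens, spec)   # a low threshold trades specificity for high sensitivity
```

Sweeping the threshold over all score values and plotting sensitivity against (1 − specificity) traces the ROC curve whose area is the AUROC quoted above.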

3.3 The recognition of procedure and surgical phases

Four medical centres contributed 163 laparoscopic cholecystectomy videos to the study conducted by Cheng et al. [30]. After training on 90 surgical videos, a deep-learning model was tested on 10 further videos with the task of identifying phases; accuracy, precision, recall, and F1 score were the evaluation metrics. The model then analysed 63 additional videos to identify distinct stages, maintaining its excellent overall accuracy. Laparoscopic cholecystectomy phase recognition achieved an overall accuracy of 91.05% and a mean concordance correlation coefficient of 92.38% with surgeon annotations across the whole operation. There was large variation among individuals, with an average operation time of 2195 ± 896 seconds, and the hepatocystic triangle mobilisation phase took longer in patients with acute cholecystitis. With further improvement, the deep-learning model correctly recognised laparoscopic cholecystectomy stages using data from many sites, suggesting that AI can analyse large numbers of surgeries for clinically useful purposes. Dividing the surgical process into phases also eases the collection, storage, and organisation of intraoperative video data; organising data manually, however, is tedious. To automate recognition of the surgical processes of a transanal total mesorectal excision, Kitaguchi et al. [31] created a deep-learning model that achieved an overall accuracy of 93.2% [31]. Using manually annotated data, Kitaguchi et al. also trained a CNN-based deep-learning model to recognise the steps and procedures of a laparoscopic sigmoid colon resection, identifying the 11 distinct surgical procedures with a 91.9% success rate [32].
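The evaluation metrics named above (accuracy, precision, recall, and F1 score) can be computed directly from frame-wise phase labels, as in the sketch below; the six-frame sequence and phase names are hypothetical.

```python
def phase_metrics(true_phases, pred_phases, phase):
    """Precision, recall, and F1 for one surgical phase, from per-frame labels."""
    tp = sum(t == phase and p == phase for t, p in zip(true_phases, pred_phases))
    fp = sum(t != phase and p == phase for t, p in zip(true_phases, pred_phases))
    fn = sum(t == phase and p != phase for t, p in zip(true_phases, pred_phases))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical frame-wise annotations for a short clip.
true_phases = ["prep", "prep", "dissection", "dissection", "dissection", "closure"]
pred_phases = ["prep", "dissection", "dissection", "dissection", "dissection", "closure"]

accuracy = sum(t == p for t, p in zip(true_phases, pred_phases)) / len(true_phases)
print(accuracy, phase_metrics(true_phases, pred_phases, "dissection"))
```

In phase-recognition studies these per-phase scores are typically averaged over all phases and videos to give the single figures reported above.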

Endometriosis is a gynaecological condition that affects 6–15% of women of reproductive age; one manifestation is deep infiltrating endometriosis of the gastrointestinal tract. As a last resort, surgical therapy, which often entails resection, may be required for these patients. Because the success of the anastomosis depends on the blood perfusion at the site, the fluorochrome indocyanine green (ICG) is injected to make the surgical site fluoresce green. The study conducted by Hernández et al. [33] proposes deep-learning techniques to assess the level of blood perfusion at the anastomosis. Using a U-Net-based deep-learning system, models were created to autonomously segment the intestine in surgical recordings; the segmented video frames were then used to quantify blood perfusion. After characterising frames using textures and nine first- and second-order statistics, two trials were conducted: the first evaluated the perfusion changes between the two anastomosis sections, and the second showed that texture features could capture ICG fluctuation. The best segmentation model achieved 0.92 accuracy and a 0.96 Dice coefficient. The authors concluded that U-Net segmentation of the bowel was effective and that textures are suitable descriptors for blood perfusion in ICG images, which may help clinicians anticipate postoperative complications and act on them.

Efficient operating room (OR) planning depends on accurate estimates of operation length, which are critical for patient safety and comfort and for the most efficient use of available resources. It is, however, difficult to predict surgery duration before the procedure, since it varies greatly with the patient’s condition, the surgeon’s skill, and the intraoperative scenario. Remaining Surgery Duration Net (RSDNet), a deep-learning pipeline by Twinanda et al. [34], uses laparoscopic recordings to predict remaining intraoperative surgery time, overcoming the limitations of manual annotation and the high cost and time of traditional RSD prediction methods. RSDNet is a scalable approach that requires no manual annotation during training, making it applicable to various surgeries; its generalizability was demonstrated on large datasets of 120 cholecystectomy and 170 gastric bypass videos. The proposed network outperforms a traditional method of estimating RSD without manual annotation.

3.4 The safe navigation of surgical procedures

Depending on the surgeon’s level of experience, one of the most challenging and crucial aspects of surgery is making the incision in a safe region. This has led to several reports detailing efforts to create AI-powered navigation systems for risk-free incisions.

Igaki et al. [35] followed 32 patients who underwent laparoscopic left-sided colorectal resection between 2015 and 2019 at the National Cancer Centre Hospital East in Chiba, Japan. Areolar tissue in the total mesorectal excision plane was analysed using deep learning-based semantic segmentation, with the model trained on intraoperative images captured from left colorectal resection videos and accuracy measured with the Dice coefficient. The semantic segmentation model was used to better visualise the areolar tissue in the total mesorectal excision plane. The amount and quality of the training data determine the model’s accuracy and generalisation performance; more images are required for better recognition accuracy. Using this areolar tissue segmentation method, the authors developed a highly precise image-guided navigation system that should improve identification of the total mesorectal excision plane for dissection. Artificial intelligence (AI) is anticipated to augment the cognitive abilities and expertise of surgeons by predicting anatomical features in the surgical field. In a study conducted by Kumazu et al. [36], the goal was an automated method for segmenting loose connective tissue fibres (LCTFs) that delineate a safe dissection plane. Medical professionals annotated video frames from robot-assisted gastrectomy surgery, and segmentation was produced with a deep-learning model built on U-Net. The model was evaluated against ground truth using 20 randomly sampled frames and a two-item questionnaire: it showed high recall scores and acceptable spatial overlap, and surgeon evaluators gave a mean sensitivity score of 3.52 out of 4 and a mean misrecognition score of 0.14, indicating few acknowledged over-detection failures. The authors concluded that deep-learning algorithms can predict anatomical details in intraoperative recordings, benefiting surgeons by reducing adverse events and aiding real-time decision-making. As more sophisticated image segmentation algorithms become available, they will further enhance performance and safety in surgery.

3.5 Surgical education

Artificial intelligence (AI) has the potential to increase patient safety during surgical operations, and its use would have a substantial positive impact on all aspects of the field, including training and education. Deep-learning models have been used to forecast proficiency acquisition rates in robot-assisted surgery (RAS), giving surgical programme directors information on students’ intrinsic aptitude and enabling flexible, personalised instruction. Moglia et al. [37] compared ensemble deep neural network (DNN) models with other ensemble AI methods, such as random forests and gradient-boosted regression trees (GBRT), after training 176 medical students to complete five tasks on a virtual simulator for RAS. They found that DNN models predicted time to proficiency better than random forests and GBRT, with accuracy rates of 0.84 versus 0.77, and that ensemble DNN models also beat random forests and GBRT in estimating the number of attempts to proficiency, with accuracy rates of 0.87 versus 0.83. These findings illustrate the efficacy of DNN models in predicting proficiency, as well as the need for further study and refinement; ensemble DNN models may detect early those trainees who struggle to achieve the expected surgical technical skill [37]. Because high levels of stress impair a surgeon’s performance, Zheng et al. [38] developed a deep-learning model that can identify in real time whether a surgeon is experiencing strain while operating; the findings hint at stress-sensitive approaches that might be used in the future to manage stress during robotic-assisted procedures. Their study included 30 medical students, half assigned to a control group and half to a stress group, with the stress group performing the modified peg transfer task under deteriorating conditions. A long short-term memory (LSTM) recurrent neural network with an attention mechanism was created to distinguish between normal and stressful trials, and keyframes from each trial were used for frame-wise classification of normal and stressed movements. The work detects stress-affected motions and confirms the use of LSTM networks and kinematic data for frame-wise stress detection during laparoscopic training; the proposed classifier could be integrated into robot-assisted surgery platforms for stress management.
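As a rough illustration of the kind of recurrent model Zheng et al. describe, the sketch below runs a single NumPy LSTM cell over a synthetic sequence of per-frame kinematic features. The feature count, hidden size, and random weights are arbitrary assumptions; a trained, attention-based classifier would of course be far more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step: gates computed from input x and previous hidden state h."""
    z = W @ x + U @ h + b                                    # stacked pre-activations
    n = h.size
    i, f, o = (sigmoid(z[k * n:(k + 1) * n]) for k in range(3))  # input/forget/output gates
    g = np.tanh(z[3 * n:])                                   # candidate cell update
    c = f * c + i * g                                        # new cell state
    h = o * np.tanh(c)                                       # new hidden state
    return h, c

n_in, n_hid = 6, 8                              # e.g. 6 kinematic features per frame
W = rng.standard_normal((4 * n_hid, n_in)) * 0.1
U = rng.standard_normal((4 * n_hid, n_hid)) * 0.1
b = np.zeros(4 * n_hid)

h = c = np.zeros(n_hid)
for frame in rng.standard_normal((20, n_in)):   # a 20-frame synthetic trial
    h, c = lstm_step(frame, h, c, W, U, b)

stress_prob = sigmoid(h.sum())                  # stand-in frame-wise classifier head
print(h.shape, 0.0 < stress_prob < 1.0)
```

The cell state `c` is what lets the model carry context across frames, which is why LSTMs suit frame-wise classification of movement sequences.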


4. Role of machine learning and artificial intelligence in anaesthesia

Six themes of AI applications in anaesthesiology were found and outlined in a review of the field’s crossover with anaesthesia research [39]:

  1. Electroencephalography (EEG) and the bispectral index (BIS) are used to monitor the depth of anaesthesia.

  2. Control of anaesthesia for the purpose of administering neuromuscular blockade, anaesthesia, or other relevant medications.

  3. Predicting events and risks, such as complications, duration of stay, consciousness, etc.

  4. Ultrasound guidance for ultrasound-based procedures, where neural networks were the technique most often used for ultrasound image classification.

  5. Pain management, including a nociception level index based on machine-learning analysis of photoplethysmogram and skin conductance waveforms, opioid dosing prediction, and identification of patients who might benefit from preoperative consultation with a hospital’s acute pain service; and

  6. Operating room logistics, including planning the OR schedule and monitoring the movements and actions of anaesthesiologists [39].

The review identified these themes by searching all 173 English-language publications in the Medline, Embase, Web of Science, and IEEE Xplore databases between 1946 and 30 September 2018, using combinations of the keywords machine learning, artificial intelligence, neural networks, anesthesiology, and anaesthesia. Several topics within AI were described and summarised, including machine learning (supervised, unsupervised, and reinforcement learning), AI techniques (e.g., classical machine learning, neural networks and deep learning, Bayesian methods), and the major applied fields of AI. To control the infusion rate of propofol in a simulated patient model, Padmanabhan et al. [40] created an anaesthesia controller employing reinforcement learning with feedback from the patient’s bispectral index (BIS) and mean arterial pressure (MAP) (Table 2).

Phase / Utilisation

Before the surgery
  - Risk classification for outcome prediction, with an impact on the anaesthetic method during preoperative assessment
  - Accurate classification of patients into low-, medium-, and high-risk categories (broadly corresponding with ASA grade)

During the surgery
  - Identification of spinal landmarks with automated ultrasonography (USG) in neuraxial blockade
  - Enhanced first-pass spinal success and interpretation of spinal USG by employing an advanced image-processing method to recognise spinal landmarks
  - Prediction of post-induction/intraoperative hypotension using a machine learning-derived early warning system to shorten the duration and depth of intraoperative hypotension
  - A machine learning system that forecasts post-intubation hypoxia as accurately as medical specialists
  - A deep learning model to monitor and manage the degree of sedation or hypnosis and anticipate the bispectral index (BIS) response during target-controlled infusion (TCI) of propofol and remifentanil

After the surgery
  - A deep learning model that uses perioperative data and patient variables to predict postoperative in-hospital mortality and estimate 30-day mortality
  - Machine learning-based analgesic response prediction and automated classification of pain states (high and low) from EEG data

Table 2.

Possible uses of artificial intelligence in anaesthetic practice, based on examples of recent research [41].
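The closed-loop principle behind a reinforcement-learning anaesthesia controller, such as that of Padmanabhan et al. [40], can be caricatured in a few lines. The following is a toy sketch only, assuming an invented first-order BIS response to the infusion rate and an invented state discretisation and action set; it is not the controller or patient model of the cited study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy closed-loop sketch only -- NOT the controller of the cited study.
# Tabular Q-learning picks an infusion-rate adjustment from the current
# (discretised) BIS value; a crude first-order model stands in for the patient.
TARGET = 50.0
actions = np.array([-1.0, 0.0, 1.0])      # change in propofol infusion rate
N_STATES = 11
Q = np.zeros((N_STATES, len(actions)))

def bin_of(bis):
    # Map BIS values 25..75 onto 11 bins, clipping values outside that range.
    return int(np.clip((bis - 25.0) / 5.0, 0, N_STATES - 1))

def step(bis, rate, noise=0.0):
    # Invented pharmacodynamic stand-in: BIS relaxes towards 90 - 4*rate.
    return bis + 0.3 * ((90.0 - 4.0 * rate) - bis) + noise

alpha, gamma, eps = 0.2, 0.9, 0.1
for episode in range(300):
    bis, rate = 90.0, 0.0                 # awake patient, no infusion
    s = bin_of(bis)
    for _ in range(100):
        a = rng.integers(3) if rng.random() < eps else int(Q[s].argmax())
        rate = max(0.0, rate + actions[a])
        bis = step(bis, rate, rng.normal(0.0, 0.5))
        s2 = bin_of(bis)
        # Reward penalises distance from the target depth of anaesthesia.
        Q[s, a] += alpha * (-abs(bis - TARGET) + gamma * Q[s2].max() - Q[s, a])
        s = s2

# Greedy, noise-free rollout with the learned policy.
bis, rate = 90.0, 0.0
for _ in range(200):
    rate = max(0.0, rate + actions[int(Q[bin_of(bis)].argmax())])
    bis = step(bis, rate)
print(f"BIS under learned policy: {bis:.1f} (target {TARGET})")
```

A tabular method is used purely for brevity; real closed-loop anaesthesia research employs validated pharmacokinetic/pharmacodynamic models and extensive safety constraints.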


5. Artificial intelligence and machine learning’s place in regional anaesthesia

Since 2000, the introduction of ultrasound guidance has been a notable advancement in regional anaesthesia. Ultrasound guidance has improved outcomes in regional anaesthesia, although there is as yet no evidence that nerve damage has become less common. A joint committee of the European Society of Regional Anaesthesia and Pain Therapy (ESRA Europe) and the American Society of Regional Anaesthesia and Pain Medicine (ASRA Pain) developed the following four recommendations for teaching and training in ultrasound-guided regional anaesthesia [42]:

  1. Comprehending the functioning of the instrument

  2. Optimising the image

  3. Interpreting the image by identifying and analysing the anatomy using USG

  4. Observing needle insertion and drug injection, by maintaining needle visualisation, orienting the needle and probe correctly, and keeping an ideal anatomical view while advancing the needle towards the target. In an external validation study, artificial intelligence models [43] correctly recognised the structure of interest in 93.5% of cases (1519/1624), with a false-positive rate of 3.5% (57/1624) and a false-negative rate of 3.0% (48/1624).
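As a quick arithmetic check (a sketch assuming, as the figures suggest, that all rates are computed over the same 1624 scans), the reported percentages follow directly from the case counts:

```python
# Case counts reported in the external validation study [43].
correct, false_pos, false_neg = 1519, 57, 48
total = correct + false_pos + false_neg           # 1624 scans

print(total)                                      # 1624
print(round(100 * correct / total, 1))            # 93.5 (% correctly identified)
print(round(100 * false_pos / total, 1))          # 3.5 (% false positives)
print(round(100 * false_neg / total, 1))          # 3.0 (% false negatives)
```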


6. Management of paediatric airways by artificial intelligence and machine learning

Machine learning can predict difficult airways in adults, but there are insufficient data for children. Because children frequently resist physical examination, accurate airway assessment is hampered, making the difficult paediatric airway hard to identify and predict [44].

Physical examination of the airway remains the primary method anaesthesiologists use to determine whether intubation will be difficult. Pre-anaesthesia assessment of the paediatric airway can, however, employ 2- or 3-dimensional facial analysis to anticipate a difficult airway in children. As cutting-edge instruments, artificial intelligence and machine learning can assist in paediatric airway management; during video laryngoscopy, artificial intelligence can provide clinicians with real-time clinical decision support.


7. Artificial intelligence in operating room (OR) management

The operating room (OR) is the centre of healthcare, and providing excellent surgical treatment depends on effective management of OR resources, staffing, and equipment. A recent systematic review of 22 studies published between February 2019 and September 2023 examined the use of AI, especially machine learning, in OR management [45]. AI and ML have influenced post-anaesthesia care unit (PACU) resource allocation, prediction of surgical case durations in perioperative medicine, and detection of surgical case cancellations. ML methods such as XGBoost, random forests, and neural networks can significantly increase prediction accuracy and resource utilisation; however, issues such as data access and privacy concerns still need to be seriously addressed. For AI integration in OR administration to improve healthcare efficiency and patient outcomes, its transformational potential for administrators, practitioners, and patients must be continually realised. Machine learning has shown effectiveness in OR tasks such as predicting surgery length, optimising schedules, and optimising resources; algorithms such as decision trees and random forests have redefined OR efficiency, promising more precise forecasts and proactive decision-making. The most recent machine learning advances in perioperative medicine, OR productivity, and patient care are encouraging, and good communication and collaboration amongst data scientists, engineers, and clinicians remain critical. Modular artificial neural networks (MANNs) have been used to predict the remaining duration of surgery [46].

For certain clinical applications, MANNs are found to be useful for their external memory, which excels at tasks requiring context and sequential reasoning. Anaesthesia records from a diverse range of surgical populations and hospital types confirmed the robustness and adaptability of the model, which consistently outperformed Bayesian statistical approaches, particularly during the last quartile of surgery, suggesting potential cost savings and operational efficiency improvements [45]. The study also evaluated the generalisability and transferability of the MANN model, indicating that fine-tuning a model trained at larger nearby health systems may be beneficial even for healthcare systems with smaller operating volumes. The anaesthetic record’s lack of important information during specific surgical stages remains an area for improvement. To overcome current obstacles and fully realise the benefits of AI for healthcare administrators, practitioners, and above all patients, further research, collaboration, and innovation are urgently needed. The use of AI in OR management is anticipated to improve patient outcomes and healthcare delivery.
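To make the duration-prediction task concrete, the following sketch fits a deliberately simple model (ordinary least squares on synthetic data, invented for illustration) to estimate the remaining case duration from elapsed and scheduled time; it is not the MANN of the cited study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration only -- not the modular artificial neural network
# (MANN) of the cited study. Each row is a snapshot of an ongoing case:
#   [elapsed_minutes, scheduled_minutes] -> remaining_minutes
n = 500
scheduled = rng.uniform(60, 240, n)            # booked case length
total = scheduled * rng.uniform(0.8, 1.3, n)   # actual length drifts from the booking
elapsed = total * rng.uniform(0.1, 0.9, n)     # time already spent in theatre
X = np.column_stack([np.ones(n), elapsed, scheduled])
y = total - elapsed                            # minutes still to go

# Ordinary least squares stands in for the learned predictor.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef

baseline = np.abs(y - y.mean()).mean()         # error of always predicting the mean
mae = np.abs(pred - y).mean()
print(f"baseline MAE {baseline:.1f} min -> model MAE {mae:.1f} min")
```

Even this naive regression beats always predicting the mean remaining time; the MANN of the cited study goes much further by updating its estimate continuously as intraoperative data stream in.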


8. The future of intensive care unit (ICU)

Hemodynamic management aims at the individualisation of macrovascular and microvascular parameters [47]. To achieve an additional level of individualisation in hemodynamic resuscitation at the organ and tissue levels, clinicians must understand how resuscitation affects the microcirculation of the organs. Automated analysis, and the integration of AI into analysis tools, removes observer bias and provides information on microvascular-targeted therapy options. Integrating microcirculation monitoring into hemodynamic resuscitation can therefore play a crucial role in minimising organ dysfunction and improving the outcome of critically ill patients, while fostering carers’ trust.


9. Artificial intelligence for prevention of intraoperative anaphylaxis

Artificial intelligence has been utilised for many years to diagnose and treat illness, and it may also help prevent intraoperative anaphylaxis caused by drugs and other medical compounds. Various pharmaceutical agents are used during procedures involving surgery and anaesthesia (both local and general). Most of the time they are used safely and without incident, but some patients experience adverse effects ranging from minor to fatal. AI could be a helpful tool for estimating the risk of anaphylaxis before a medical procedure. Adults are more likely than children to experience perioperative events during anaesthesia induction, including anaphylaxis. These reactions are categorised into six classes (ABCDEF):

  1. A—Augmented, due to dosage

  2. B—Bizarre, not related to dosage

  3. C—Chronic, related to time and dose

  4. D—Delayed, or related to time

  5. E—End of use, meaning withdrawal, and

  6. F—Failed, meaning failure of treatment

Reactions can be immune-mediated (IgE- or, at times, IgG-mediated) or non-immune in nature.

Any unexpected and undesirable side effect of a medication that appears at dosages used for treatment, diagnosis, or prevention is defined as an adverse drug reaction (ADR). There are two categories of adverse drug reactions (ADRs): predictable and unpredictable. Both healthy persons and those with genetic variability experience predictable reactions, which are dose-dependent and linked to the medication’s established pharmacologic effects. They are classified as side effects, secondary effects, overdose, and drug interactions, and they make up around 80% of all ADRs. Unpredictable reactions are mostly observed in vulnerable people and are unrelated to the drug’s dosage or pharmacologic effects.

They are separated into three categories: drug allergy, non-allergic reactions that arise from the direct release of mediators from mast cells and basophils, and drug intolerance, which is an unpleasant pharmacologic consequence that happens at low or sub-therapeutic levels.

The advancements in surgical and anaesthetic techniques have been thoroughly examined in the last several decades. To carry out safe and effective processes, a variety of drugs are used. ADRs, including potentially fatal ones, can happen to a small percentage of patients. This needs to be constantly remembered by the surgical team, and the assessment and resolution of the issue requires the involvement of a multidisciplinary team that includes an allergist, anaesthesiologist, and surgeon. To prevent these kinds of incidents, an AI programme may hold the key to a more accurate assessment of each patient’s risk of anaphylaxis [48].

10. Automatic pain assessment (APA) using artificial intelligence

Pain assessment can be complicated by factors such as an individual’s capacity to tolerate discomfort, behavioural components, emotional factors, and lifestyle. Professional pain assessment is needed to counteract the limitations of self-reported pain levels, and data-driven AI techniques are essential for APA research. AI-based techniques for detecting pain include: (1) behavioural-based techniques that use body language, facial expressions, and linguistic analysis; and (2) neurophysiology-based techniques that use physiological responses as non-verbal physical indications of pain.

These techniques aid in the creation of standardised, objective, and broadly applicable tools for the assessment of pain in various clinical settings [49].

11. Applications of artificial intelligence in perioperative medicine: the Perioperative Intelligence

Even though intraoperative mortality has decreased dramatically over the past few decades, postoperative morbidity remains significant, and surgical care is expensive overall. New technologies such as big data, artificial intelligence, and machine learning are essential for safe and appropriate perioperative care. AI offers a foundational framework for cooperative efforts to produce Perioperative Intelligence that is safe, efficient, and reasonably priced.

The areas that require attention are:

  1. Identifying patients who are at risk

  2. Identifying issues early, and

  3. Providing prompt and efficient treatment [50].

12. Anaesthesia and perioperative medicine over the next 25 years: artificial intelligence, data analysis and telemedicine

At the start of the twentieth century, studies indicated that knowledge doubled every century; by the end of World War II it was doubling every 25 years, and in some sectors the doubling time can now be expressed in months or even days. Perioperative knowledge, on the other hand, comprises a variety of elements, including experience, wisdom, research, and the art or craft of our work, all of which anaesthesiologists need to deliver their principal role: superlative perioperative care, in collaboration with other specialists and allied health professionals [51].

The development and implementation of artificial intelligence (AI), together with its subfields of machine learning and deep learning, has transformed medicine by enabling rapid data analysis, helping with preoperative assessment and risk stratification, and recognising deteriorating patients. With complex sensors, closed-loop feedback systems can monitor neuromuscular blockade, detect nociception, and adjust the depth of anaesthesia. Similar technology helps regulate plasma glucose during nutritional support and blood pressure during neuraxial anaesthesia for caesarean section, and could soon be adapted to further perioperative contexts. AI is capable of identifying subtle structural anomalies in radiography and histology, and it can be used to identify normal structures in echocardiography and ultrasound-guided regional anaesthesia (UGRA).

Trainees can be telementored to learn new skills through alternate-reality learning systems such as virtual and, more recently, augmented reality. The widespread deployment of robots has changed the delivery of major surgical procedures and also holds promise in anaesthesia. For instance, automated endotracheal intubation (ETI) uses a video-endoscopic approach to advance a tracheal tube, positioned over the endoscope’s shaft, into the trachea; the flexing of the endoscope tip required to approach the glottic opening can be controlled robotically or manually.

Currently, intraoperative care is managed by, and contingent upon, the watchfulness of an anaesthesiologist present in the operating room. It is probable, however, that certain tasks related to drug administration, blood pressure regulation, depth of anaesthesia, and neuromuscular blockade will come to be managed directly by computers. Furthermore, the development of portable, rapid blood or breath tests for propofol concentration may benefit patient monitoring; such tests may even replace or supplement processed electroencephalographic analysis. High-quality, thorough documentation of prior anaesthetics (such as video laryngoscopic images) may be produced by the electronic patient record (EPR) and used by other hospitals for training and research purposes.

The theoretical and practical skills of anaesthetists will always be in high demand in wards, pain clinics, intensive care units, and operating rooms. However, new skills must be learned to take advantage of new medications and equipment. Anaesthetists will also likely face different appraisal standards, take separate examinations, and collaborate more with non-medical staff. The future remains unknown, but the path the speciality is currently taking, attracting top-notch junior colleagues, should reassure us all that we will be prepared for the next 25 years.

13. Limitations and ethical implications of artificial intelligence

Artificial intelligence may eventually cause disillusionment due to unrealistic expectations from both medical and non-medical people. For physicians, patients, and regulators alike, various AI technologies are bringing about a revolution in healthcare.

It is important to understand that artificial intelligence-based technologies may or may not produce better classification, prediction, or final products than existing methodologies; AI should be applied prudently, to address a pertinent issue or provide a suitable response. An additional difficulty with artificial intelligence, especially with neural networks, is that these techniques may produce “black box” results: although an algorithm may make a prediction for a researcher or clinician, it cannot explain how or why the prediction was made.

To enhance the transparency of algorithms, additional research and ongoing enhancement of the current tools are required.

14. Conclusion

Due to their intricacy, laparoscopic procedures call for specific education and assessment. Streaming video analysis during operations can make surgical education even more valuable, and automated tool identification systems can greatly lower the cost and increase the efficiency of this analysis. Research studies on the recognition of surgical videos are available, with an emphasis on surgical education, surgical instrument recognition, and organ recognition. Finding the resection site is the most important problem. It is premature to conclude, though, that the existing AI models are not accurate enough for clinical use.

The major objective is to use the AI model’s navigational capabilities to help surgeons carry out surgical procedures efficiently and safely.

Effective use of publicly available dynamic image databases and well-established software applications enhances the final result and diagnostic accuracy.

Deep learning is an effective AI technique for automated driving tasks, facial recognition, and image recognition. It improves robots’ abilities to perceive their environment and decide how to act. AI is helpful for ultrasound, CT, and X-ray imaging. Deep learning has shown promise in robotics and medicine by predicting patient prognosis from medical information. Deep learning breakthroughs have transformed artificial intelligence, combining neural networks that mimic brain functions to enhance expressiveness and learning capability.

The application of DL to the surgical domain may become a well-established model in the medical and surgical fields. The multi-layer architecture of connected neural networks in deep learning facilitates pattern identification and feature extraction from complex data.

15. Future directions

Algorithms using artificial intelligence have not yet surpassed human performance and most likely will not in the future. Still, AI will be a useful tool for clinicians because of its capacity to correlate data and identify patterns that are invisible to the human intellect.

References

  1. Hashimoto DA, Rosman G, Rus D, Meireles OR. Artificial intelligence in surgery: Promises and perils. Annals of Surgery. 2018;268(1):70-76. DOI: 10.1097/SLA.0000000000002693
  2. Egert M, Steward JE, Sundaram CP. Machine learning and artificial intelligence in surgical fields. Indian Journal of Surgical Oncology. 2020;11(4):573-577. DOI: 10.1007/s13193-020-01166-8. Epub 2020 Jul 15
  3. Garrow CR, Kowalewski K-F, Li L, Wagner M, Schmidt MW, Engelhardt S, et al. Machine learning for surgical phase recognition: A systematic review. Annals of Surgery. 2021;273(4):684-693. DOI: 10.1097/SLA.0000000000004425
  4. Malhotra K, Wong BNX, Lee S, Franco H, Singh C, Cabrera Silva LA, et al. Role of artificial intelligence in global surgery: A review of opportunities and challenges. Cureus. 2023;15(8):e43192. DOI: 10.7759/cureus.43192
  5. Bini SA. Artificial intelligence, machine learning, deep learning, and cognitive computing: What do these terms mean and how will they impact health care? The Journal of Arthroplasty. 2018;33(8):2358-2361. DOI: 10.1016/j.arth.2018.02.067. Epub 2018 Feb 27
  6. Beam AL, Kohane IS. Big data and machine learning in health care. Journal of the American Medical Association. 2018;319:1317-1318
  7. Schwalbe N, Wahl B. Artificial intelligence and the future of global health. Lancet. 2020;395:1579-1586
  8. Steimann F. On the use and usefulness of fuzzy sets in medical AI. Artificial Intelligence in Medicine. 2001;21:131-137
  9. Hopfield JJ. Artificial neural networks. IEEE Circuits and Devices Magazine. 1988;4:3-10
  10. Karnuta JM, Navarro SM, Haeberle HS, et al. Predicting inpatient payments prior to lower extremity arthroplasty using deep learning: Which model architecture is best? The Journal of Arthroplasty. 2019;34:2235-2241
  11. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436-444
  12. Litjens G, Sánchez CI, Timofeeva N, et al. Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis. Scientific Reports. 2016;6:26286
  13. Ward TM, Mascagni P, Ban Y, Rosman G, Padoy N, Meireles O, et al. Computer vision in surgery. Surgery. 2021;169:1253-1256
  14. Mellia JA, Basta MN, Toyoda Y, et al. Natural language processing in surgery: A systematic review and meta-analysis. Annals of Surgery. 2021;273:900-908
  15. Datta S, Li Y, Ruppert MM, et al. Reinforcement learning in surgery. Surgery. 2021;170:329-332
  16. Shamim MS, Enam SA, Qidwai U. Fuzzy logic in neurosurgery: Predicting poor outcomes after lumbar disk surgery in 501 consecutive patients. Surgical Neurology. 2009;72:565-572
  17. Pessaux P, Diana M, Soler L, Piardi T, Mutter D, Marescaux J. Towards cybernetic surgery: Robotic and augmented reality-assisted liver segmentectomy. Langenbeck's Archives of Surgery. 2015;400:381-385
  18. Nilsson NJ. Artificial Intelligence: A New Synthesis. Burlington, MA: Morgan Kaufmann; 1998
  19. Shinde PP, Shah S. A review of machine learning and deep learning applications. In: 2018 4th International Conference on Computing Communication Control and Automation (ICCUBEA), Pune, India. 2018. pp. 1-6. DOI: 10.1109/ICCUBEA.2018.8697857
  20. Morris MX, Rajesh A, Asaad M, Hassan A, Saadoun R, Butler CE. Deep learning applications in surgery: Current uses and future directions. The American Surgeon. 2023;89(1):36-42. DOI: 10.1177/00031348221101490
  21. Moglia A, Georgiou K, Georgiou E, Satava RM, Cuschieri A. A systematic review on artificial intelligence in robot-assisted surgery. International Journal of Surgery. 2021;95:106151
  22. Madad Zadeh S, Francois T, Calvet L, Chauvet P, Canis M, Bartoli A, et al. SurgAI: Deep learning for computerized laparoscopic image understanding in gynaecology. Surgical Endoscopy. 2020;34:5377-5383
  23. Islam M, Atputharuban DA, Ramesh R, Ren H. Real-time instrument segmentation in robotic surgery using auxiliary supervised deep adversarial learning. IEEE Robotics and Automation Letters. 2019;4(2):2188-2195. DOI: 10.1109/LRA.2019.2900854
  24. Mascagni P, Vardazaryan A, Alapatt D, Urade T, Emre T, Fiorillo C, et al. Artificial intelligence for surgical safety: Automatic assessment of the critical view of safety in laparoscopic cholecystectomy using deep learning. Annals of Surgery. 2022;275:955-961
  25. Padovan E, Marullo G, Tanzi L, Piazzolla P, Moos S, Porpiglia F, et al. A deep learning framework for real-time 3D model registration in robot-assisted laparoscopic surgery. The International Journal of Medical Robotics and Computer Assisted Surgery. 2022;18:e2387
  26. Koo B, Robu MR, Allam M, Pfeiffer M, Thompson S, Gurusamy K, et al. Automatic, global registration in laparoscopic liver surgery. International Journal of Computer Assisted Radiology and Surgery. 2022;17:167-176
  27. Namazi B, Sankaranarayanan G, Devarajan V. A contextual detector of surgical tools in laparoscopic videos using deep learning. Surgical Endoscopy. 2022;36(1):679-688. DOI: 10.1007/s00464-021-08336-x. Epub 2021 Feb 8
  28. Yamazaki Y, Kanaji S, Kudo T, Takiguchi G, Urakawa N, Hasegawa H, et al. Quantitative comparison of surgical device usage in laparoscopic gastrectomy between surgeons' skill levels: An automated analysis using a neural network. Journal of Gastrointestinal Surgery. 2022;26:1006-1014
  29. Aspart F, Bolmgren JL, Lavanchy JL, Beldi G, Woods MS, Padoy N, et al. ClipAssistNet: Bringing real-time safety feedback to operating rooms. International Journal of Computer Assisted Radiology and Surgery. 2022;17(1):5-13. DOI: 10.1007/s11548-021-02441-x. Epub 2021 Jul 23
  30. Cheng K, You J, Wu S, Chen Z, Zhou Z, Guan J, et al. Artificial intelligence-based automated laparoscopic cholecystectomy surgical phase recognition and analysis. Surgical Endoscopy. 2022;36(5):3160-3168. DOI: 10.1007/s00464-021-08619-3. Epub 2021 Jul 6
  31. Kitaguchi D, Takeshita N, Matsuzaki H, Hasegawa H, Igaki T, Oda T, et al. Deep learning-based automatic surgical step recognition in intraoperative videos for transanal total mesorectal excision. Surgical Endoscopy. 2022;36:1143-1151
  32. Kitaguchi D, Takeshita N, Matsuzaki H, Takano H, Owada Y, Enomoto T, et al. Real-time automatic surgical phase recognition in laparoscopic sigmoidectomy using the convolutional neural network-based deep learning approach. Surgical Endoscopy. 2020;34:4924-4931
  33. Hernández A, de Zulueta PR, Spagnolo E, Soguero C, Cristobal I, Pascual I, et al. Deep learning to measure the intensity of indocyanine green in endometriosis surgeries with intestinal resection. Journal of Personalized Medicine. 2022;12(6):982. DOI: 10.3390/jpm12060982
  34. Twinanda AP, Yengera G, Mutter D, Marescaux J, Padoy N. RSDNet: Learning to predict remaining surgery duration from laparoscopic videos without manual annotations. IEEE Transactions on Medical Imaging. 2019;38(4):1069-1078. DOI: 10.1109/TMI.2018.2878055. Epub 2018 Oct 25
  35. Igaki T, Kitaguchi D, Kojima S, Hasegawa H, Takeshita N, Mori K, et al. Artificial intelligence-based total mesorectal excision plane navigation in laparoscopic colorectal surgery. Diseases of the Colon and Rectum. 2022;65(5):e329-e333. DOI: 10.1097/DCR.0000000000002393
  36. Kumazu Y, Kobayashi N, Kitamura N, Rayan E, Neculoiu P, Misumi T, et al. Automated segmentation by deep learning of loose connective tissue fibers to define safe dissection planes in robot-assisted gastrectomy. Scientific Reports. 2021;11(1):21198. DOI: 10.1038/s41598-021-00557-3
  37. Moglia A, Morelli L, D'Ischia R, Fatucchi LM, Pucci V, Berchiolli R, et al. Ensemble deep learning for the prediction of proficiency at a virtual simulator for robot-assisted surgery. Surgical Endoscopy. 2022;36(9):6473-6479. DOI: 10.1007/s00464-021-08999-6. Epub 2022 Jan 12
  38. Zheng Y, Leonard G, Zeh H, Fey AM. Frame-wise detection of surgeon stress levels during laparoscopic training using kinematic data. International Journal of Computer Assisted Radiology and Surgery. 2022;17(4):785-794. DOI: 10.1007/s11548-022-02568-5. Epub 2022 Feb 12
  39. Hashimoto DA, Witkowski E, Gao L, Meireles O, Rosman G. Artificial intelligence in anesthesiology: Current techniques, clinical applications, and limitations. Anesthesiology. 2020;132(2):379-394. DOI: 10.1097/ALN.0000000000002960
  40. Padmanabhan R, Meskin N, Haddad WM. Closed-loop control of anesthesia and mean arterial pressure using reinforcement learning. Biomedical Signal Processing and Control. 2015;22:54-64
  41. Bowness J, El-Boghdadly K, Burckett-St Laurent D. Artificial intelligence for image interpretation in ultrasound-guided regional anaesthesia. Anaesthesia. 2021;76(5):602-607. DOI: 10.1111/anae.15212. Epub 2020 Jul 29
  42. Sites BD, Chan VW, Neal JM, et al. The American Society of Regional Anesthesia and Pain Medicine and the European Society of Regional Anaesthesia and Pain Therapy Joint Committee recommendations for education and training in ultrasound-guided regional anesthesia. Regional Anesthesia and Pain Medicine. 2009;34:40-46
  43. Bowness JS, Burckett-St Laurent D, Hernandez N, Keane PA, Lobo C, Margetts S, et al. Assistive artificial intelligence for ultrasound image interpretation in regional anaesthesia: An external validation study. British Journal of Anaesthesia. 2023;130(2):217-225. DOI: 10.1016/j.bja.2022.06.031. Epub 2022 Aug 18
  44. Matava C, Pankiv E, Ahumada L, Weingarten B, Simpao A. Artificial intelligence, machine learning and the pediatric airway. Paediatric Anaesthesia. 2020;30(3):264-268. DOI: 10.1111/pan.13792. Epub 2020 Jan 2
  45. Bellini V, Russo M, Domenichetti T, Panizzi M, Allai S, Bignami EG. Artificial intelligence in operating room management. Journal of Medical Systems. 2024;48(1):19. DOI: 10.1007/s10916-024-02038-2
  46. Jiao Y, Xue B, Lu C, Avidan MS, Kannampallil T. Continuous real-time prediction of surgical case duration using a modular artificial neural network. British Journal of Anaesthesia. 2022;128(5):829-837. DOI: 10.1016/j.bja.2021.12.039. Epub 2022 Jan 26
  47. Duranteau J, De Backer D, Donadello K, Shapiro NI, Hutchings SD, Rovas A, et al. The future of intensive care: The study of the microcirculation will help to guide our therapies. Critical Care. 2023;27(1):190. DOI: 10.1186/s13054-023-04474-x
  48. Dumitru M, Berghi ON, Taciuc IA, Vrinceanu D, Manole F, Costache A. Could artificial intelligence prevent intraoperative anaphylaxis? Reference review and proof of concept. Medicina (Kaunas, Lithuania). 2022;58(11):1530. DOI: 10.3390/medicina58111530
  49. Cascella M, Schiavo D, Cuomo A, Ottaiano A, Perri F, Patrone R, et al. Artificial intelligence for automatic pain assessment: Research methods and perspectives. Pain Research & Management. 2023;2023:6018736. DOI: 10.1155/2023/6018736
  50. Maheshwari K, Ruetzler K, Saugel B. Perioperative intelligence: Applications of artificial intelligence in perioperative medicine. Journal of Clinical Monitoring and Computing. 2020;34(4):625-628. DOI: 10.1007/s10877-019-00379-9. Epub 2019 Aug 29
  51. Fawcett WJ, Klein AA. Anaesthesia and peri-operative medicine over the next 25 years. Anaesthesia. 2021;76(10):1416-1420. DOI: 10.1111/anae.15552. Epub 2021 Aug 1
