Open access peer-reviewed chapter - ONLINE FIRST

Applications of Artificial Intelligence in Military Medicine and Surgery

Written By

Nathaniel Meyer, Lauryn Ullrich, Zachary Goldsmith, Daniel Paul Verges, Thomas J. Papadimos and Stanislaw P. Stawicki

Submitted: 19 April 2024 Reviewed: 29 May 2024 Published: 16 July 2024

DOI: 10.5772/intechopen.115144


From the Edited Volume

Artificial Intelligence in Medicine and Surgery - An Exploration of Current Trends, Potential Opportunities, and Evolving Threats - Volume 2 [Working Title]

Dr. Stanislaw P. Stawicki


Abstract

Artificial intelligence (AI) is rapidly being incorporated into many facets of medicine and surgery. This includes novel approaches utilizing machine learning (ML) in the management of injury, hemodynamic shock, and a range of military/battlefield/triage applications. In general, military-based medical systems are functionally similar to civilian equivalents domestically, especially when it comes to peacetime operations. Although there are also some similarities between military medicine and surgery during active engagements and high-volume penetrating trauma centers at surge capacity, the intensity and severity of injury are almost universally greater in the military-conflict setting. Given significant developments in the area of AI/ML in general, and in the prehospital setting in particular, benefits derived from existing AI/ML research and implementations should be translatable to the military setting (and vice versa). This chapter will address various niche medical and surgical needs applicable to both peacetime and active combat scenarios within the general sphere of military medicine and surgery. We will focus on various innovative and creative solutions and implementations utilizing a scoping literature review approach to evaluate the current state of AI/ML technology applications relevant to battlefield and battlefield-adjacent medical scenarios. We will also attempt to identify research gaps and possible avenues of moving forward.

Keywords

  • artificial intelligence
  • defense applications
  • military
  • triage
  • medic
  • burn care
  • traumatic brain injury
  • trauma
  • critical care

1. Introduction

Among emerging technologies, artificial intelligence (AI) and machine learning (ML) promise to be among the most transformational and revolutionary, perhaps equal in impact to the introduction of the Internet [1, 2, 3]. Artificial intelligence applications in medicine are increasingly recognized and have now been studied for more than a decade [4]. While many related tools are still under development, several have reached the pre-production or production stages, including prospective studies or randomized trials, across various fields, including endocrinology, radiology, pathology, gastroenterology, and ophthalmology [5, 6, 7, 8, 9]. A plethora of other proposed applications for AI in medicine are still in various stages of development or in-silico verification. Collectively, the domain of AI/ML applications is very complex and involves a highly diverse matrix of interrelated concepts (Figure 1).

Figure 1.

Word mosaic demonstrating both the diversity and the complexity of AI/ML-based applications in military medicine and surgery.

The medical branches of the United States (U.S.) Armed Forces have extensive healthcare systems that are in many ways similar to, but in certain ways very different from, their civilian counterparts [10, 11, 12, 13, 14]. The differences are more pronounced between the U.S. and overseas facilities. In the U.S., brick-and-mortar hospitals deliver comprehensive, all-encompassing care across many specialties for military service members and their families. At the same time, the system of care overseas is quite different. More specifically, overseas medical centers tend to be focused on different roles, with each location characterized by a complex system of logistics and treatment tactics that work together, at times in inhospitable environments [15]. In brief, this system has been designed to provide soldiers with a care continuum from battlefield to tertiary referral hospital, featuring highly trained specialists with a wide variety of skill sets and with wide-ranging resource availability.

While the scope of military medical services includes all specialties—and therefore many potential uses of AI in medicine—this paper aims to summarize emerging AI-driven medical technologies especially pertinent to battlefield and wartime use. Emerging AI technology applications for prehospital care, battlefield trauma management/triage, and surgery/critical care decision tools/aids will be discussed, including opportunities for future research.


2. Methods

The overall aim of this manuscript is to provide a scoping review of AI’s current and potential future medical applications in the area of battlefield and battlefield-adjacent scenarios. The search strategies employed are not intended to be exhaustive of all topics of active research or all available evidence within each category of research. Rather, sources were selected to assess applications with relevant technological advancements; to categorize AI-driven medical technologies into broader, more general categories; to highlight a few specific technologies with practical prototypes; and to identify gaps in current research that warrant further exploration.

Google Scholar™ was selected as the primary search engine due to its wide variety of academic sources and its inclusion of many open-access military-related publications. During the preliminary literature sourcing phase, potential articles for inclusion, published between 2012 and 2023, were identified using Boolean operators such as (AI OR “artificial intelligence” OR “machine learning” OR “neural network”) AND (Army OR Navy OR “Air Force” OR DoD OR “Department of Defense”) AND Medicine. Abstracts were screened, and suitable complete manuscripts were then reviewed manually to ensure relevance to the overall topic and the current review. More frequently cited articles were given preference based on the expected correlation between citations and the scientific impact/quality of published research. Articles published >15 years ago were included in the initial search, with the intent to review them alongside more recent articles, primarily to identify either sentinel publications in the field or areas of research that have either evolved significantly or stagnated.

Numerous sources identified during the review of citations from initial sources were explored further and incorporated into the final draft. From the initial cycle of reviews, categories of AI implementations within the sphere of civilian-military development involving AI/ML and related technologies, specifically with relevance to the military sphere, were established and then further explored. Additional rounds of searches by category were conducted using similar Boolean operators as above. “Medicine” was replaced with, for example, “Burn” OR “Thermal injury”, to identify articles specific to the topic of burn care. Sources specific to each category of AI applications were reviewed to examine various technologies, both currently in use and under development.

Original articles describing clinical trials or internal validation studies were also identified, either by the use of the search terms listed above or from works cited sections of primary source papers. Abstracts were reviewed individually to evaluate relevance to the topic area. Articles published in military sources were given high priority, articles published in civilian sources about military technology were given moderate priority, and research without current military relevance of implementation(s) with some pertinence to future military adoption was included as well, although at lower overall priority.


3. Pre-hospital care

Field medics play a crucial role in the care of combat casualties [16]. The time between injury and arrival at a definitive role of care can vary widely depending on the combination of the setting, the evacuation chain, and various other factors specific to the overall scenario [17, 18]. Instances of prolonged field care may be further complicated by the available medical supply chain capacity and the scarcity of care-related items in the inherently austere battlefield environment [19]. The use of embedded AI within various tools for field medics is being explored and may prove useful in patient assessment, triage, and care interventions [20, 21].

Early detection of traumatic hemorrhage and hemorrhagic shock via AI tools has now been investigated for some time by both civilian and military groups [22, 23, 24, 25, 26, 27]. The use of new “derivative” vital signs, including various proxies for compensatory reserve, has been studied as potential inputs into machine learning (ML) algorithms used to predict hemorrhage [28]. Earlier work shows that acute physiological states can correlate with subtle and early changes in vital signs, glycemic status, and various other biomedical parameters [29, 30, 31, 32]. Additional variables, such as heart rate complexity and beat-to-beat heart rate variability, have also shown promise as potential ML algorithm inputs for the identification of hemorrhage [33]. These tools have been studied with the intent for eventual incorporation into en route field care, allowing personnel to modify fluid resuscitation (and other pertinent interventions) more appropriately until definitive care is reached.
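The beat-to-beat variability inputs described above can be made concrete with a minimal sketch. This example derives RMSSD (a standard beat-to-beat heart rate variability measure) and mean heart rate from RR intervals, then applies a toy decision rule standing in for a trained classifier; the cutoff values are illustrative assumptions, not clinically validated parameters.

```python
import math
from statistics import mean

def hrv_features(rr_intervals_ms):
    """Derive two simple candidate ML inputs from beat-to-beat RR
    intervals (ms): RMSSD (root mean square of successive differences)
    and mean heart rate. Reduced variability alongside a rising heart
    rate is one pattern studied for early hemorrhage detection."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    rmssd = math.sqrt(mean(d * d for d in diffs))
    mean_hr = 60000.0 / mean(rr_intervals_ms)  # beats per minute
    return {"rmssd_ms": rmssd, "mean_hr_bpm": mean_hr}

def flag_possible_hemorrhage(features, rmssd_cutoff_ms=15.0, hr_cutoff_bpm=100.0):
    """Toy rule standing in for a trained ML classifier; the cutoffs
    are invented for illustration only."""
    return (features["rmssd_ms"] < rmssd_cutoff_ms
            and features["mean_hr_bpm"] > hr_cutoff_bpm)
```

In a fielded system, such handcrafted features would be only part of a larger input vector processed by a validated model rather than a fixed threshold.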

Early identification of shock secondary to hemorrhage is associated with improved patient outcomes, especially in battlefield conditions [26, 34, 35, 36, 37, 38, 39, 40]. One ongoing project involving private-military collaboration is the Trauma Triage Treatment and Decision Support, also known as the 4TDS system, which uses a wearable sensor to measure vital signs that are then processed via an ML algorithm on a smartphone to identify impending shock. The algorithm is capable of predicting shock as early as 90 minutes before the onset of overt manifestations, with 75% overall accuracy [41, 42]. This and other similar tools could empower medics to more quickly and accurately initiate fluid resuscitation in the field and ideally prevent hypovolemic shock in suitable patients. Further and ongoing validation of any resultant clinical tools will be required to help optimize relevant clinical input parameters, workflows, and patient outcomes.
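The general shape of such a wearable-to-smartphone early-warning loop can be sketched as follows. This example tracks the shock index (heart rate divided by systolic blood pressure) over a rolling window and alarms when the window average crosses a threshold; the actual 4TDS system uses a trained ML model, and the window size and 0.9 threshold here are illustrative assumptions only.

```python
from collections import deque

class ShockTrendMonitor:
    """Minimal sketch of a streaming early-warning loop: maintain a
    rolling window of shock-index values (HR / SBP) from a wearable
    sensor and alarm when the window average crosses a threshold."""

    def __init__(self, window=5, threshold=0.9):
        self.values = deque(maxlen=window)
        self.threshold = threshold

    def update(self, hr_bpm, sbp_mmhg):
        """Ingest one vitals sample; return True if the alarm fires."""
        self.values.append(hr_bpm / sbp_mmhg)
        return sum(self.values) / len(self.values) >= self.threshold
```

A trained model would replace the fixed threshold with a classifier fit to outcome-labeled vitals streams, but the ingest-window-decide loop is the same.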

Another example of a specialized device developed with potential battlefield applicability is an AI-assisted pneumothorax/hemothorax detection system. The device consists of acoustic sensors placed on the chest, designed specifically to obtain data helpful in the identification and classification of injury [43]. Testing in porcine models has demonstrated proof-of-concept viability of this paradigm. If further development is successful, this tool could be introduced into clinical practice, aiding medics in identifying pneumothorax/hemothorax in the field and allowing for early needle decompression in the critically wounded [43].

Triage of injured patients has traditionally been a cumbersome step in emergency scenarios, especially those involving mass casualties [44, 45]. Deep neural network systems relying on wearable sensors have recently been validated in the fully automated triage of trauma patients [46]. These systems attain significantly greater accuracy when hybrid data inputs are used, i.e., when basic information about the level of consciousness, as assessed by nursing checks in a hospital environment, is added to the sensor data [47]. Although AI-assisted triage tools have not yet been translated into clinical use, they may eventually be helpful in streamlining care and more accurately assigning patient disposition during mass casualty events.
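The rule-based logic that such automated systems aim to replicate can be illustrated with a simplified version of the widely taught START mass-casualty triage algorithm. This is a sketch only: the full protocol's airway-repositioning and radial-pulse steps are omitted for brevity.

```python
def start_triage(can_walk, breathing, resp_rate, cap_refill_s, obeys_commands):
    """Simplified START-style triage: assign one of four standard
    categories from a handful of rapidly assessable findings. A
    sensor-driven neural system would infer comparable categories
    directly from wearable data streams."""
    if can_walk:
        return "minor"
    if not breathing:
        return "expectant"
    if resp_rate > 30:
        return "immediate"
    if cap_refill_s > 2:
        return "immediate"
    if not obeys_commands:
        return "immediate"
    return "delayed"
```

The appeal of an ML-based replacement is precisely that these cut-points and checks could be evaluated continuously and automatically, rather than once per manual assessment.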

The TRAuma Care In a Rucksack (TRACIR) program seeks to create a closed-loop, AI-driven trauma care system that can be contained in a rucksack and operated by a field medic [48, 49]. The TRACIR system relies on wearable sensors on the neck and abdomen that feed data to a self-contained AI analyzer [49]. The AI system is intended to identify hemorrhage, pneumothorax, and hepatic/splenic lacerations. The system provides an opportunity for the medic to have access to real-time hemodynamic data that would guide fluid resuscitation, ventilation, the need for needle decompression, and potentially more [50]. If successful, the TRACIR and other similar programs could revolutionize how civilian and battlefield trauma care is delivered in the prehospital or forward surgical team setting.


4. Burn injuries

According to the Centers for Disease Control and Prevention (CDC) reporting, approximately 1.1 million burn injuries requiring medical attention occur annually in the United States alone [51]. These injuries are especially prevalent in the military setting, with as many as 10% of all combat casualties experiencing burns. Additionally, thermal injuries among combat casualties are associated with poorer prognosis when compared with demographically matched civilian counterparts [52].

A traditional challenge of burn care has been the heavy reliance on clinical expertise and the subjective nature of the burn severity/depth/area/extent assessment process [53]. Evaluations and decisions concerning treatment have required continued clinical observation, recognizing that as wounds evolve and their definitive extent is reached, so do key associated considerations [54]. The current process has the potential to lead to either treatment delays (shown to correlate negatively with patient outcome) or overtreatment (shown to be associated with acute kidney injury, volume overload, shock, unnecessary surgery, and compartment syndrome, among many other potential complications) [55].

The use of AI in the burn injury diagnosis/treatment process is being explored in efforts to help standardize associated clinical care, increase the accuracy of bedside assessment, and optimize the appropriateness of treatment [56, 57, 58]. Provider workflows could also be optimized when less time is required for tedious, repeated clinical observations. The current aspects of burn care with ML applications being actively researched include: [A] Estimating percent total body surface area burn (%TBSA burn) and using this number to guide fluid resuscitation; [B] Burn depth classification; [C] Evaluation for surgical candidacy/indications; and [D] Scarring predictions and wound treatment recommendations [55].

An ongoing challenge in estimating the total body surface area (TBSA) has been the use of two-dimensional paper models to represent patients whose body morphology varies greatly [59]. Clinical expertise is crucial to assessment under these conditions, and it is known that less experienced providers may struggle without appropriate guidance/redirection. Various AI-guided tools have been developed to estimate %TBSA based on ML processing of image data, with the added ability to offer treatment recommendations [53, 60, 61, 62]. Several machine learning algorithms have been tested with varying degrees of success. Most suffer from over-prediction, and none have proven superiority over clinician judgment [55]. In addition to the prediction of %TBSA, these tools can be programmed to guide less skilled personnel in managing the often complicated fluid resuscitation required for treatment [63].
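Once an image-derived %TBSA estimate is available, the downstream fluid-resuscitation guidance would typically rest on a standard formula. As a sketch of that step, the widely used Parkland formula computes the 24-hour crystalloid volume from body weight and %TBSA burned.

```python
def parkland_fluids(weight_kg, tbsa_pct):
    """Parkland formula: 4 mL of crystalloid per kg of body weight per
    %TBSA burned over 24 hours, with half given in the first 8 hours
    from the time of injury and the remainder over the next 16 hours."""
    total_ml = 4.0 * weight_kg * tbsa_pct
    return {
        "total_24h_ml": total_ml,
        "first_8h_ml": total_ml / 2,
        "next_16h_ml": total_ml / 2,
    }
```

For a 70 kg patient with a 30% TBSA burn, this yields 8400 mL over 24 hours. The value of an accurate ML %TBSA estimate is evident here: an over-predicted burn area propagates directly into over-resuscitation, with the complications noted above.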

Burn-depth classification is another relatively subjective area of burn care that researchers are attempting to optimize with the assistance of AI technology. Traditionally, burn depth is classified according to the appearance of the wound to the human eye and can be aided by radiologic findings. ML algorithms relying on inputs from standard digital photography, infrared imaging, ultrasound, and optical reflectance have been developed to aid burn depth assessment [64]. The use of inputs beyond the visual light spectrum could help further remove the limitations of the human eye in assessment, hopefully leading to even higher accuracy rates.
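One of the simplest ML schemes that could fuse such multimodal features into a depth label is nearest-centroid classification, sketched below. The feature definitions and centroid values are invented for illustration and are not derived from clinical data.

```python
# Hypothetical per-wound features: (visible redness, surface temperature
# offset vs. healthy skin in deg C, optical reflectance). These stand in
# for the digital-photography, infrared, and reflectance inputs above.
CENTROIDS = {
    "superficial": (0.8, 1.5, 0.7),
    "partial-thickness": (0.6, 0.5, 0.5),
    "full-thickness": (0.2, -1.0, 0.2),
}

def classify_burn_depth(features):
    """Assign the depth label whose (invented) centroid is closest in
    squared Euclidean distance to the wound's feature vector."""
    def sq_dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(CENTROIDS, key=lambda label: sq_dist(CENTROIDS[label]))
```

A production classifier would learn such decision boundaries from labeled wound data rather than fixed centroids, but the fusion of modalities into one feature vector is the common pattern.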

Determining surgical candidacy can be challenging when relying on traditional data, which are at times subject to various biases, and the decision usually requires serial examination and observation of the wound as it matures. To this end, researchers have developed internally validated ML algorithms, using input data similar to the burn-depth algorithms, that can predict the need for surgical intervention at admission [55, 65]. Early intervention may improve patient outcomes. Furthermore, predicting the future need for procedural intervention may prove useful to medical logistics officers as they ensure proper staffing and equipment flow to field hospitals.

Scarring secondary to thermal injury is a particularly distressing consequence for burn victims. In this context, predicting scarring before wound maturation and intervening early remain challenging, and assessment relies on the highly subjective Vancouver Scar Scale (VSS) [55]. Algorithms predicting scarring (including its severity/extent) may have implications for improving treatment and prognostic capabilities [66, 67]. Data inputs for these tools are similar to those for burn-depth classification tools but with the addition of tissue elasticity and tissue biopsy [68]. However, the inclusion of histological features could make this type of tool arguably more cumbersome than traditional scoring systems. Future avenues for research may include the use of bedside radiological/imaging data, as opposed to histological analysis, to streamline predictive outcomes [69, 70, 71].

Recently, the U.S. Army Institute of Surgical Research (USAISR) has developed a practical burn assessment/treatment tool known as the Burnman system (Sage diagram, LLC, Portland, Oregon, USA). The system has multiple variations, but each relies on a laptop with the Burnman ML algorithm-based server. Data are transmitted to the laptop from either a modified iPad Mini (Apple, Inc., Cupertino, California, USA) or an off-the-shelf camera. The algorithm can then assess injury severity and offer clinical decision support on fluid resuscitation and surgical management [52, 72]. The tool has been validated, and the USAISR currently considers the most recent iteration to perform at an intermediate (between low and high) fidelity level [52]. The eventual goals for this technology could be two-fold: [A] Miniaturized version available to field medics to aid in triage, assessment, and fluid resuscitation; and [B] Full-feature version that allows physicians or other hospital staff to more accurately assess the severity of the injury and optimize workflow in a definitive care setting.


5. Traumatic brain injury

Traumatic brain injury (TBI) is highly prevalent and highly morbid among warfighters. Estimates suggest that 15.2–22.8% of U.S. soldiers returning from deployment from conflicts in Iraq and Afghanistan were subject to at least one mild TBI (mTBI) [73]. These injuries often do not have obvious outward signs but are associated with cognitive/behavioral changes [74]. Diagnosis and management of mTBI can be quite challenging, and accurate identification and treatment are both necessary in order to guide activity restrictions to optimize care for civilians and soldiers alike [75, 76]. Additionally, prognosis for patients with more severe TBI is challenging due to the brain’s inherent complexity. The use of AI to process the large amount of raw data gathered when assessing TBI has shown significant development in recent years [77, 78].

Early identification of mTBI via AI-enabled systems falls mainly into two categories: input data from magnetic resonance imaging (MRI) versus electroencephalogram (EEG) [79, 80, 81, 82]. Advantages of the MRI include multiple scan types, consistent data without interference, high spatial resolution, and the ability to map complex relationships between regions in the brain [79, 80]. Functional MRI (fMRI), as well as other scanning modalities, however, may not be readily deployable in field settings. Field hospitals typically lack MRI machines, and transfer to a definitive care center would be required (but may be precluded by various patient-related and logistical factors). EEGs, especially portable/wearable types, may suffer from difficulty sorting through extraneous noise produced by motion or nearby electrical circuits [83]. Additionally, spatial resolution of EEG is typically poor. However, advancements in EEG hardware are being made that show progress toward an eventual wearable EEG monitor that allows for AI classification of mTBI by field medics at lower roles of care, especially when aided by novel approaches [84]. Additionally, solving the problem of noise filtration on EEGs is one proposed AI application that has yet to be sufficiently resolved [85].
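The noise problem noted above is often attacked first with crude preprocessing before any classifier sees the signal. The sketch below drops whole EEG epochs whose peak-to-peak amplitude exceeds a limit, a common first-pass defense against motion and electrical artifacts; the 100 µV limit is an illustrative assumption, not a validated field setting.

```python
def reject_artifact_epochs(epochs_uv, peak_to_peak_limit_uv=100.0):
    """Drop EEG epochs (lists of microvolt samples) whose peak-to-peak
    amplitude exceeds a limit. This crude amplitude screen is only a
    first step; field-grade systems would need adaptive filtering on
    top of it to handle motion and nearby electronics."""
    return [epoch for epoch in epochs_uv
            if max(epoch) - min(epoch) <= peak_to_peak_limit_uv]
```

More sophisticated AI-based denoising would operate on the retained epochs, but a screen like this illustrates why battlefield motion and electrical interference, which can saturate most epochs, remain a hardware-and-preprocessing bottleneck.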

Within the overall context of our current discussion, MRI may eventually prove useful across an increasing number of environments and roles of care when it comes to the diagnosis of mTBI. Currently, AI tools exist for diagnosis of TBI across varied MRI modalities including but not limited to traditional MRI, fMRI, and diffusion tensor imaging (DTI) [86]. Traditional MRI tools have largely fallen out of favor due to poor specificity but may retain usefulness as screening modalities due to their high sensitivity [87]. There may be higher overall accuracy in utilizing fMRI combined with multiple scan parameter adjustments to diagnose mTBI [88]. In addition, ML algorithms are able to process the fMRI data and map complex relationships to detect subtle disturbances. As of yet, the best validation figures for mTBI identification have been associated with the use of DTI. Diffusion tensor imaging is able to detect damage to axonal tracts, which is thought to be the primary mechanism of diffuse axonal injury (DAI). Some groups have reported mTBI identification with overall accuracy over 90% using these methods [89]. Whenever MRI is discussed, the high costs associated with this modality must be considered carefully. Screening every soldier returning from deployment may or may not be optimal in terms of resources, but continuous improvement of scan modalities and ML algorithms may eventually establish more pragmatic and resource-efficient approaches and pathways to mTBI diagnosis.

For the reasons discussed earlier, EEG hardware is more applicable to point-of-care identification of TBI. A recent rodent study demonstrated that EEG data could be processed by AI contained on a small Raspberry Pi (Raspberry Pi Foundation, Cambridge, United Kingdom) computer, making the EEG itself the limiting component of a wearable system [90]. Small studies have shown the potential for >90% accuracy in the diagnosis of concussions in athletes [91]. These studies, however, were conducted in a relatively well-controlled laboratory environment with traditional benchtop machines. Introduction of movement, signal noise, and nearby electronics present on the battlefield could prove difficult to handle without effective noise filtering mechanisms in place [92]. Additionally, relevant algorithms have not been tested on datasets that incorporate blast injuries, the most common cause of TBI on the battlefield [93]. The bulk of currently available published research has been conducted on athletes and motor vehicle crash victims. Progress has been made, however, not only in the differentiation of TBI vs. non-TBI but also in the accurate subcategorization of severity using EEG methods [94]. Currently, EEG plus AI identification of TBI in the field (or other intermediate roles of care) seems limited by its hardware [95, 96]. Eventually, this technology may feasibly be integrated into intermediate roles of care and could help prevent re-injury in a portable, closed-loop system [96, 97].

For more severe TBI requiring hospitalization, AI-aided diagnostic and management approaches have been explored in conjunction with various data inputs to predict the likelihood of survival and long-term functional outcomes. One method being employed is prediction based solely on radiologic findings [98, 99], an approach that stands to benefit tremendously from AI-based optimization. Using cost-effective and readily available computed tomographic (CT) scan imaging, one study achieved 95% sensitivity with a 60% specificity using a machine learning algorithm to predict mortality associated with injury [100]. This and similar tools could be especially useful in mass casualty triage where patients would require screening to identify who may benefit the most from medical/neurosurgical intervention for their injuries [45]. Other, more comprehensive tools have been developed that include a wider range of input variables. The use of electronic medical record data, injury characteristics, demographics, and lab data without radiologic studies has proven to be able to predict TBI-associated mortality with an overall accuracy of over 90% and sensitivity of over 80% when processed via AI/ML algorithms [101]. One active area of research includes which type of ML algorithm is superior in processing these data, with multiple groups making competing claims [102, 103, 104]. For example, a computer application has been developed that could prove particularly useful to field hospital ICUs. By monitoring intracranial pressure, mean arterial pressure, cerebral perfusion pressure, and the Glasgow Coma Scale, an ML algorithm could predict 30-day mortality with 81-84% accuracy [105]. This tool is noteworthy because all required data inputs can be gathered in a relatively austere environment. The information can then be used to guide the allocation of limited resources in the most efficient way possible.
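The four bedside inputs named above can be related through the standard equation CPP = MAP − ICP and combined via a logistic link, as in the sketch below. The weights here are invented for demonstration and do not represent the published algorithm in [105]; a real model would be fit to outcome data.

```python
import math

def cerebral_perfusion_pressure(map_mmhg, icp_mmhg):
    """Standard bedside relationship: CPP = MAP - ICP."""
    return map_mmhg - icp_mmhg

def mortality_risk(icp_mmhg, map_mmhg, gcs, weights=(-0.2, 0.08, -0.05, 3.0)):
    """Illustrative logistic model over austere-environment inputs:
    risk rises with intracranial pressure and falls with cerebral
    perfusion pressure and Glasgow Coma Scale. Weights are invented."""
    w_cpp, w_icp, w_gcs, bias = weights
    x = (w_cpp * cerebral_perfusion_pressure(map_mmhg, icp_mmhg)
         + w_icp * icp_mmhg
         + w_gcs * gcs
         + bias)
    return 1.0 / (1.0 + math.exp(-x))  # probability in (0, 1)
```

The appeal for field hospital ICUs is that every input is obtainable without advanced imaging, so the same score could run continuously at the bedside.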

More traditional neuroimaging approaches in the setting of TBI also deserve a brief mention in the current context. More specifically, computed tomography (CT) and various ultrasound (US) imaging modalities, when combined with AI/ML capabilities, may offer a very potent system for the diagnosis and management of TBI [98, 106, 107]. The most probable evolution of AI/ML-based approaches for diagnosis, treatment, and prediction in the setting of TBI will involve “all available data” aggregation tools that process information in real time and include a degree of redundancy, helping to facilitate competent decision-making across a broad range of “data availability” patterns.


6. Trauma surgery

Various AI-based tools are being developed to aid traumatologists in optimizing the care of their patients. Improving predictive outcomes, analyzing existing practices, and perhaps even physician education/training are all among applications being explored [108, 109].

One major avenue of active research can be described broadly as “predictive outcomes” research. An example especially relevant to military applications is the “mangled extremity” scenario and, more specifically, a decision tool for salvage vs. amputation [110]. Mangled limbs are common in blast injuries as well as in certain battlefield penetrating traumas. The major decision to salvage or amputate is multifactorial, but one of the most important factors is the likelihood of successful revascularization and functional limb restoration. One research group has used ML algorithms with simple data inputs to predict revascularization success more accurately than the Mangled Extremity Severity Score (MESS) [111]. When dealing with multiple simultaneous casualties after, for example, a massive blast, such a predictive framework could streamline care and allow for more optimal utilization of surgical resources. Another tool of interest to surgeons is the use of AI in calculating transfusion demands of trauma patients, which are often complex and involve multi-variable consideration. Such decision tree-based methods for predicting transfusion requirement have shown non-inferiority to traditional methods [112]. While the above are just two examples of AI-driven implementations in traumatology, both illustrate the longer-term goal of achieving the best possible outcomes while optimizing resource utilization, especially in resource-limited environments.
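The kind of simple-input decision logic referenced above for transfusion prediction can be illustrated with the Assessment of Blood Consumption (ABC) score, a validated bedside rule for predicting the need for massive transfusion. This sketch implements the standard four-point score; it is shown as an example of the category, not as the specific algorithm evaluated in [112].

```python
def abc_score(penetrating_mechanism, sbp_mmhg, hr_bpm, fast_positive):
    """Assessment of Blood Consumption (ABC) score: one point each for
    penetrating mechanism, systolic BP <= 90 mmHg, heart rate >= 120
    bpm, and a positive FAST exam."""
    return sum([
        bool(penetrating_mechanism),
        sbp_mmhg <= 90,
        hr_bpm >= 120,
        bool(fast_positive),
    ])

def predict_massive_transfusion(penetrating, sbp, hr, fast):
    """A score of 2 or more is commonly used to trigger massive
    transfusion protocol activation."""
    return abc_score(penetrating, sbp, hr, fast) >= 2
```

ML-based approaches aim to improve on such fixed-point scores by weighting many more variables, but the appeal in austere settings is the same: rapid prediction from data available at first contact.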

Artificial intelligence is also being evaluated in the capacity of surgical guideline design and optimization. In one study of severely injured trauma patients, only 43–55% of official recommendations were properly followed [113]. Nonadherence to recommendations is associated with poorer outcomes, and the leading cause of nonadherence in emergent settings is lack of sufficient time. One direction in which AI tools are being developed to help address this problem is by optimizing official recommendations into smaller “bundles of best practices.” The French Trauma Registry group, for example, designed an ML algorithm based on the least absolute shrinkage and selection operator (LASSO) [114]. The goal was to find an optimal combination of guidelines that would help highlight clinical recommendations with the largest effect while also maximizing physician compliance. The resultant “care bundle” proved effective upon both internal and external validation, showing a potential procedural template for future design and reworking of guidelines.
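The mechanism by which LASSO isolates a compact "bundle" is that its penalty shrinks most coefficients to exactly zero, leaving only the highest-impact predictors. The toy coordinate-descent sketch below shows this sparsity effect on tiny in-memory data; it omits the intercept, standardization, and convergence checks a real implementation would need.

```python
def soft_threshold(z, lam):
    """The LASSO shrinkage operator: pull z toward zero by lam and
    clamp small magnitudes to exactly zero."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def lasso_coordinate_descent(X, y, lam, iters=200):
    """Toy coordinate-descent LASSO. Features that contribute little
    to the fit end up with exactly-zero weights, which is how a
    LASSO-based approach can distill many candidate guidelines into a
    small, high-impact bundle. Illustrative only."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # residual with feature j's current contribution removed
            r = [y[i] - sum(w[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            norm = sum(X[i][j] ** 2 for i in range(n))
            w[j] = soft_threshold(rho, lam) / norm if norm else 0.0
    return w
```

In the guideline setting, each "feature" would encode compliance with one recommendation and the outcome would be a patient endpoint; the recommendations surviving with nonzero weights form the candidate bundle.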

Another ambitious project is currently underway, with the goal of substituting AI-enabled devices for faculty mentorship in the emergency setting [109]. The aim is to develop an “AI Medic,” or, more accurately, an AI trauma surgery instructor, to train surgical providers in environments where either dedicated faculty are unavailable or organized training is not feasible. This tool could naturally find its home in a field hospital, where medical logistics and optimal staffing cannot be guaranteed during wartime. The training paradigm utilizes deep learning models to predict medical instructions based on images of surgical procedures. Thus far, the system has been programmed with information gathered during 290 surgical procedures. It has shown objective merit in BLEU (Bi-Lingual Evaluation Understudy) scoring, and independent experts have determined that the instructions provided by the program were appropriately related to the surgical photo inputs [109]. This technology is still in its infancy, but conceptually, it shows great potential for future growth and development.
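The BLEU scoring mentioned above rates generated text (here, predicted surgical instructions) against reference text via clipped n-gram precision. The unigram case below shows the core idea; full BLEU combines several n-gram orders with a brevity penalty.

```python
from collections import Counter

def clipped_unigram_precision(candidate, reference):
    """Simplest ingredient of BLEU: count candidate words, clip each
    count by its count in the reference (so repeating a word cannot
    inflate the score), and divide by candidate length."""
    cand = Counter(candidate.split())
    ref = Counter(reference.split())
    clipped = sum(min(count, ref[word]) for word, count in cand.items())
    return clipped / max(1, sum(cand.values()))
```

For instance, a candidate instruction "clamp the vessel" against the reference "clamp the bleeding vessel" scores 1.0 because every candidate word is covered, while the clipping prevents degenerate repeated-word outputs from scoring well.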


7. Critical care

Military definitive care hospitals in the U.S. and abroad provide critical care services that are comparable to civilian counterparts. Critical care provided at field hospitals also approximates the highest civilian standards, but wartime limitations (including the environment, logistics, and available resources) may be unpredictable [115]. In a war with a near-peer nation, supply lines could be compromised, prohibiting medical equipment and/or personnel movement. Though it has been historically uncommon for field hospitals to operate at reduced capacity, optimizing existing critical care capabilities and resources would promote readiness and adaptability in case of disaster. To this end, AI-based tools are being developed to improve critical care by automating real-time clinical outcome prediction, estimating patient needs over time, and improving clinical care by identifying key conditions (e.g., hemorrhagic shock, sepsis, etc.) before they clinically manifest in an overt fashion [116].

In the intensive care unit (ICU) environment, real-time outcome/morbidity/mortality prediction would allow for early identification of high-acuity patients and aid physicians in allocating limited resources. Existing tools integrated into the Epic (Epic Systems, Verona, Wisconsin, USA) electronic medical record (EMR), including the Epic Deterioration Index (EDI), use ML to continuously predict clinical events and mortality based on data already present in the EMR. This tool has shown increasing levels of accuracy in trauma patients [117].
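The EDI itself is proprietary, but the general pattern (a logistic model recomputed over routinely charted vital signs to produce a continuous risk trend) can be sketched. The variable names and coefficients below are invented for illustration and carry no clinical validity:

```python
import math

# Hypothetical coefficients chosen for illustration only (the actual
# EDI model, its inputs, and its weights are proprietary to Epic Systems).
WEIGHTS = {"heart_rate": 0.03, "resp_rate": 0.10,
           "sbp": -0.02, "spo2": -0.15, "temp_c": 0.20}
INTERCEPT = 3.0

def deterioration_risk(vitals: dict) -> float:
    """Map routinely charted vitals to a 0-1 risk via logistic
    regression; recomputing at every charting event yields a
    continuously updated trend."""
    z = INTERCEPT + sum(WEIGHTS[k] * vitals[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

stable = {"heart_rate": 70, "resp_rate": 14, "sbp": 120, "spo2": 98, "temp_c": 36.8}
crashing = {"heart_rate": 125, "resp_rate": 28, "sbp": 85, "spo2": 88, "temp_c": 38.5}
```

With these toy weights, the stable vital-sign set maps to a low risk and the deteriorating set to a high one; a production model would be trained on outcome-labeled EMR data rather than hand-set.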

Prediction of patient needs in the ICU is also of great interest to medical logistics specialists, even during relative peacetime, to improve the overall supply chain efficiency and reliability. Other tools have been developed using EMR data to accurately predict which patients will have extended lengths of stay and which will require prolonged mechanical ventilation [118, 119]. By integrating clinical care with medical logistics, patient-centric interventions could be proactively implemented and resources appropriately allocated wherever necessary, based on an improved prediction of an ICU’s needs over time.

In predicting outcomes for the direct medical care of individual ICU patients, several important advancements have been made. As discussed earlier, burns are among the most common battlefield injuries. Burn care is labor intensive from both a surgical and critical care perspective. In addition to high mortality, burns also carry a significant morbidity burden [54]. In this context, ML algorithms have been developed that can predict acute kidney injury (AKI) based largely on EMR data. On internal validation, these algorithms have not yet matched physician judgment in terms of performance, but it is still early in the overall development process [120]. By incorporating identification of high-risk individuals, laboratory parameters, other diagnostics, and physician- and nursing-specific inputs, predictive analytics are likely to improve incrementally over time [121, 122]. Systems for sepsis identification have been well-studied, and warning systems have been developed and integrated into EMRs [123, 124]. These systems also allow for early notification of physicians caring for high-risk patients and facilitate active adjustment of various clinical interventions, especially in environments with limited resources.

8. Transfusion medicine

Transfusion medicine is a potentially attractive area of AI/ML implementation. Here, opportunities exist for enhanced blood transfusion chain monitoring and minimization of human errors, as well as the alleviation of blood bank labor shortages. AI can improve accuracy, traceability, and reliability by decreasing the number of manual stages/touches, each of which is an opportunity for the introduction of human error [125, 126]. This can be achieved via integration of radio-frequency identification devices (RFIDs) and via the internet of things (IoT), that is, objects embedded with sensors, software, and technologies that allow them to connect to and share data [127, 128]. In terms of transfusion logistics, AI can be used to optimize identification and mobilization of donors, management of blood product stock, decisions on when (and when not) to transfuse, and prediction of surgical blood loss, thus decreasing unnecessary type and crossmatch procedures. The overarching goals include better ability to forecast blood demand, reduction in waste, and fewer blood management errors [125, 126].
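As a concrete illustration of the RFID/IoT concept, consider a minimal model of a sensor-tagged blood unit that flags cold-chain excursions. The class, its field names, and the simplified acceptance rule are illustrative assumptions; red cell units are typically refrigerated at roughly 1-6°C, but exact policies vary by blood bank:

```python
from dataclasses import dataclass, field
from datetime import date

# RBC units are typically refrigerated at roughly 1-6 degrees C;
# exact acceptance policies vary by blood bank and are simplified here.
TEMP_MIN_C, TEMP_MAX_C = 1.0, 6.0

@dataclass
class BloodUnit:
    unit_id: str                 # e.g., the RFID tag identifier
    abo_rh: str                  # "O-", "A+", ...
    expiry: date
    temp_log_c: list = field(default_factory=list)  # IoT sensor readings

    def record_temp(self, celsius: float) -> None:
        self.temp_log_c.append(celsius)

    def is_transfusable(self, today: date) -> bool:
        """Flag a unit if it is expired or if any logged reading fell
        outside the storage range (a cold-chain excursion)."""
        in_date = today <= self.expiry
        in_range = all(TEMP_MIN_C <= t <= TEMP_MAX_C for t in self.temp_log_c)
        return in_date and in_range
```

In a real deployment, such per-unit state would be streamed from tag readers into the blood bank information system rather than held in application memory.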

The literature regarding AI/ML in transfusion medicine is growing. Within this growing body of knowledge, about 61% of studies pertain to the surgical setting, and approximately 24% pertain to the trauma setting (both pertinent to military applications) [129]. A significant majority of published studies are from the United States and China. The ability to predict transfusion requirements prior to arrival at a definitive care location can aid in military triage decisions [130]. In addition, ML has been used to predict and detect acute transfusion reactions and adverse transfusion events. Blood group and antibody detection algorithms may speed the time to transfusion and may help with the management of blood pool shortages. Finally, ChatGPT-like capabilities can be used interactively in real-time with clinical and EMR data to further streamline these efforts [129].

Most decisions regarding transfusion of the trauma patient rest on the professional experience of the provider, combined with their clinical acumen. Models utilizing AI and clinical decision support algorithms that incorporate hemoglobin/hematocrit, blood gas parameters, vital signs, coagulation studies, and basic metabolic panel findings, along with trauma type (blunt versus penetrating) and trauma location, may decrease complications caused by delayed, insufficient, or excessive transfusion [112, 131]. These readily obtainable parameters can be quickly collected in the battlefield setting, with AI interfaces then processing the data and recommending “for or against” transfusion of blood products. Trauma location and shock index scores are among the variables most predictive of transfusion requirement, and when appropriately integrated into well-designed models and algorithms, they may be able to outperform trauma surgeon decisions made in isolation. For example, without AI use, 69% of surgeries that clinicians predicted would require blood transfusion did not actually require blood product administration. Consequently, the use of AI algorithms may save the time and resources spent on unnecessary cross-matching and help to manage limited supplies of blood products [112, 131]. Self-learning AI using ML algorithms can theoretically build upon existing models and improve and strengthen predictions and recommendations. While overall promising, AI/ML is unlikely to replace workers in transfusion medicine; however, it will likely support them and improve their efficiency going forward [126].
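The shock index mentioned above is simply heart rate divided by systolic blood pressure. A toy decision-support rule built on it might look like the following; the 0.9 and 0.7 cutoffs and the special handling of penetrating torso injury are illustrative assumptions, not validated thresholds from the cited studies:

```python
def shock_index(heart_rate: float, systolic_bp: float) -> float:
    """Shock index = heart rate / systolic blood pressure; values above
    roughly 0.9-1.0 have been associated with transfusion need in trauma."""
    return heart_rate / systolic_bp

def flag_for_transfusion(heart_rate: float, systolic_bp: float,
                         penetrating_torso: bool, threshold: float = 0.9) -> bool:
    """Toy rule combining shock index with injury pattern; the cutoffs
    and the torso-injury adjustment are illustrative only."""
    si = shock_index(heart_rate, systolic_bp)
    # Lower the trigger for penetrating torso trauma, reflecting the
    # text's point that injury location carries predictive weight
    return si >= threshold or (penetrating_torso and si >= 0.7)
```

A fielded system would replace this hand-set rule with a model trained on transfusion outcomes, but the input parameters would be similarly cheap to obtain at the point of injury.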

Blood transfusions save lives, but their use can also create unintended consequences and potential complications [132, 133]. The use of “big data” through ML, a subcomponent of AI, can improve efficiencies in the use of blood products and considerably enhance the application of transfusion therapies [129]. Over 18 million blood products are transfused in the US annually [134]. As the available body of literature continues to grow, many challenges and limitations persist. Maynard et al. point out that “data quality and access, adherence to (and existence of) appropriate reporting frameworks, and generalizability of findings” continue to be problematic [129].

The relevance of AI will continue to grow in the area of transfusion medicine. A topic of particular concern is red blood cell unit quality control [135]. While advances in transfusion medicine have made the use of blood products relatively safe, the question arises as to the quality of the red blood cells (and other blood products) transfused [135]. Red blood cell storage lesions are well described [136, 137], and the longer a unit is stored, the greater the chance of a complication [138, 139]. Estimation of red blood cell quality is difficult, but ML/AI is capable of determining the quality of red blood cells rapidly and effectively through evaluation of morphology and quality measurements within microcapillaries [135]. Such applications have not yet become commonplace in routine clinical practice but offer great promise to inform and enhance our current practices.

While work is being done on the quality of red blood cells for transfusion, there are also research efforts looking at predicting indications and triggers for transfusion, such as the need for massive transfusions in trauma [140], prediction of the need for transfusion in elderly patients undergoing hip arthroplasty [141] and/or liver transplant surgery [142], as well as the general prediction of preoperative red blood cell demand [131].

The reliability of “big data” in the context of machine learning for blood and blood product transfusion is extremely important. Wider introduction and application of stochastic dynamic programming and AI will depend on the creation of reliable, high-quality data resources (e.g., “big data”). In turn, the use of these data through a network of algorithms, deep learning, and ML processes, which require continued research, upgrading, and application, will benefit not only high-income countries but all participating stakeholders globally [143, 144].

From the military perspective, it would be of great utility to create a comprehensive “big data” system incorporating clinical information not only from active duty personnel but also from available retirees, family members, and veterans (all, of course, within an appropriate informed consent framework). Such large datasets could be coupled with an AI-based clinical quality control and guideline system able to respond quickly and efficiently, helping to determine blood product burden and the optimal allocation and use of available blood products. This would be especially valuable if such a system could be deployed into forward surgical settings.

9. Applications in education and simulation

As AI continues to develop and its uses expand in the clinical setting, it is also increasingly utilized in surgical education and simulation. Surgical training has transitioned from a traditional, time-based apprenticeship model to a competency-based model [145]. With this shift, obtaining the necessary surgical skills and competencies has at times become challenging. Many believe AI-based education and simulation may help address these challenges, inclusive of critical skill training in military settings [146].

One of the reasons AI-based platforms are becoming increasingly popular within the surgical community is their effectiveness in providing automated, personalized feedback and assessment of hands-on skills [145]. AI feedback allows trainees the opportunity to practice surgical skills without the physical presence of an attending surgeon to provide real-time feedback. In addition, AI is being used to analyze various important performance metrics, including hand motion, eye tracking, and time/errors, and it provides rating scales to users on skills performed [145].
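Hand-motion analysis of this kind often reduces tracked instrument positions to simple kinematic features before an ML model produces a rating. A minimal sketch follows; the two features chosen here (path length and economy of motion) are common examples in the skill-assessment literature rather than the metrics of any specific simulator:

```python
import math

def path_length(positions: list) -> float:
    """Total distance travelled by a tracked instrument tip, given a
    list of (x, y, z) samples; shorter, smoother paths generally
    correlate with higher skill ratings."""
    return sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))

def economy_of_motion(positions: list, duration_s: float) -> float:
    """Path length per second, one of several simple kinematic features
    a simulator could feed into an ML skill-rating model."""
    return path_length(positions) / duration_s if duration_s > 0 else 0.0
```

In practice, such features would be computed per task attempt and combined with error counts and eye-tracking data to drive the automated rating scales described above.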

Though AI appears promising in addressing gaps across important surgical skill sets, its use has identifiable barriers. Within the literature, there is a lack of validated assessment regarding the performance of AI-based surgical simulators, which may affect the ability to directly translate simulated training into actual operating room experience [147]. Another barrier is the vast amount of surgical data required to create effective ML algorithms. This, combined with the high cost and limited availability of surgical simulators, further affects implementation and adoption. Future efforts to incorporate AI into surgical simulation include conducting dedicated trials to validate surgical simulation assessment; determining the efficacy of AI algorithm training; automating surgical simulation while providing accurate and meaningful feedback; ensuring appropriate cost-containment so that the technology remains cost-effective; and hardwiring programmatic adherence to local and national guidelines [147]. All of the above items present formidable regulatory and standardization challenges surrounding the routine use of AI-based surgical simulators in both civilian and military settings [147].

Medical simulation in the military is extremely important. The U.S. military continues to operate in austere conditions, where evaluation and treatment times can be prolonged, and the next level of care is often remote. Warfighter survival during recent conflicts in the Middle East was notably improved because the injury-to-treatment time could be reduced to less than 60 minutes [148, 149, 150]. However, model-based predictions for future conflicts suggest that the airspace in combat zones will be heavily contested and that providers in the field may be on their own for anywhere from days to weeks at a time [151]. Those providing the bulk of healthcare to injured front-line troops will be army medics and navy corpsmen. The combination of AI, augmented reality training, and integration of telemedicine (with a capacity for audiovisual assistance) has the potential to further decrease field morbidity and mortality. In this context, the military has created an offline system entitled Trauma TeleHelper for Operational Medical Procedure Support and Offline Network (Trauma THOMPSON) to assist medics and corpsmen in the field [152]. Additional AI-based medical simulation programs of value for individual providers include simulation of tourniquet skills [153], examination and aspiration of joints [154], and the mathematical modeling of human cardiovascular and respiratory responses in the battle space [155]. Such AI-based simulations for the provision of limb- and life-saving care during military action have also been provided to military medical students and military nurses [156, 157].

The modern military is characterized by high mobility and presence across versatile combat theaters. Consequently, infrastructure must be created that reflects this contemporary reality and is “AI-ready” to address both local/peacetime and deployment-time needs. AI and its subset of ML could provide a much-needed closure of gaps in our collective knowledge and experience when it comes to trauma [158]. However, there are dissenting opinions regarding the value of virtual (VR), augmented (AR), and mixed reality (MR) for trauma surgery training. A systematic literature review conducted by a multi-institutional consortium concluded that, at this time, there was inadequate evidence that VR, MR, AR, or haptic interfaces can facilitate effective training for trauma surgery or replace cadavers. Because of limited model testing in real-time situations/settings and potentially deficient study and technology design, combined with the risk of reporting bias, no current well-designed studies of computer-supported technologies have shown benefit for trauma and emergency surgery, inclusive of any appreciable effect on patient outcomes. Larger, more rigorously designed studies and empiric testing/evaluations by experienced surgeons are required for AI/ML-based training models to become more effective and clinically impactful [159]. There is little doubt that AI-based approaches in education and simulation are applicable to general medical training, procedural training, and general situation-based clinical scenarios. That said, the future of AI/ML-based clinical systems in the military and civilian trauma settings will, in the near term, continue to rely on hybrid models of training.

10. Important limitations of artificial intelligence and machine learning

There are several important limitations to AI/ML-based technologies. The first major limitation is the lack of clear ethical principles associated with deploying these very powerful and all-encompassing tools. In fact, ethical considerations pervade all of AI’s limitations [160, 161]. For potential military applications, important ethical tenets and guidelines have been proposed for the use of AI in future battle spaces [162]. Of concern, lethal AI-based military systems are likely to become commonplace on the battlefield [163]. The five primary principles for AI encompass the need for it to be responsible, equitable, traceable, reliable, and governable [164]. While AI can have a substantial impact on the battlefield, it can also have a great impact on healthcare. In the latter context, AI is bound to affect patient records and provide assistance to physicians and other caregivers and, in its more extreme manifestations, may even eliminate entire clinical departments [165].

Generative AI, which refers to the creation of new content such as videos, audio, images, and texts, is expanding at a staggering pace. Oniani et al. have proposed the acronym “GREAT PLEA” for ethical AI principles in healthcare based on the military model: governability, reliability, equity, accountability, traceability, lawfulness, empathy, and autonomy. This is crucially important because generative AI requires guidelines that address the risk of misinformation, the ramifications of bias, and the difficulty of using general evaluation metrics [164].

In the classical clinical scientific method, a dataset is studied in order to elucidate a trend, a previously unappreciated effect, or other unexpected findings, thereby leading to a hypothesis. This hypothesis is then tested against another dataset to see if it holds true. However, and of great importance, in AI-based paradigms, hypotheses can be developed and changed with each new datum [166]. This is fundamentally different from the traditional scientific method. The root of this dilemma rests in AI having a gigantic “black box” that does not allow us to peer into how the hypothesis being presented to us was actually formulated by the AI-based system. In effect, we do have an answer, but this answer is based on ML, making independent verification difficult. Systems operating on AI/ML are based on so-called “training sets,” and there are real limitations to such “training sets.” Lambert et al. used the example of trying to get to the moon on the assumption that the earth is flat [166]. The ethical quandary in the use of ML is that we must “trust,” but we ultimately do not (or cannot) verify.

It is obvious that AI and humans are not on equal footing, because humans are required to place their trust in the technology [167]. Current deep learning technologies lack explanation and transparency [168], so trust is very important. For the military, this trust is a necessity and a foregone conclusion. In healthcare, there must be even more built-in caution. As mentioned above, the “GREAT PLEA” principles play an important role in mitigating the unintended effects of AI on vulnerable populations [169]. Therefore, AI is more than just a technology: it relies on trust and needs to inherently comply with human values [167]. AI systems must be able to eliminate biases. It is of interest that consumers, especially those who are young, educated, and familiar with new technologies, trust AI more than they trust governments or large corporations [167]. This may also serve as a multi-layered, complex cautionary tale, in which governments and large corporations may turn to AI to help build trust, yet any unethical use of such an AI-based approach may lead to ever-greater deception and abuse of public trust.

On the other hand, there are those who believe “AI ethics” as a concept is of little to no use [170]. Munn claims that as AI technology develops and grows, it will occupy a tremendous space, potentially beyond human control, within which to damage people’s lives. Over 50 ethical frameworks have been issued by governments and are now being used in over 80 countries [171]. Munn argues that AI principles are toothless and are created so that governments and corporations can go on with “business as usual,” as evidenced by the resistance to updating legislation and regulations [170]. As far as “high tech” goes, unethical AI is a byproduct of an unethical industry [172]. Munn continues: “AI ethics may not be able to mitigate the racial, social, and environmental damages of AI technology in any meaningful sense” [170]. Regardless of the argued shortcomings of AI ethics, AI and ML are here to stay. The military will require these tools to attain positive outcomes in the battle space. It will also undoubtedly deploy AI/ML within its medical systems, both at forward positions and in the rear of conflict zones. And in healthcare generally, AI/ML will provide a leap forward, one that will require caution. For most stakeholders, this leap forward may prove to be simply irresistible [161].

The heterogeneity of methods and lack of reporting frameworks in many AI/ML models make validation, synthesis, and generalization of their results difficult [129]. Corresponding limitations of AI/ML algorithms may stem from the quality of the data used in their construction. As these models are “trained” on established databases and the body of medical and military knowledge, they may come to reinforce and propagate biases present in the source data used in their generation. Varied data sources from representative populations may serve to lessen these limitations. Regarding AI use in military operations, inclusion of data from different battlefield environs and varied mechanisms of trauma and injury may also serve to decrease limitations and biases [112].

11. Synthesis and conclusion

It is important to recognize that military medicine and surgery encompasses all facets of patient care. However, field medical care is an integral part of what makes military medicine and surgery unique and must not be overlooked in the context of medical practice and discovery alike. Military medicine and surgery in areas of conflict differs from civilian medicine and surgery in that there is added emphasis on pre-hospital care, higher trauma volumes with different mechanisms of injury, and the potential to operate in austere conditions without necessary staff/supplies. One avenue to address these unique problems is using AI-driven tools and technologies to aid providers with limited time, resources, and, in some instances, expertise.

AI-driven advances discussed in this chapter include innovations in prehospital care, various assessment and treatment tools for common battlefield injuries, and clinical tools developed for use by physicians to optimize medical, surgical, and intensive care. These tools, like the majority of AI-driven medical systems, are currently in various stages of development and in-silico verification [5]. Prospective testing is the next step for these tools to reach clinical or field practice. As always, ethical implementation of new technological advances will be key to the beneficial and sustainable and, hopefully, symbiotic growth of AI-enabled, human-directed health systems.

The authors would like to emphasize that this manuscript does not constitute a comprehensive list of all potential military medicine and surgery-related applications amenable to AI-based enhancements, but it should serve as a summary of key developments pertaining to AI in combat/combat-adjacent medicine. The vastness of the overall field of AI/ML applications in healthcare may be intimidating, but we must, at the societal level, address these important issues before it is done “for us” or “on our behalf” by factors we do not control or influence.

References

  1. 1. Makridakis S. The forthcoming artificial intelligence (AI) revolution: Its impact on society and firms. Futures. 2017;90:46-60
  2. 2. Elliott A. The Culture of AI: Everyday Life and the Digital Revolution. New York, NY, USA: Routledge; 2019
  3. 3. Dwivedi YK et al. Artificial intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management. 2021;57:101994
  4. 4. Xing L, Giger ML, Min JK. Artificial Intelligence in Medicine: Technical Basis and Clinical Applications. Cambridge, Massachusetts: Academic Press; 2020
  5. 5. Rajpurkar P et al. AI in health and medicine. Nature Medicine. 2022;28(1):31-38
  6. 6. Hasanzad M et al. Artificial intelligence perspective in the future of endocrine diseases. Journal of Diabetes & Metabolic Disorders. 2022;21(1):971-978
  7. 7. Ting DSW et al. Artificial intelligence and deep learning in ophthalmology. British Journal of Ophthalmology. 2019;103(2):167-175
  8. 8. Gampala S, Vankeshwaram V, Gadula SSP. Is artificial intelligence the new friend for radiologists? A review article. Cureus. 2020;12(10):e11137
  9. 9. Le Berre C et al. Application of artificial intelligence to gastroenterology and hepatology. Gastroenterology. 2020;158(1):76-94 e2
  10. 10. Redmond S et al. A brief introduction to the military workplace culture. Work. 2015;50(1):9-20
  11. 11. Zogas A. US Military Veterans’ Difficult Transitions Back to Civilian Life and the VA’s Response. Providence, RI: Brown University; 2017
  12. 12. Jackson JL et al. Patients, diagnoses, and procedures in a military internal medicine clinic: Comparison with civilian practices. Military Medicine. 1999;164(3):194-197
  13. 13. Pierre-Louis BJ, Moore AD, Hamilton JB. The military health care system may have the potential to prevent health care disparities. Journal of Racial and Ethnic Health Disparities. 2015;2:280-289
  14. 14. Bonica MJ, Bewley LW. A comparison of mentorship attitudes and attributes between civilian and army healthcare leaders. Military Medicine. 2019;184(5-6):e255-e262
  15. 15. Knight RM, Moore CH, Silverman MB. Time to update army medical doctrine. Military Medicine. 2020;185(9-10):e1343-e1346
  16. 16. Hoyt T, Hein C. Combat and operational stress control in the prolonged field care environment. Military Review. 2021;101(5):54-64
  17. 17. Clarke JE, Davis PR. Medical evacuation and triage of combat casualties in Helmand Province, Afghanistan: October 2010–April 2011. Military Medicine. 2012;177(11):1261-1266
  18. 18. Gerhardt RT et al. Fundamentals of Combat Casualty Care. Combat Casualty Care: Lessons Learned from OEF and OIF. Vol. 85. Washington, DC: United States Department of Defense; 2012. p. 120
  19. 19. Haut ER, Mann NC, Kotwal RS. Military trauma care’s learning health system: The importance of data driven decision making. In: Comm mil Trauma Care’s Learn Heal Syst its Transl to Civ Sect. Washington D.C., US: National Academies of Sciences, Engineering, and Medicine; 2016
  20. 20. Ramos G, Schneller ES. Smoothing it out: Military health care supply chain in transition. Hospital Topics. 2022;100(3):132-139
  21. 21. Tirkolaee EB et al. Application of machine learning in supply chain management: A comprehensive overview of the main areas. Mathematical Problems in Engineering. 2021;2021:1-14
  22. 22. Di Carlo S et al. Prehospital hemorrhage assessment criteria: A concise review. Journal of Trauma Nursing. 2021;28(5):332-338
  23. 23. Chen L et al. Diagnosis of hemorrhage in a prehospital trauma population using linear and nonlinear multiparameter analysis of vital signs. In: 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. Lyon, France: IEEE; 2007
  24. 24. Jalo H et al. Early identification and characterisation of stroke to support prehospital decision-making using artificial intelligence: A scoping review protocol. BMJ Open. 2023;13(5):e069660
  25. 25. Galvagno SM Jr et al. Prehospital point of care testing for the early detection of shock and prediction of lifesaving interventions. Shock. 2020;54(6):710-716
  26. 26. Convertino VA et al. Use of advanced machine-learning techniques for noninvasive monitoring of hemorrhage. Journal of Trauma and Acute Care Surgery. 2011;71(1):S25-S32
  27. 27. Nederpelt CJ et al. Development of a field artificial intelligence triage tool: Confidence in the prediction of shock, transfusion, and definitive surgical therapy in patients with truncal gunshot wounds. Journal of Trauma and Acute Care Surgery. 2021;90(6):1054-1060
  28. 28. Gupta JF, Telfer BA, Convertino VA. Feature importance analysis for compensatory reserve to predict Hemorrhagic shock. In: 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). Glasgow, Scotland: IEEE; 2022
  29. 29. Stawicki SP et al. Introducing the glucogram–description of a novel technique to quantify clinical significance of acute hyperglycemic events. International Journal of Academic Medicine. 2017;3(3):142
  30. 30. Pappada SM et al. Evaluation of a model for glycemic prediction in critically ill surgical patients. PLoS One. 2013;8(7):e69475
  31. 31. Stawicki SP. Application of financial analysis techniques to vital sign data: A novel method of trend interpretation in the intensive care unit. OPUS. 2007;12:14-16
  32. 32. Stawicki SP. Application of financial analysis techniques to clinical laboratory data: A novel method of trend interpretation in the intensive care unit. OPUS. 2007;12:1-4
  33. 33. Liu NT, Salinas J. Machine learning and new vital signs monitoring in civilian en route care: A systematic review of the literature and future implications for the military. Journal of Trauma and Acute Care Surgery. 2016;81(5):S111-S115
  34. 34. Tran A et al. Early identification of patients requiring massive transfusion, embolization or hemostatic surgery for traumatic hemorrhage: A systematic review and meta-analysis. Journal of Trauma and Acute Care Surgery. 2018;84(3):505-516
  35. 35. Jin X et al. AI algorithm for personalized resource allocation and treatment of hemorrhage casualties. Frontiers in Physiology. 2024;15:1327948
  36. 36. Eastridge BJ et al. Death on the battlefield (2001-2011): Implications for the future of combat casualty care. Journal of Trauma and Acute Care Surgery. 2012;73(6):S431-S437
  37. 37. Kisat M et al. Epidemiology and outcomes of non-compressible torso hemorrhage. Journal of Surgical Research. 2013;184(1):414-421
  38. 38. Davis JS et al. An analysis of prehospital deaths: Who can we save? Journal of Trauma and Acute Care Surgery. 2014;77(2):213-218
  39. 39. Chang R et al. Advances in the understanding of trauma-induced coagulopathy. Blood, The Journal of the American Society of Hematology. 2016;128(8):1043-1049
  40. 40. Gurney JM, Spinella PC. Blood transfusion management in the severely bleeding military patient. Current Opinion in Anesthesiology. 2018;31(2):207-214
  41. 41. Nemeth C et al. Decision support for tactical combat casualty care using machine learning to detect shock. Military Medicine. 2021;186(Suppl. 1):273-280
  42. 42. Ghazali SA et al. The impact of adult trauma triage training on decision-making skills and accuracy of triage decision at emergency departments in Malaysia: A randomized control trial. International Emergency Nursing. 2020;51:100889
  43. 43. Sommer A et al. Hemopneumothorax detection through the process of artificial evolution-a feasibility study. Military Medical Research. 2021;8(1):1-9
  44. 44. Wydo S et al. Portable ultrasound in disaster triage: A focused review. European Journal of Trauma and Emergency Surgery. 2016;42:151-159
  45. 45. Stawicki SP et al. Portable ultrasonography in mass casualty incidents: The CAVEAT examination. World Journal of Orthopedics. 2010;1(1):10
  46. 46. Kim D et al. Automated remote decision-making algorithm as a primary triage system using machine learning techniques. Physiological Measurement. 2021;42(2):025006
  47. 47. Zhang Z et al. Clinical application of artificial intelligence in emergency and critical care medicine. Frontiers in Medicine. 2022;9:1-2
  48. 48. Berkow J, Poropatich R. TRA care in a rucksack (TRACIR), a disruptive technology concept. Small Wars Journal. 2016;19(1):11
  49. Rausch M et al. Biosensors supporting healthcare in missions—Expert consensus on the status of implementation in the military and future tasks. Health Promotion & Physical Activity. 2022;20(3):29-35
  50. Poropatich RK, Pinsky MR. Robotics Enabled Autonomous and Closed Loop Trauma Care in a Rucksack. Healthcare Transformation; Feb 2020. DOI: 10.1089/heat.2019.0007
  51. CDC. Burns. Washington, DC: CDC Injury Prevention; 2002 [Accessed: January 19, 2023]
  52. Serio-Melvin ML. The Burn Medical Assistant: Developing Machine Learning Algorithms to Aid in the Estimation of Burn Wound Size (BURNMAN). San Antonio, TX, USA: US Army Institute of Surgical Research (USAISR); 2020
  53. Liu NT, Salinas J. Machine learning in burn care and research: A systematic review of the literature. Burns. 2015;41(8):1636-1641
  54. Sojka J, Krakowski AC, Stawicki SP. Burn shock and resuscitation: Many priorities, one goal. In: Clinical Management of Shock - The Science and Art of Physiological Restoration. London, UK: IntechOpen; 2019. pp. 1-33
  55. Huang S et al. A systematic review of machine learning and automation in burn wound evaluation: A promising but developing frontier. Burns. 2021;47(8):1691-1704
  56. Zhang R et al. A survey of wound image analysis using deep learning: Classification, detection, and segmentation. IEEE Access. 2022;10:79502-79515
  57. Thatcher JE et al. Clinical investigation of a rapid non-invasive multispectral imaging device utilizing an artificial intelligence algorithm for improved burn assessment. Journal of Burn Care & Research. 1 Jul 2023;44(4):969-981
  58. Wang Y et al. Real-time burn depth assessment using artificial networks: A large-scale, multicentre study. Burns. 2020;46(8):1829-1838
  59. Giretzlehner M et al. The determination of total burn surface area: How much difference? Burns. 2013;39(6):1107-1113
  60. Chang CW et al. Application of multiple deep learning models for automatic burn wound assessment. Burns. 2023;49(5):1039-1051
  61. Colson CD et al. 56 Evaluation of a smartphone application as a method for calculating total body surface area burned. Journal of Burn Care & Research. 2022;43(Suppl. 1):S38-S39
  62. Huang S et al. Machine learning and automation in burn care: A systematic review. Journal of Burn Care & Research. 2021;42(Suppl. 1):S193-S193
  63. Barnes J et al. The Mersey burns app: Evolving a model of validation. Emergency Medicine Journal. 2015;32(8):637-641
  64. Serrano C et al. Features identification for automatic burn classification. Burns. 2015;41(8):1883-1890
  65. Taib BG et al. Artificial intelligence in the management and treatment of burns: A systematic review and meta-analyses. Journal of Plastic, Reconstructive & Aesthetic Surgery. 2023;77:133-161
  66. Berchialla P et al. Predicting severity of pathological scarring due to burn injuries: A clinical decision making tool using Bayesian networks. International Wound Journal. 2014;11(3):246-252
  67. Kim J et al. Predicting the severity of postoperative scars using artificial intelligence based on images and clinical data. Scientific Reports. 2023;13(1):13448
  68. Kelf TA et al. Scar tissue classification using nonlinear optical microscopy and discriminant analysis. Journal of Biophotonics. 2012;5(2):159-167
  69. Lee S et al. Real-time burn classification using ultrasound imaging. Scientific Reports. 2020;10(1):5829
  70. Lee S et al. A deep learning model for burn depth classification using ultrasound imaging. Journal of the Mechanical Behavior of Biomedical Materials. 2022;125:104930
  71. Li H et al. Non-invasive medical imaging technology for the diagnosis of burn depth. International Wound Journal. 2024;21(1):e14681
  72. van Langeveld I et al. Multiple-drug resistance in burn patients: A retrospective study on the impact of antibiotic resistance on survival and length of stay. Journal of Burn Care & Research. 2017;38(2):99-105
  73. McKee AC, Robinson ME. Military-related traumatic brain injury and neurodegeneration. Alzheimer's & Dementia. 2014;10:S242-S253
  74. Howlett JR, Nelson LD, Stein MB. Mental health consequences of traumatic brain injury. Biological Psychiatry. 2022;91(5):413-420
  75. Wisler JR et al. Competing priorities in the brain injured patient: Dealing with the unexpected. In: Brain Injury: Pathogenesis, Monitoring, Recovery and Management. Rijeka, Croatia: IntechOpen; 2012. pp. 341-354
  76. Stawicki SP, Lindsey DE. Missed traumatic injuries: A synopsis. International Journal of Academic Medicine. 2017;3(Suppl. 1):S13-S23
  77. Hale AT et al. Using an artificial neural network to predict traumatic brain injury. Journal of Neurosurgery: Pediatrics. 2018;23(2):219-226
  78. Alouani AT, Elfouly T. Traumatic brain injury (TBI) detection: Past, present, and future. Biomedicine. 2022;10(10):2472
  79. Vedaei F et al. Identification of chronic mild traumatic brain injury using resting state functional MRI and machine learning techniques. Frontiers in Neuroscience. 2023;16:1099560
  80. Mohamed M et al. Prognosticating outcome using magnetic resonance imaging in patients with moderate to severe traumatic brain injury: A machine learning approach. Brain Injury. 2022;36(3):353-358
  81. Vivaldi N et al. Evaluating performance of EEG data-driven machine learning for traumatic brain injury classification. IEEE Transactions on Biomedical Engineering. 2021;68(11):3205-3216
  82. Albert B et al. Automatic EEG processing for the early diagnosis of traumatic brain injury. Procedia Computer Science. 2016;96:703-712
  83. Niso G et al. Wireless EEG: A survey of systems and studies. NeuroImage. 2023;269:119774
  84. Lai CQ et al. Artifacts and noise removal for electroencephalogram (EEG): A literature review. In: 2018 IEEE Symposium on Computer Applications & Industrial Electronics (ISCAIE). Penang, Malaysia: IEEE; 2018
  85. Xie Y, Oniga S. A review of processing methods and classification algorithm for EEG signal. Carpathian Journal of Electronic & Computer Engineering. 1 Sep 2020;13(1):23-29
  86. Shalaby A et al. Artificial intelligence based computer-aided diagnosis applications for brain disorders from medical imaging data. Frontiers in Neuroscience. 2023;17:998818
  87. Mitra J et al. Statistical machine learning to identify traumatic brain injury (TBI) from structural disconnections of white matter networks. NeuroImage. 2016;129:247-259
  88. Luo X et al. Machine learning classification of mild traumatic brain injury using whole-brain functional activity: A radiomics analysis. Disease Markers. 2021;2021(1):3015238
  89. Abdelrahman HAF et al. Combining multiple indices of diffusion tensor imaging can better differentiate patients with traumatic brain injury from healthy subjects. Neuropsychiatric Disease and Treatment. Dec 2022;31(12):1801-1814
  90. Dhillon NS et al. A raspberry pi-based traumatic brain injury detection system for single-channel electroencephalogram. Sensors. 2021;21(8):2779
  91. Thanjavur K et al. Recurrent neural network-based acute concussion classifier using raw resting state EEG data. Scientific Reports. 2021;11(1):1-19
  92. Czech A. Brain-computer interface use to control military weapons and tools. In: Control, Computer Engineering and Neuroscience: Proceedings of IC Brain Computer Interface 2021. Cham, Switzerland: Springer; 2021
  93. Courtney A, Courtney M. The complexity of biomechanics causing primary blast-induced traumatic brain injury: A review of potential mechanisms. Frontiers in Neurology. 2015;6:221
  94. Lai CQ et al. Convolutional neural network utilizing error-correcting output codes support vector machine for classification of non-severe traumatic brain injury from electroencephalogram signal. IEEE Access. 2021;9:24946-24964
  95. Schmid W et al. Review of wearable technologies and machine learning methodologies for systematic detection of mild traumatic brain injuries. Journal of Neural Engineering. 2021;18(4):041006
  96. Eshel I, Marion DW. Traumatic brain injury (TBI): Current diagnostic and therapeutic challenges. In: Traumatic Brain Injury: A Clinician’s Guide to Diagnosis, Management, and Rehabilitation. Cham, Switzerland: Springer; 2019. pp. 421-437
  97. Cummings D. Multimodal Interaction for Enhancing Team Coordination on the Battlefield. College Station, TX, USA: Texas A&M University; 2013. Available from: https://oaktrust.library.tamu.edu/bitstream/handle/1969.1/151044/CUMMINGS-DISSERTATION-2013.pdf?isAllowed=y&sequence=1
  98. Stawicki SP et al. Prognostication of traumatic brain injury outcomes in older trauma patients: A novel risk assessment tool based on initial cranial CT findings. International Journal of Critical Illness and Injury Science. 2017;7(1):23-31
  99. Yao S et al. Helsinki computed tomography scoring system can independently predict long-term outcome in traumatic brain injury. World Neurosurgery. 2017;101:528-533
  100. Thara T, Thakul O. Application of machine learning to predict the outcome of pediatric traumatic brain injury. Chinese Journal of Traumatology. 2021;24(06):350-355
  101. Rau C-S et al. Mortality prediction in patients with isolated moderate and severe traumatic brain injury using machine learning models. PLoS One. 2018;13(11):e0207192
  102. Feng J-Z et al. Comparison between logistic regression and machine learning algorithms on survival prediction of traumatic brain injuries. Journal of Critical Care. 2019;54:110-116
  103. Chong S-L et al. Predictive modeling in pediatric traumatic brain injury using machine learning. BMC Medical Research Methodology. 2015;15(1):1-9
  104. Gravesteijn BY et al. Machine learning algorithms performed no better than regression models for prognostication in traumatic brain injury. Journal of Clinical Epidemiology. 2020;122:95-107
  105. Raj R et al. Machine learning-based dynamic mortality prediction after traumatic brain injury. Scientific Reports. 2019;9(1):1-13
  106. Rajaei F et al. AI-based decision support system for traumatic brain injury: A survey. Diagnostics. 2023;13(9):1640
  107. Kaźmierski R. Brain injury mobile diagnostic system: Applications in civilian medical service and on the battlefield—General concept and medical aspects. Journal of Clinical Ultrasound. 2023;51(9):1598-1606
  108. Stonko DP et al. Artificial intelligence in trauma systems. Surgery. 2021;169(6):1295-1299
  109. Rojas-Muñoz E, Couperus K, Wachs JP. The AI-medic: An artificial intelligent mentor for trauma surgery. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization. 2021;9(3):313-321
  110. Bolourani S et al. Cleaning up the MESS: Can machine learning be used to predict lower extremity amputation after trauma-associated arterial injury? Journal of the American College of Surgeons. 2021;232(1):102-113.e4
  111. Perkins ZB et al. Predicting the outcome of limb revascularization in patients with lower-extremity arterial trauma: Development and external validation of a supervised machine-learning algorithm to support surgical decisions. Annals of Surgery. 2020;272(4):564-572
  112. Feng Y-N et al. Intelligent prediction of RBC demand in trauma patients using decision tree methods. Military Medical Research. 2021;8(1):1-12
  113. Hamada SR et al. European trauma guideline compliance assessment: The ETRAUSS study. Critical Care. 2015;19(1):1-8
  114. Lang E et al. Clinical decision support for severe trauma patients: Machine learning based definition of a bundle of care for hemorrhagic shock and traumatic brain injury. Journal of Trauma and Acute Care Surgery. 2022;92(1):135-143
  115. Beekley AC, Bohman H, Schindler D. Modern warfare. In: Combat Casualty Care: Lessons Learned from OEF and OIF. Fort Detrick, MD, USA: U.S. Surgeon General, Borden Institute; 2012. pp. 1-37
  116. Syed M et al. Application of machine learning in intensive care unit (ICU) settings using MIMIC dataset: Systematic review. Informatics. 2021;8(1):16
  117. Mou Z et al. Electronic health record machine learning model predicts trauma inpatient mortality in real time: A validation study. Journal of Trauma and Acute Care Surgery. 2022;92(1):74-80
  118. Staziaki PV et al. Machine learning combining CT findings and clinical parameters improves prediction of length of stay and ICU admission in torso trauma. European Radiology. 2021;31(7):5434-5441
  119. Abujaber A et al. Using trauma registry data to predict prolonged mechanical ventilation in patients with traumatic brain injury: Machine learning approach. PLoS One. 2020;15(7):e0235231
  120. Rashidi HH et al. Early recognition of burn-and trauma-related acute kidney injury: A pilot comparison of machine learning techniques. Scientific Reports. 2020;10(1):1-9
  121. Seibert K et al. A rapid review on application scenarios for artificial intelligence in nursing care. JMIR Preprints. 2020;16(12):2020
  122. Ahmed Z et al. Artificial intelligence with multi-functional machine learning platform development for better healthcare and precision medicine. Database. 2020;2020:baaa010
  123. Horng S et al. Creating an automated trigger for sepsis clinical decision support at emergency department triage using machine learning. PLoS One. 2017;12(4):e0174708
  124. Huang Q et al. A new warning scoring system establishment for prediction of sepsis in patients with trauma in intensive care unit. Zhonghua Wei Zhong Bing Ji Jiu Yi Xue. 2019;31(4):422-427
  125. Sibinga CTS. Transfusion medicine: From AB0 to AI (artificial intelligence). Exon Publications. Apr 2022;29(4):107-119
  126. Varghese A, Thilak K, Thomas SM. Technological advancements, digital transformation, and future trends in blood transfusion services. International Journal of Advances in Medicine. 2024;11(2):147
  127. Stawicki SP et al. Roadmap for the development of academic and medical applications of blockchain technology: Joint statement from OPUS 12 global and litecoin cash foundation. Journal of Emergencies, Trauma, and Shock. 2019;12(1):64-67
  128. Stawicki SP, Firstenberg MS, Papadimos TJ. What's new in academic medicine? Blockchain technology in health-care: Bigger, better, fairer, faster, and leaner. International Journal of Academic Medicine. 2018;4:1-11
  129. Maynard S et al. Machine learning in transfusion medicine: A scoping review. Transfusion. 2023;64:162-184
  130. Falzone E et al. Triage in military settings. Anaesthesia Critical Care & Pain Medicine. 2017;36(1):43-51
  131. Feng Y et al. Machine learning for predicting preoperative red blood cell demand. Transfusion Medicine. 2021;31(4):262-270
  132. Delaney M et al. Transfusion reactions: Prevention, diagnosis, and treatment. The Lancet. 2016;388(10061):2825-2836
  133. Barry N et al. An exploratory, hypothesis-generating, meta-analytic study of damage control resuscitation in acute hemorrhagic shock: Examining the behavior of patient morbidity and mortality in the context of plasma-to-packed red blood cell ratios. International Journal of Academic Medicine. 2016;2(2):159-172
  134. Jones JM et al. Has the trend of declining blood transfusions in the United States ended? Findings of the 2019 national blood collection and utilization survey. Transfusion. 2021;61:S1-S10
  135. Lopes MG et al. Big data in transfusion medicine and artificial intelligence analysis for red blood cell quality control. Transfusion Medicine and Hemotherapy. 2023;50(3):163-173
  136. Flatt JF, Bawazir WM, Bruce LJ. The involvement of cation leaks in the storage lesion of red blood cells. Frontiers in Physiology. 2014;5:81813
  137. Melzak KA et al. Hemolysis pathways during storage of erythrocytes and inter-donor variability in erythrocyte morphology. Transfusion Medicine and Hemotherapy. 2021;48(1):39-47
  138. Aung HH et al. Procoagulant role of microparticles in routine storage of packed red blood cells: Potential risk for prothrombotic post-transfusion complications. Pathology. 2017;49(1):62-69
  139. Barshtein G, Manny N, Yedgar S. Circulatory risk in the transfusion of red blood cells with impaired flow properties induced by storage. Transfusion Medicine Reviews. 2011;25(1):24-35
  140. Nikouline A et al. Machine learning in the prediction of massive transfusion in trauma: A retrospective analysis as a proof-of-concept. European Journal of Trauma and Emergency Surgery. Jan 2024;24(1):1-9
  141. Seong H et al. Explainable artificial intelligence for predicting red blood cell transfusion in geriatric patients undergoing hip arthroplasty: Machine learning analysis using national health insurance data. Medicine. 2024;103(8):e36909
  142. Liu L-P et al. Machine learning for the prediction of red blood cell transfusion in patients during or after liver transplantation surgery. Frontiers in Medicine. 2021;8:632210
  143. Sibinga CTS. Artificial intelligence in transfusion medicine and its impact on the quality concept. Transfusion and Apheresis Science. 2020;59(6):103021
  144. Smit Sibinga CT. Transfusion Medicine: From AB0 to AI (Artificial Intelligence). Brisbane, Australia: Exon Publications; Apr 2022. pp. 107-119
  145. Bilgic E et al. Exploring the roles of artificial intelligence in surgical education: A scoping review. The American Journal of Surgery. 2022;224(1):205-216
  146. Spirnak JR, Antani S. The need for artificial intelligence curriculum in military medical education. Military Medicine. 1 May 2024;189(5-6):954-958
  147. Park JJ, Tiefenbach J, Demetriades AK. The role of artificial intelligence in surgical simulation. Frontiers in Medical Technology. 2022;4:1076755
  148. Kotwal RS et al. The effect of prehospital transport time, injury severity, and blood transfusion on survival of US military casualties in Iraq. Journal of Trauma and Acute Care Surgery. 2018;85(1S):S112-S121
  149. Howard JT et al. Reexamination of a battlefield trauma golden hour policy. Journal of Trauma and Acute Care Surgery. 2018;84(1):11-18
  150. Maddry JK et al. Impact of prehospital medical evacuation (MEDEVAC) transport time on combat mortality in patients with non-compressible torso injury and traumatic amputations: A retrospective study. Military Medical Research. 2018;5:1-8
  151. Dolan CP et al. Prolonged field care for traumatic extremity injuries: Defining a role for biologically focused technologies. npj Regenerative Medicine. 2021;6(1):6
  152. Birch E et al. Trauma THOMPSON: Clinical decision support for the frontline medic. Military Medicine. 2023;188(Suppl. 6):208-214
  153. Scalese RJ et al. Simulation-based education improves military trainees’ skill performance and self-confidence in tourniquet placement: A randomized controlled trial. Journal of Trauma and Acute Care Surgery. 2022;93(2S):S56-S63
  154. Mank VMF et al. Improving self-confidence of military medical providers with joint procedure simulation: A pilot study. Military Medicine. 2023;188(1-2):e382-e387
  155. Jin X et al. Development and validation of a mathematical model to simulate human cardiovascular and respiratory responses to battlefield trauma. International Journal for Numerical Methods in Biomedical Engineering. 2023;39(1):e3662
  156. Goolsby C, Deering S. Hybrid simulation during military medical student field training—A novel curriculum. Military Medicine. 2013;178(7):742-745
  157. Niu A et al. The effectiveness of simulation-based training on the competency of military nurses: A systematic review. Nurse Education Today. 2022;119:105536
  158. Crump C, Schlachta-Fairchild LM. Achieving a trusted, reliable, AI-ready infrastructure for military medicine and civilian care. In: Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II. SPIE; 2020. Available from: https://www.spiedigitallibrary.org/proceedings/Download?fullDOI=10.1117/12.2557514
  159. Mackenzie CF et al. Virtual reality and haptic interfaces for civilian and military open trauma surgery training: A systematic review. Injury. 2022;53(11):3575-3585
  160. Stawicki SP. Artificial Intelligence in Medicine and Surgery: An Exploration of Current Trends, Potential Opportunities, and Evolving Threats-Volume 1. London, UK: IntechOpen; 2023
  161. Stawicki SP et al. Introductory chapter: Artificial intelligence in healthcare–where do we go from here? In: Artificial Intelligence in Medicine and Surgery-An Exploration of Current Trends, Potential Opportunities, and Evolving Threats-Volume 1. London, UK: IntechOpen; 2023
  162. Defense Innovation Board. AI principles: Recommendations on the ethical use of artificial intelligence by the Department of Defense. Supporting Document, Defense Innovation Board. 2019;2:3
  163. Schoenemeyer J. The Implementation of Lethal AI Systems on the Battlefield and its Implication on Warfare. 2023. Available from: https://dspace.cuni.cz/bitstream/handle/20.500.11956/186846/120457821.pdf?sequence=1
  164. Oniani D et al. Adopting and expanding ethical principles for generative artificial intelligence from military to healthcare. npj Digital Medicine. 2023;6(1):225
  165. Kavitha M et al. Systematic view and impact of artificial intelligence in smart healthcare systems, principles, challenges and applications. Machine Learning and Artificial Intelligence in Healthcare Systems. Jan 2023;3(1):25-56
  166. Lambert WC et al. Artificial intelligence and the scientific method: How to cope with a complete oxymoron. Clinics in Dermatology. 2024;42:275-279
  167. Choung H, David P, Ross A. Trust and ethics in AI. AI & Society. 2023;38(2):733-745
  168. Shin D. The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies. 2021;146:102551
  169. Zuiderveen Borgesius F. Discrimination, Artificial Intelligence, and Algorithmic Decision-Making. Strasbourg, France: Council of Europe, Directorate General of Democracy; 2018. p. 42
  170. Munn L. The uselessness of AI ethics. AI and Ethics. 2023;3(3):869-877
  171. Rességuier A, Rodrigues R. AI ethics should not remain toothless! A call to bring back the teeth of ethics. Big Data & Society. 2020;7(2):2053951720942541
  172. Lauer D. You cannot have AI ethics without ethics. AI and Ethics. 2021;1(1):21-25
