Open access peer-reviewed chapter - ONLINE FIRST

Perspective Chapter: Future Impact of Artificial Intelligence on Medical Subspecialties – Dermatology and Neurology

Written By

Nadia Abidi, Zehara Abidi, Brian Hanrahan, Mini Parampreet Kaur, Yemesrach Kerego, Anna Ng Pellegrino and Venkatraman Thulasi

Submitted: 06 January 2024 Reviewed: 25 June 2024 Published: 26 July 2024

DOI: 10.5772/intechopen.115279

From the Edited Volume

Artificial Intelligence in Medicine and Surgery - An Exploration of Current Trends, Potential Opportunities, and Evolving Threats - Volume 2 [Working Title]

Dr. Stanislaw P. Stawicki

Abstract

Without a doubt, academic medicine and research have been greatly impacted by the recent introduction of artificial intelligence (AI) machines and software programs. In subspecialties such as dermatology and neurology, AI systems have been integrated to assist in managing workflow in office and clinical settings. This chapter reviews the most up-to-date AI tools for clinical applications in dermatology and their impact on telemedicine and medical education. Our authors also comment on challenges with AI in dermatology, particularly with consumer trust. Within the field of neurology, the authors examine the impact of AI technologies on imaging interpretation, electroencephalography (EEG) interpretation, the neuro-intensive care unit (ICU) setting, stroke events, epilepsy, and neurodegenerative conditions. We conclude the chapter with a brief overview of job security and the implications for medical professionals working more closely with AI in the future.

Keywords

  • artificial intelligence
  • machine learning
  • deep learning
  • neurology
  • dermatology
  • telemedicine
  • total body photography
  • dermoscopy
  • electroencephalography
  • neuroimaging
  • epilepsy
  • neurodegenerative disease
  • cerebrovascular accidents
  • neuro-critical care

1. Introduction

The term artificial intelligence (AI) was coined by John McCarthy in the 1950s to describe machines, created through science and engineering, that have the potential to be as complex and smart as humans [1]. It was not until the 1970s that AI was incorporated and accepted in medicine via the MYCIN software program at Stanford University [2, 3]. Seventy years later, McCarthy’s vision of AI has come closer to mimicking human thinking, as AI technologies evolved into machine learning (ML) capabilities and, further still, deep learning (DL) methods.

In machine learning, machines are trained to perform tasks based on data rather than explicit programming instructions [4]. With time, ML paved the way to the concept of deep learning, where artificial neural networks (ANN) are created to mimic the neural networks of the human brain [5]. LeCun et al. [6] describe DL as a complex layered system of algorithms: raw data fed into the first layer is transformed into a new representation at each subsequent layer, enabling the network to learn highly complex functions.
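This layer-by-layer re-representation of raw data can be illustrated with a minimal, purely illustrative sketch (NumPy only, with random untrained weights; real networks learn their weights from data):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    # One dense layer with a ReLU nonlinearity; the weights here are random
    # and untrained, purely to show the layer-to-layer transformation.
    w = rng.normal(size=(x.shape[0], n_out))
    return np.maximum(0.0, w.T @ x)

raw = rng.normal(size=8)   # "raw data" fed into the first layer
h1 = layer(raw, 16)        # first learned representation
h2 = layer(h1, 16)         # more abstract representation
out = layer(h2, 4)         # final representation used for the task

print(raw.shape, h1.shape, h2.shape, out.shape)
```

Each call re-represents its input; stacking many such layers (with trained rather than random weights) is what lets DL models learn highly complex functions.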

Within the past decade, subspecialties in internal medicine have turned to AI algorithms that focus on clinical decision support systems (CDSS). These programs serve as tools for practitioners to help manage information and make clinical decisions [7, 8]. Alerts, reminders, clinical guidelines, condition-specific order sets, data reports, documentation templates, drug dosage calculators, diagnostic support, and databases all fall under the CDSS umbrella [9].

Our authors wish to discuss the impact of AI on two specific medical subspecialties: dermatology and neurology. In dermatology, DL applications have provided tools for practitioner use since the 1990s. As an example, advancements in DL and convolutional neural network (CNN) software have enabled high-resolution image processing so that cancerous lesions can be classified and managed. In neurology, AI applications in neuroimaging and EEG help diagnose seizure types and neuromuscular disorders. In addition to highlighting practical uses and applications, we also discuss the limitations and challenges of future AI use, particularly regarding job security and consumer trust.


2. Artificial intelligence in dermatology

2.1 Applications of AI by dermatologists and non-specialty physicians

The use of AI in dermatology dates back to the 1990s, when classical machine learning (ML) was used to differentiate melanoma from benign pigmented tumors using clinical images [10]. Deep learning was introduced in 2012 as a subset of ML, helping to revolutionize image processing capabilities [11]. This led to an explosion in the development and research of AI applications within the field, with current applications in clinical diagnosis, dermatopathology, disease risk assessment/outcome prediction, practitioner education, and practice flow.

In artificially controlled study settings, AI has shown exciting potential to improve the diagnosis and management of skin cancers [12]. Despite this potential and rapid progress on the research front, the implementation of AI in clinical practice, particularly in dermatology, faces challenges. One significant hurdle is the extensive clinical data needed to develop and refine AI diagnostic tools.

AI is currently used in an adjunct supportive role by some dermatologists alongside total body photography (TBP). Total body photography is often used in higher-risk patients, such as those with hundreds of moles or a history of melanoma. These programs assist in sorting and storing visual images of nevi and comparing images over time (across multiple patient visits) to look for changes [13]. Market-approved AI software such as MoleAnalyzer Pro (FotoFinder), Vectra WBS 360 (Canfield), and Dermoscan X2 are systems used for melanocytic and non-melanocytic lesion analysis, tracking, and total body mapping [13, 14]. In a prospective clinical study by Winkler et al. [14], the sensitivity, specificity, and mean accuracy of melanocytic lesion classification showed statistically significant improvements when MoleAnalyzer input was used alongside clinician input.

Other market-approved AI software systems and apps include Automatically Identify Skin Disorders (AIDERMA) by Dingxiangyuan, Visia Skin by Canfield, and Antera 3D by Miravex, which are used to evaluate skin quality, such as wrinkles, texture, pigmentation, and redness [15]. For detection and evaluation of specific diagnoses, there is Kin-App by Swiss4ward to diagnose hand eczema, and Neurodermatitis Helferin|Nia by Nia Health, which focuses on neurodermatitis [15]. AI-powered technology and applications also serve as useful educational tools for providers in training and in practice. One such novel tool, a mobile game called Top Derm (Level Ex), utilizes an AI-based image generator to create accurate imagery of skin disorders based on a pooled dataset of images [16]. MelaFind was a previously FDA-approved AI-based total body imaging system used clinically to provide additional information on suspicious pigmented lesions. Earlier data had shown increased sensitivity and specificity in the recognition of melanomas when it was used in combination with physician evaluation. However, it was discontinued in 2017 after continued use showed its clinical benefit, in terms of increased sensitivity and specificity of melanoma recognition, to be negligible [17].

AI is also currently used in a supportive role in dermoscopy [18], wherein a handheld magnifying instrument called a dermatoscope is used to noninvasively differentiate types of skin cancer by examining lesions within the first few layers of the skin under polarized and non-polarized light. Many related smartphone applications are marketed to and used by non-specialty physicians and healthcare professionals, but they are not FDA-approved. Furthermore, a 2020 systematic review of nine studies covering six AI-based smartphone applications developed to assess the risk of skin cancer in suspicious lesions found that they had poor and variable performance and detected melanoma unreliably [18].

Perhaps the most widely incorporated AI-based technologies within dermatology include those dealing with practice flow and patient education. A time-consuming aspect of day-to-day medical practice involves the prior authorization and appeal process that is often required to ensure coverage of prescribed medications by a patient’s health insurance. Companies such as Waystar provide, for a fee, automation of the prior authorization process through their AI software. ChatGPT (OpenAI), a free AI system, has been widely used in practice to create letters of medical necessity for appealing denials in coverage of medications as well as to create educational patient information and post-procedure care handouts.

2.2 Effect of AI on the daily workflow of a dermatologist

Because dermatology is a pattern recognition-based specialty, AI can transform the work of a dermatologist both inside and outside the clinical setting.

One potential implementation of AI in dermatology is improving the accuracy of diagnosis of skin lesions, thus guiding treatment decisions. Similar to how a dermatologist accumulates the knowledge to make an accurate diagnosis through exposure to various cases over time, convolutional neural networks (CNN)––a type of DL software––use image classification algorithms to generate diagnoses by analyzing new images and drawing parallels to a database of “learned” images [19]. Several studies have assessed the accuracy of such algorithms, particularly for skin cancers. A study by Esteva et al. [20] analyzed the accuracy of one such CNN, trained on 129,450 images of 2032 diseases, in the diagnosis of biopsy-proven skin cancers; the study team found the CNN’s performance comparable to that of 21 board-certified dermatologists [20]. Gao et al. [21] designed an automated light microscope that utilizes DL to aid in the detection of fungus on potassium hydroxide prep slides of skin, nail, and hair samples. In a single study, it was found to be as skillful as human inspectors for skin and nail samples, but not for hair samples [21]. Although it has not yet been tested or used in a clinical setting, this technology could enhance the speed and sensitivity of a tool used daily in clinical dermatology.
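The core operation behind such CNNs, sliding a small filter across an image to detect local patterns, can be sketched in a few lines. This is a toy with a hand-written edge filter and a synthetic image; a trained CNN learns thousands of such filters from labeled clinical images:

```python
import numpy as np

def conv2d(image, kernel):
    # Valid-mode 2D cross-correlation: slide the kernel over the image
    # and record its response at each position (a "feature map").
    kh, kw = kernel.shape
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 8x8 "image": dark left half, bright right half.
img = np.zeros((8, 8))
img[:, 4:] = 1.0

# Hand-written vertical-edge detector; a real CNN learns such filters.
kernel = np.array([[-1.0, 1.0],
                   [-1.0, 1.0]])

fmap = conv2d(img, kernel)
print(fmap.shape, fmap.max())  # strongest response along the boundary
```

Stacking many such convolution layers, with learned filters and nonlinearities in between, is what lets a CNN progress from detecting edges to recognizing whole lesion patterns.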

AI may also help augment clinical assessment during in-person encounters with the use of reflectance confocal microscopy (RCM) [22]. RCM is a noninvasive imaging modality that allows for the examination of layers of skin down to the superficial dermis at the cellular level. Training for RCM image acquisition and interpretation is not widely available, and interpretation is known to be somewhat subjective [22]. Developing an AI-based algorithm for use in RCM could allow for widespread use of the technology and create an objective, more reproducible interpretation. This standardization would invariably help in categorizing features seen on RCM for future studies.

In addition to aiding in the diagnosis of a lesion, AI can also improve a dermatologist’s clinical workflow in tele-dermatology. Following the COVID-19 pandemic, telemedicine has become a standard of care that brings its own unique set of advantages and challenges in dermatology. Securing an in-person dermatology appointment may be challenging for many patients: at many clinics it may take months to secure an appointment, and many insurance carriers require a referral from a primary care physician first. As such, primary care providers (PCPs) are often the first to assess skin lesions. In these cases, AI tools can be useful in appropriately triaging patients for referral to a dermatologist, especially because the early detection of skin cancer is crucial to favorable patient outcomes. A 2022 study by Majidian et al. [23] found no statistically significant difference between an AI triage software––when accounting for the top three differential diagnoses––and a panel of three dermatologists in the correct evaluation of 100 images of skin lesions. However, much like non-dermatology providers, the AI software had a harder time correctly identifying seborrheic keratoses (SK) as benign lesions. This highlights the need for further training data for the AI software, as well as further PCP education on the identification of SKs and, more generally, on the shortfalls of such software, to reduce unnecessary referrals.

AI algorithms may also be useful in verifying patient image quality before a tele-dermatology visit. Manual review of patient photos before a telehealth visit requires a significant portion of a clinician’s time, and poor-quality photos not only disrupt clinic workflow but can also delay care or necessitate an in-person visit. AI applications could assist patients in ensuring that good-quality photos are taken, for example by notifying the patient if the image is blurry or the lighting is inadequate. Currently, one such application, TrueImage-SEAL, is under clinical investigation [24].

AI also appears to be a promising tool for dermatopathologists. Previous retrospective studies have shown that CNN models have had high accuracy in the histopathological differentiation of melanoma from nevi and basal cell carcinomas (BCC) from normal tissue (89 and 91.4% accuracy, respectively) [25, 26]. However, another study evaluating the real-world application of three CNNs to classify 13,537 histopathologic slides into four classes (basaloid, squamous, melanocytic, and other) only found a 78% overall accuracy. This outcome reinforces the need for the expansion of datasets for training and suggests the use of AI software as an adjunct to clinician evaluation rather than a fully independent tool [27].

2.3 AI applications in the education and training of medical students and dermatology residents

Gaining valuable knowledge and skills in AI has become unavoidable for medical students and residents in the 2020s [28]. AI tools can be adapted within the education curriculums of those in training, leading to a higher quality of patient care in the future. Software can be developed to provide a more individualized, objective, and efficient approach to evaluating a trainee’s knowledge in written exams and mock case scenarios [29, 30]. Applications significant to training and education in dermatology include image classification, image-based diagnosis, ulcer assessment, disease prediction, pathology and gene expression, and clinical decision support [31, 32]. Opportunities for further AI integration should focus on more sophisticated patient simulations, personalized learning plans, and real-time feedback during clinical rotations.

2.4 Challenges of AI in dermatology

The rapid development of dermatological AI brings with it several challenges and liabilities [33].

  1. Data access: Developers face difficulties in obtaining high-quality image data required to create effective AI tools to help in dermatology. Clear and well-captured images are essential for accurate diagnosis and analysis. Low-quality and incorrectly labeled images can lead to erroneous training of AI models [34].

  2. Bias: Limitations and bias in the data used to develop DL software can reduce patient safety and effectiveness, leading to treatment disparities. Data bias arises from imbalances in the training data; for example, image datasets may lack representation of diverse skin types. Such systems can reflect and amplify the inequalities prevalent in society [35].

  3. Scaling and integration: AI tools can be challenging to integrate into new settings because of the distinctness of institutions, changing patient populations, patient preferences, local practices, and available resources. AI tools should be customizable to fit the specific needs of each institution. A given tool may recognize only one or a narrow group of specific dermatological diseases, and some datasets may be incomplete, preventing the algorithm from accurately diagnosing rare diseases.

  4. Lack of transparency: AI tools often lack transparency, which could reduce trust in the tool. As an example, Amazon had to abandon its AI recruiting tool that was biased against women. Transparency can allow the wider AI community to download the model and tweak it.

  5. Shortage of combined medical and AI expertise: There is a scarcity of multidisciplinary personnel trained in both computer science and medicine.

  6. Privacy: As more AI systems are developed, larger amounts of data will be accessible to more individuals and organizations, heightening privacy risks and concerns. Biometric technologies such as fingerprinting and facial recognition can collect sensitive data, and data shared for one purpose, like facial photographs for dermatological diagnosis, can be repurposed for AI training without consent.

  7. Regulation and governance: There are no clear guidelines for the responsible use of AI in healthcare [36]. The law as it stands is ill-fitted for a world where diagnoses and treatment decisions are made by AI. Liability is determined by the type of AI/ML at issue, and most often an individual physician may be held liable for medical malpractice and negligence arising from AI use [37]. Medical practice will evolve over the next few decades, and the development of laws pertaining to AI/DL use in medicine will lag behind, tackling issues case by case [38].

Because AI is not human and hence unable to take accountability for its actions, the term “trustworthiness” better suits customers’ reliance on AI [39]. Multiple stakeholders, including healthcare providers, patients, and AI algorithms, influence this trustworthiness [40]. As patient autonomy is a fundamental principle in medical ethics, the right to reject AI assistance crucially affects system trustworthiness [41]. Thus, to foster AI innovation, development, and implementation in the medical sphere, a comprehensive understanding of consumer trust and its role as a moderator between AI and its practical healthcare applications is vital.

2.5 Consumer trust toward AI in dermatology

AI has long been a relevant area of dermatological research and discussion. With medical advances, enhanced quality of care, and reduced work burden all attributed to AI adoption, patients and physicians alike stand to benefit from the continued development and integration of such software in dermatology [42, 43]. Studies suggest that overzealous expectations among patients, shaped by mainstream media and science fiction, dramatically increase patient enthusiasm around AI [44]. Nevertheless, amid the excitement, the drawbacks linked to AI, including threats to privacy, biases, and loss of human touch, raise just as much skepticism, which can affect consumers’ trust [43, 45].

2.5.1 Patient perceptions

Patients may approach the integration of AI with optimism, as studies have projected higher AI trust in dermatology than in high-stakes fields such as radiology and surgery [46]. They anticipate that AI-assisted medicine will increase accuracy and precision in the diagnosis and treatment planning of dermatologic cases, especially in melanoma [47, 48, 49]. Moreover, with the perception that AI will free up physicians’ time from tasks such as note-taking, patients anticipate a much-needed increase in physician-patient interaction time. This is expected to empower patients through a better understanding of their healthcare needs and custom-tailored medical attention, thus improving their experience in the long run [50]. Patients also envision, as a core component of their expectations concerning AI integration, the convenience of being able to ask questions without fear of being perceived as burdensome or intrusive [45, 47, 51]. Lastly, patients may perceive an AI system as impartial and immune to the stigmas that can be linked to discussing sensitive health issues with human physicians [45, 49].

Despite some positive attitudes toward AI use, there are patient concerns about embracing new technologies. Most significant is the perception by patients of the loss of human contact in medicine [45, 47, 48, 50]. This holds considerable weight, as there has been a reduction in the satisfaction level regarding medical care in the past decade, particularly with the provider-patient relationship [52]. Interestingly, receptiveness or resistance to embracing AI among patients varies significantly based on a variety of factors such as socio-demographic backgrounds and educational level. For instance, younger, well-educated patients generally show more willingness to trust AI compared to older, less educated, or female populations [46, 53]. This is likely due to the increased technological literacy among the former demographic. Indeed, despite its ability to augment health literacy, an important area of concern lies in the concomitant potential of AI to further widen the literacy gap, particularly within communities with limited technological proficiency, creating unintended disparities [54].

Patients may also fear that as physicians rely more on AI tools, overdependence could pose a threat to the ethical and moral foundation of medicine [47, 48]. For instance, patients believe humans should be held accountable for their actions and decisions, rather than relying on an AI system to perform most of their work. Physicians should remain in control of the AI application and assume accountability in circumstances where a medical error occurs because of AI [47, 53, 55].

In addition, many patients expressed worry regarding the issue of privacy. This concern resonates profoundly considering the sensitive nature of medical information and the prevalent issue of privacy breaches in today’s digital age [45, 47, 48, 49].

2.5.2 Dermatologists’ perceptions

Dermatologists may hold an optimistic perspective on the integration of AI into medicine, foreseeing multiple advantages that could change their practice. First, AI can serve as a great emancipator from administrative and note-taking tasks. This is a crucial shift, as studies show that note-taking takes up one-fourth to one-half of a physician’s daily schedule [56]. Freed from this burden, physicians could allocate more time to advancing their medical knowledge and engaging with patients, a shift expected to enhance overall work efficiency in the medical world [42, 50, 57].

In addition to decreasing administrative tasks, physicians anticipate that AI will be a powerful tool for improving clinical decision-making, especially in fields involving large amounts of image data [42, 58]. With its capability to process extensive data, dermatologists and dermatopathologists appreciate AI’s potential to improve the early diagnosis of skin cancer [29]. This is anticipated to enhance both the overall quality of care and the timeliness of clinical decisions [50, 57]. Additionally, the user-friendly nature of AI, along with its intelligibility, exemplified in virtual patient simulations, makes it an invaluable educational tool in physicians’ quest for learning and skill development [59].

As modern clinicians are trained to adhere to evidence-based medicine, challenges arise in the implementation of AI in healthcare. In dermatology, AI studies have focused primarily on skin tumors, with little attention given to inflammatory and autoimmune conditions [57, 60]. Moreover, there is concern about bias in AI software, and AI cannot justify the thought process underlying its output and decisions [60]. For instance, a DL system could be trained predominantly on data from Caucasian patients; false diagnoses and misclassification of skin lesions can then occur if the software is applied to patients of different backgrounds [43, 57]. It may be easy for a physician to base clinical decision-making on what is generated by AI programs [58]. However, therein lies the ultimate lack of accountability and transparency in what AI software presents as valid, “truthful” data.

To summarize, AI is a promising field in dermatology with the potential to revolutionize healthcare practices. However, the incorporation of AI in dermatological medicine has yielded contrasting reactions. In general, physicians and patients have a slightly more positive perception of AI in dermatology than in other specialties, but critical areas of concern keep them from completely trusting it. Addressing and mitigating these concerns through continued research and regulation is crucial for deploying AI as a tool and harnessing the benefits it promises.


3. Artificial intelligence in neurology

There is likely no subspecialty more excited about the potential advancements in diagnostic accuracy and clinical decision-making with AI than neurology. Artificial neural networks are inspired by biological neural networks and come closer to organic networks of thinking than past machine learning algorithms. In this section of the chapter, we explore how researchers have used AI to supplement clinical decision-making for various neurological disorders and how it may also aid in the interpretation of neuroimaging and other neurodiagnostic studies, most notably electroencephalography (EEG).

3.1 Image interpretation: neuroradiology

Computer-aided diagnosis (CAD) and abnormality detection have had applications in radiology for decades, relying on simple machine learning rather than DL networks to achieve diagnostic accuracy [61, 62]. This technology encompasses the concept of radiomics, defined as the ability to analyze and mine high volumes of data from medical images and then develop a model that aids in clinical decision-making.

While most would agree that the most exciting use of AI in neuroimaging would be image interpretation itself, the first uses of such programs will be more limited. Instead, researchers have used DL algorithms to reduce the time needed to acquire MRI sequences and to improve the resolution of lower-quality imaging [63, 64]. Similar work has been done for CT imaging [65]. Interestingly, one group of researchers was able to generate synthetic CT scans based on MRI data, effectively creating imaging of a different modality [66]. The applicability of such cross-modal synthetic image creation is unclear.

An ideal master computerized decision support (CDS) system for neuroimaging analysis would not only differentiate between various anatomical structures and brain matter but also translate its findings into clinically valuable information. The former has been accomplished and validated in the most foundational way by a diffusion-weighted imaging (DWI) sequence-based deep learning segmentation algorithm that was able to differentiate between white matter, gray matter, cerebrospinal fluid, ventricles, and six subcortical structures (putamen, pallidum, hippocampus, caudate, amygdala, and thalamus) [67]. Most clinical correlate models to date have been built for specific pathologies and would not generalize into a nonspecific “neuroradiology” CDS system. For example, models built to identify cerebral aneurysms and predict their risk of rupture are trained on imaging that contains the pathology of interest [68, 69, 70]. Similar models have been built for the detection of brain tumors and for tracking the progression or regression of metastatic brain disease [71, 72]. Neuroradiologists should rest assured that AI models will not be taking their jobs anytime soon; significant improvements would be needed before any models could be fully automated. At best, AI neuroradiology applications will be used in a semi-autonomous manner to increase diagnostic accuracy and clinical efficiency in the short term.

3.2 EEG interpretation with DL technology

The first human EEG was performed almost 100 years ago, by the German physician Hans Berger in 1924 [73]. Over the following years, EEG pioneers like Gibbs, Lennox, and Jasper learned to interpret the brainwave activity captured by electrodes placed on the brain or scalp and to associate it clinically, in an era when the only other options for neurological testing included lumbar puncture, pneumoencephalography, and ventriculography. Even after the development of CT, MRI, cerebral angiography, and magnetoencephalography, the EEG has remained an indispensable tool for the practicing neurologist. For these reasons, one can understand why so much effort has been placed by researchers on seeing how artificial intelligence could aid in EEG analysis.

Researchers have been working for decades on developing machine learning algorithms to aid in the identification of pathologic findings on the EEG [74, 75]. Advancement in this field has been slow due to two significant factors. Firstly, the earliest EEG processing tools were built on relatively small data sets [75, 76, 77]. Secondly, variability in the quality of brainwave recordings, due to poor electrode conduction, electromyography artifact, and low signal-to-noise ratios, made it difficult for primitive models to “filter out” distracting data. The former issue has recently been addressed by researchers at Temple University Hospital (TUH), who released the TUH-EEG corpus, which at the time of this publication was the largest publicly available resource of EEG, with over 29 years’ worth of data [78].

While the earliest computer-assisted algorithms for EEG were only able to provide insight into the most basic aspects of brainwave activity, large EEG data sets like the TUH-EEG corpus have been used to develop machine learning algorithms to detect epileptic seizures and epileptiform discharges (seizure tendency) [79, 80]. This has been accomplished in one of two ways: through supervised learning or unsupervised learning. In supervised learning, an algorithm is “trained” to detect abnormal findings based on previously marked ictal data. The algorithm can then be used to detect abnormal findings on new data. In unsupervised learning, the algorithm automatically detects EEG data trends and outliers. This technique detects interictal epileptiform discharges based on being “outliers” from the remaining data. AI algorithms developed for identifying epileptiform discharges (via companies Encevis, SpikeNet, and Persyst) are commercially available and have been shown to have good specificity, sensitivity, and accuracy when attending oversight is provided. They can also reduce the reading time per study. However, without physician oversight, all three have a specificity too low for clinical implementation in a completely autonomous manner [81].
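The contrast between the two learning approaches can be sketched on toy, synthetic per-window EEG features (illustrative numbers only, not a clinical detector): the supervised detector learns from labeled ictal and background windows, while the unsupervised one simply flags statistical outliers without any labels.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy one-number feature per EEG window (e.g., peak amplitude);
# real systems extract many features per window.
background = rng.normal(loc=1.0, scale=0.2, size=200)  # labeled normal windows
ictal = rng.normal(loc=5.0, scale=0.2, size=10)        # labeled epileptiform windows

# Supervised: learn class centroids from labeled training data,
# then classify a new window by its nearest centroid.
c_bg, c_ictal = background.mean(), ictal.mean()
def supervised_detect(x):
    return abs(x - c_ictal) < abs(x - c_bg)

# Unsupervised: no labels; flag windows that are statistical
# outliers (large z-score) relative to the whole recording.
recording = np.concatenate([background, ictal])
z = (recording - recording.mean()) / recording.std()
flags = np.abs(z) > 2.5

print(supervised_detect(4.8), supervised_detect(1.1), int(flags.sum()))
```

Here both approaches isolate the same ten spike-like windows; in practice the trade-off is that supervised detectors need expert-marked training data, while unsupervised detectors depend on abnormal windows actually being rare outliers.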

A group in Norway developed a neural network model called SCORE-AI (Standardized Computer-based Organized Reporting of EEG-Artificial Intelligence), which classifies EEG recordings into four clinically relevant categories: (1) epileptiform focal, (2) epileptiform generalized, (3) non-epileptiform focal, and (4) non-epileptiform generalized [82]. This model achieved a degree of accuracy comparable to that of human experts [82], making it one of the first models shown to achieve expert-level performance in reading EEGs without provider oversight. In the future, a model like SCORE-AI could provide remote and underserved regions with EEG interpretation where expertise in EEG analysis is unavailable.

Similar machine-learning techniques have been applied to intracranial EEG recordings. This EEG monitoring modality is particularly helpful for patients with medically refractory epilepsy being screened for epilepsy surgery. The goal of this type of EEG is to locate the seizure onset zone with the highest degree of confidence before performing a resection, thus achieving the greatest chance of seizure reduction afterward. AI algorithms have used interictal findings like high-frequency oscillations (HFO), interictal epileptiform discharges (IED), and phase-amplitude coupling (PAC) as markers to develop a map of the seizure onset zone [83]. A neural network trained by Grigsby et al. [84] used not only electrographic data but also clinical, neurophysiologic, neuroimaging, and surgical data from 65 patients undergoing anterior temporal lobectomy to predict surgical outcomes. This model had a sensitivity of 80% and a specificity of 83.3% in predicting a seizure-free outcome after resection. Such models could be significantly beneficial in giving patients the most accurate prognosis for potential surgical intervention for their medically refractory epilepsy, although limited generalizability to patients with non-temporal lobe epilepsy will restrict their use.

Many AI models developed for analyzing EEG have been limited to adult patient populations, excluding pediatric patients from future applications because the cortical activity of children differs significantly from that of adults. A group in Japan has devised an AI technique for detecting interictal discharges on pediatric scalp EEGs [85]; impressively, the model’s diagnostic accuracy, sensitivity, and specificity all exceeded 97%. The ideal model would take patient age into account in its analysis of EEG data.

One of the more interesting potential applications of AI analysis of EEG data is seizure prediction; the ability to predict seizures before they occur could have drastic implications for the morbidity and mortality of patients with epilepsy. Algorithms for seizure prediction rely on pre-ictal EEG patterns that precede seizures [86]. Ideally, a clinically deployed model would review a patient’s EEG data in real time via a neuromodulation device (such as a responsive neurostimulator or deep brain stimulator) and predict seizures early enough for the patient to take appropriate safety measures (sitting or lying down, taking an abortive seizure medication, etc.). Neuromodulation devices could then also deliver a “stimulation treatment” to reduce the probability of the seizure occurring.

While most AI applications for EEG analysis have focused on automated seizure detection or baseline EEG interpretation, other algorithms have been built to interpret EEG data for diagnosing other neurological disorders. For example, a group based in Italy developed an explainable artificial intelligence (“xAI”) model to track EEG changes in patients who converted from mild cognitive impairment (MCI) to Alzheimer’s disease (AD). Segments of high-density EEG recordings were mapped into frequency maps by power spectral analysis, and these maps were used as input to a convolutional neural network (CNN) trained to detect changes suggesting a conversion from MCI to AD. The analysis revealed increased delta-frequency activity in the left temporal, left frontal, central-frontal, and parietal regions of patients who converted to AD [87].
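
The preprocessing step described above, turning an EEG segment into a band-power “frequency map” suitable for a CNN, can be sketched generically. The sampling rate, channel count, and band edges below are illustrative assumptions, not the xAI pipeline itself, and the signal is white noise standing in for real EEG.

```python
import numpy as np

fs = 256                                  # sampling rate in Hz (assumed)
rng = np.random.default_rng(1)
segment = rng.normal(size=(19, 4 * fs))   # 19 channels x 4 s (noise stand-in)

# Power spectral density per channel via the periodogram.
psd = np.abs(np.fft.rfft(segment, axis=1)) ** 2 / segment.shape[1]
freqs = np.fft.rfftfreq(segment.shape[1], d=1 / fs)

# Mean power in conventional EEG bands -> a (channels x bands) map,
# the kind of frequency map a CNN could take as input.
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
band_power = np.column_stack([
    psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
    for lo, hi in bands.values()
])
print(band_power.shape)  # (19, 4)
```

The resulting 19 x 4 array is what a downstream classifier would consume; a finding such as elevated delta power in specific regions corresponds to large values in particular rows of the delta column.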

3.3 AI applications within neurology subspecialties

Despite the relatively small number of neurologists in the United States (roughly 11,340 total per the U.S. Bureau of Labor Statistics), the degree of further subspecialization available to these providers is impressive [88]: the American Board of Psychiatry and Neurology recognizes nine subspecialties, and the United Council for Neurologic Subspecialties recognizes an additional nine (18 total) [89, 90]. For this reason, we felt it would be most beneficial to review some of the most interesting ways AI is being utilized within particular subspecialties.

3.3.1 Neuro-critical care

The ICU houses the most critically ill patients in a hospital. These patients usually have active disease processes across multiple organ systems, and providers working in this environment face a high cognitive burden and a risk of decision errors due to information overload [91, 92]. Artificial intelligence and big data science have been applied in this environment to aid early diagnosis/detection of pathologic conditions and outcome prognostication. These models have been built and implemented largely as CDS systems that help clinicians act proactively to prevent a negative outcome or minimize morbidity and mortality [93]. One Bayesian artificial neural network (BANN) was able to predict hypotensive events 15 minutes before they occurred with a specificity of 91%, although with low sensitivity [94]. A predictive model built with multivariate logistic regression and Gaussian processes, based on analysis of intracranial pressure and mean arterial blood pressure, has been shown to predict clinically significant increases in intracranial pressure 30 minutes before they occur [95]. The disease-specific applications of CDS systems in the neuro-ICU are limited only by the variability of disease seen there.
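
The general pattern behind such predictive CDS models, summarizing a trailing window of physiologic data and learning to predict an event minutes ahead, can be sketched as follows. The features, the synthetic pressure values, and the labeling rule are illustrative assumptions, not the published BANN or Gaussian-process models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def window_features(icp, mean_arterial):
    # Simple trailing-window summaries; real models use richer features.
    return [icp.mean(), icp[-1] - icp[0], mean_arterial.mean()]

# Synthetic 30-minute windows: rising, elevated ICP precedes an event.
X, y = [], []
for _ in range(300):
    event = rng.random() < 0.5
    base, trend = (25.0, 0.3) if event else (12.0, 0.0)  # mmHg (toy values)
    icp = base + trend * np.arange(30) + rng.normal(0, 1, 30)
    mean_arterial = rng.normal(85, 5, 30)
    X.append(window_features(icp, mean_arterial))
    y.append(int(event))

model = LogisticRegression(max_iter=1000).fit(X, y)

# A new window with high, rising ICP is flagged ahead of the event.
risky = window_features(22.0 + 0.4 * np.arange(30), np.full(30, 80.0))
print(model.predict([risky])[0])
```

In a deployed CDS system, this prediction would fire an alert giving the clinician a lead time of minutes to intervene.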

3.3.2 Cerebrovascular events

Acute ischemic stroke (AIS) is arguably the most notable neurologic disease in which improved clinical decision-making can lead to significant reductions in morbidity and mortality. This improvement depends most directly on the efficiency of decision-making, as roughly 2 million neurons die every minute after an AIS [96]. AI-based clinical decision support systems for intra-arterial therapy that estimate the amount of cerebral tissue salvageable with intervention already exist and have been used clinically for years [97]. Most research in this patient population has focused on predicting long-term disability and final infarct size. A recent systematic review noted that publications on this subject contain several limitations and validity threats, including but not limited to a lack of standardization of outcome measures between studies, under-reporting of race and socioeconomic status, and limited clinical applicability of the created models [98]. For example, many models are built to incorporate both clinical information (located in an electronic health record (EHR)) and raw imaging information (located in an imaging database) [99]. Incorporating both requires a model that can communicate with both systems, which is difficult to generalize given the significant variability among EHRs and radiographic analysis systems. These limitations and validity threats will likely be addressed in future research, facilitating the eventual clinical translation of AI models across various aspects of AIS management.

3.3.3 Epilepsy

There are broad applications for machine-learning techniques in medical decision-making and prognostication for patients with epilepsy. One group of researchers created a multi-layer perceptron neural network (MLPNN) based on seven clinical features (age of seizure onset; presentation to clinic more than one year after disease onset; anti-epileptic drug (AED) use at the time of seeing their first epilepsy provider; and history of febrile seizures, systemic/metabolic disease, cerebrovascular disease, or intracranial tumor). This neural network predicted seizure freedom on anti-epileptic drugs with 91.1% accuracy and medically refractory epilepsy with 93% accuracy. Another study predicted medically refractory epilepsy (defined as failing at least two AEDs at therapeutic doses) at the time a patient’s first AED was initiated [100]. Such models could help guide patients toward non-pharmacological options such as neuromodulation (vagal nerve stimulation, deep brain stimulation) and surgical resection earlier than is currently seen in practice.
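
A minimal sketch of the MLPNN idea, a small neural network mapping binary clinical features to a seizure-freedom prediction, is shown below. The synthetic patients and the toy labeling rule (more risk factors, lower chance of seizure freedom) are assumptions for illustration, not the study’s data or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)

# Seven binary clinical features per synthetic patient, mirroring the
# kind of inputs the MLPNN used (this encoding is hypothetical).
n = 400
X = rng.integers(0, 2, size=(n, 7)).astype(float)

# Toy labeling rule: more risk factors -> less likely seizure-free.
risk = X.sum(axis=1) + rng.normal(0, 0.5, n)
y = (risk < 3.5).astype(int)          # 1 = seizure freedom on AEDs

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)

low_risk = np.zeros((1, 7))           # no risk factors present
high_risk = np.ones((1, 7))           # all risk factors present
print(clf.predict(low_risk)[0], clf.predict(high_risk)[0])
```

Even this toy network learns the monotone relationship between risk-factor count and outcome; the published model’s value lies in calibrating such predictions against real clinical cohorts.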

There are over 30 AEDs available for the management of epilepsy, and despite the growing number of therapies over the years, there has not been a significant reduction in the proportion of patients with medically refractory epilepsy [101]. AED selection remains a game of trial and error, relying on the physician to balance medication efficacy against side-effect profile and patient comorbidities. Despite the evaluation of several machine-learning models (multilayer perceptron, logistic regression, support vector machine, random forests, extreme gradient boosting [XGBoost], and a transformer model), none has yet performed well enough to suggest that AI is ready to be integrated into the clinical decision-making of AED selection [102].

3.3.4 Neurodegenerative disease

Alzheimer’s disease (AD) is the most common neurodegenerative disease in the world, and treatments that significantly alter disease progression have remained elusive [103, 104]. AI-related research in AD has focused on four main aspects: (1) diagnosing AD, (2) quantifying disease severity, (3) differentiating AD from other neurodegenerative diseases, and (4) predicting the rate of conversion from MCI to AD [105]. Some models have been able to distinguish patients with AD from healthy controls (HC) based solely on MRI data, while another used this single imaging modality to quantify AD severity [106, 107, 108]. Fluorodeoxyglucose (FDG) positron emission tomography (PET)-based models have also yielded high specificity and moderate sensitivity in differentiating AD from cognitively unimpaired patients [109]. A deep neural network model combining neuroimaging and clinical data can differentiate between AD, MCI, and healthy controls with an impressive degree of accuracy (98.55%), sensitivity (98.79%), and specificity (99.31%) [110].
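
The sensitivity and specificity figures quoted throughout this section derive from the binary confusion matrix; a minimal computation, using toy labels (1 = AD, 0 = healthy control) and a hypothetical model’s predictions:

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    return tp / (tp + fn), tn / (tn + fp)

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # 1 = AD, 0 = healthy control
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]   # hypothetical model output
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # 0.75 and 5/6 (~0.833)
```

Reporting both values matters clinically: a screening model needs high sensitivity (few missed AD cases), while high specificity limits false alarms among healthy controls.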

In the future, such models may help screen patients who are susceptible to conversion from HC to MCI, or from MCI to AD. This would be particularly helpful for trial screening, as many current pharmacological studies in this realm enroll patients who are either phenotypically normal or have MCI with a high risk of progressing to AD [111]. The rationale is that the neuropathological processes that cause AD begin years before the first clinical symptoms [112]; treating patients when the first pathologic changes occur, before the first phenotypic symptoms present, may prevent disease progression and improve quality of life.

4. Future implications of artificial intelligence in job security in medicine

With the ongoing incorporation and evolution of AI in medicine, some physicians are apprehensive about its ability to replace them or adversely impact their labor market [57]. The capability of AI to collect vast amounts of data from databases will change the day-to-day tasks of certain specialties. Moreover, its data-processing ability enables it to detect potential drug interactions, contraindications, and possible matches to clinical trials that might otherwise escape human detection [42, 113]. Given the emergence of AI, physicians’ responsibilities will shift toward more cognitive aspects, such as decision-making on intricate cases, discerning ethical details, and building rapport through empathy [42]. Physicians who resist these changes are likely to fall behind in their field [42, 57].

Although it is difficult to predict the future role of AI, it is unlikely to achieve full autonomy in the short term [42, 114]. One pitfall of AI that supports this claim is its lack of empathy, which is crucial given the vital role empathy plays in the physician-patient interaction [42, 115]. Studies show that patients often choose which aspects of their medical history to share with their physicians according to the level of empathy they perceive from their care provider, and their compliance with recommended treatments and advice also depends indirectly on empathy and other patient-physician interaction factors [115]. Although there has been some work on artificial empathy imitating human interactions, as in the case of humanoid AI robots, it is important to recognize that AI is not human; the response it generates is simulated. For a patient in genuine need of care, there is an ethical dilemma regarding whether simulated empathy is manipulative and deceptive [115].

Furthermore, AI has a recorded history of false data generation and bias [43, 57]. Deployed in real-life clinical decision-making, these inaccuracies could lead to fatal errors, and in the event that such medical errors take place, which party should be held accountable remains an area of debate [42, 114]. Studies indicate a common viewpoint among patients that physicians should stay in control of AI-driven clinical decisions and thereby assume responsibility in these circumstances [53]. Furthermore, the complex nature of many medical conditions makes a one-size-fits-all approach infeasible: subtle intricacies may not be captured in prior experience and hence not integrated into training data, making a plain algorithmic approach to medicine impractical [42, 114]. This, coupled with the loss of important aspects of the physical examination, limits AI from playing the role of a physician [47, 58].

There is considerable doubt about AI’s potential to take over the medical world outright. Commonly symbolized as a mirror, AI can meticulously process, reflect, and identify valuable trends within the data it is fed [42, 43]. Given its liability issues, inaccurate diagnoses, data shortfalls, and narrow use cases, AI in its current elementary stage is highly unlikely to outperform clinicians in most situations. Yet if physicians hold a collaborative stance and perceive AI as an ally rather than a rival competing for their jobs, there is massive potential for improved outcomes and efficiency [57]. Emphasis should be placed on utilizing AI’s vast and effective indexing to find nuanced specifics in the medical literature, to process huge amounts of data, and to build predictive models that help physicians stay well-rounded in their medical domain [116].

Physician input is invaluable, as their involvement is crucial in preventing false information and biased data from being fed to AI models, thereby ensuring integrity and accuracy. Physicians should also advocate for the incorporation of ethical guidelines and HIPAA compliance within AI tools, with particular emphasis on echoing the privacy concerns of their patients [117]. This emphasis on the ethical application of AI should extend to the AI and technological literacy of established physicians and medical students alike. There should also be a consensus among AI software developers that AI should not assume full autonomy over physicians in the immediate future.

5. Conclusions

AI is a rapidly developing entity in the medical sector. Despite the current and future improvements it may bring to those practicing dermatology and neurology, it is also set to raise discussions of privacy issues, laws and regulations, incorrect AI machine outputs, limited research, and public trust. Nevertheless, AI machinery has the potential to change the roles medical providers take, pushing them to place greater emphasis on patient-practitioner communication, complex decision-making, and creative tasks.

It may take many more generations to mold AI into something the majority of the medical field would feel comfortable using and relying on. By fostering a willingness to stay well versed in the newest technology, a readiness to tackle challenges that arise within the scope of AI, and an initiative to engage in AI innovation and development, physicians and advanced practitioners can seize the opportunity to remain leaders in their profession.

Conflict of interest

The authors declare no conflict of interest.

Abbreviations

AI: artificial intelligence
ML: machine learning
DL: deep learning
EEG: electroencephalogram
ICU: intensive care unit
TBP: total body photography
CNN: convolutional neural network
RCM: reflectance confocal microscopy
PCP: primary care physician
SK: seborrheic keratosis
BCC: basal cell carcinoma
CAD: computer-aided diagnosis
CDS: computerized decision support
DWI: diffusion-weighted imaging
CT: computed tomography
MRI: magnetic resonance imaging
TUH: Temple University Hospital
SCORE-AI: Standardized Computer-based Organized Reporting of EEG-Artificial Intelligence
HFO: high-frequency oscillations
IED: interictal epileptiform discharges
PAC: phase-amplitude coupling
MCI: mild cognitive impairment
AD: Alzheimer’s disease
AIS: acute ischemic stroke
EHR: electronic health record
BANN: Bayesian artificial neural network
MLPNN: multi-layer perceptron neural network
AED: anti-epileptic drug
HC: healthy control
PET: positron emission tomography
FDG: fluorodeoxyglucose

References

1. Hamet P, Tremblay J. Artificial intelligence in medicine. Metabolism. 2017;69S:S36-S40
2. Kulikowski CA. An opening chapter of the first generation of artificial intelligence in medicine: The first Rutgers AIM workshop, June 1975. Yearbook of Medical Informatics. 2015;10(01):227-233
3. Shortliffe EH, Axline SG, Buchanan BG, Merigan TC, Cohen SN. An artificial intelligence program to advise physicians regarding antimicrobial therapy. Computers and Biomedical Research. 1973;6(06):544-560
4. Erickson BJ, Korfiatis P, Akkus Z, Kline TL. Machine learning for medical imaging. Radiographics. 2017;37(2):505-515. DOI: 10.1148/rg.2017160130
5. Hogarty DT, Mackey DA, Hewitt AW. Current state and future prospects of artificial intelligence in ophthalmology: A review. Clinical & Experimental Ophthalmology. 2019;47(1):128-139. DOI: 10.1111/ceo.13381
6. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436-444
7. Narindrarangkura P, Kim MS, Boren SA. A scoping review of artificial intelligence algorithms in clinical decision support systems for internal medicine subspecialties. ACI Open. 2021;5(2):e67-e79
8. Alther M, Reddy CK. Clinical Decision Support Systems. Boca Raton, FL: CRC Press; 2015
9. Agency for Healthcare Research and Quality. Clinical Decision Support. Rockville, MD, USA: Agency for Healthcare Research and Quality; 2019. Available from: http://www.ahrq.gov/cpi/about/otherwebsites/clinical-decisionsupport/index.html [Accessed: 11 February, 2021]
10. Ercal F, Chawla A, Stoecker WV, Lee HC, Moss RH. Neural network diagnosis of malignant melanoma from color images. IEEE Transactions on Bio-Medical Engineering. 1994;41(9):837-845
11. Witten IH, Frank E, Hall MA, Pal CJ. Chapter 10, Deep learning. In: Data Mining. 4th ed. Amsterdam: Morgan Kaufmann; 2017. pp. 417-466
12. Hekler A et al. Superior skin cancer classification by the combination of human and artificial intelligence. European Journal of Cancer. 2019;120:114-121
13. Young AT, Vora NB, Cortez J, et al. The role of technology in melanoma screening and diagnosis. Pigment Cell & Melanoma Research. 2021;34(2):288-300
14. Winkler JK, Blum A, Kommoss K, et al. Assessment of diagnostic performance of dermatologists cooperating with a convolutional neural network in a prospective clinical study: Human with machine. JAMA Dermatology. 2023;159(6):621-627
15. Li Z et al. Artificial intelligence in dermatology image analysis: Current developments and future trends. Journal of Clinical Medicine. 2022;11(22):6826
16. Level Ex. Level Ex Expands into Dermatology with launch of Top Derm. Level Ex. 2021. Available from: https://www.levelex.com/press/level-ex-r-expands-into-dermatology-with-launch-of-top-derm/ [Accessed: 1 June, 2024]
17. Monheit G et al. The performance of MelaFind: A prospective multicenter study. Archives of Dermatology. 2011;147(2):188-194
18. Freeman K et al. Algorithm based smartphone apps to assess risk of skin cancer in adults: Systematic review of diagnostic accuracy studies. BMJ. 2020;368:m127
19. Krizhevsky A, Sutskever I, Hinton GE. ImageNet Classification with Deep Convolutional Neural Networks. Available from: http://code.google.com/p/cuda-convnet/
20. Esteva A et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115-118
21. Gao W et al. The design and application of an automated microscope developed based on deep learning for fungal detection in dermatology. Mycoses. 2021;64(3):245-251
22. Malciu AM, Lupu M, Voiculescu VM. Artificial intelligence-based approaches to reflectance confocal microscopy image analysis in dermatology. Journal of Clinical Medicine. 2022;11(2):429
23. Majidian M, Tejani I, Jarmain T, Kellett L, Moy R. Artificial intelligence in the evaluation of telemedicine dermatology patients. Journal of Drugs in Dermatology. 2022;21(2):191-194
24. Vodrahalli K et al. TrueImage: A machine learning algorithm to improve the quality of telehealth photos. Pacific Symposium on Biocomputing. 2021;26:220-231
25. Xie P et al. Interpretable classification from skin cancer histology slides using deep learning: A retrospective multicenter study. arXiv preprint. 2019. arXiv:1904.06156
26. Cruz-Roa AA et al. A deep learning architecture for image representation, visual interpretability and automated basal-cell carcinoma cancer detection. Medical Image Computing and Computer Assisted Intervention. 2013;16(Pt 2):403-410
27. Ianni JD et al. Tailored for real-world: A whole slide image classification system validated on uncurated multi-site data emulating the prospective pathology workload. Scientific Reports. 2020;10(1):3217
28. Civaner MM et al. Artificial intelligence in medical education: A cross-sectional needs assessment. BMC Medical Education. 2022;22(1):772
29. Wu S et al. Deep learning in clinical natural language processing: A methodical review. Journal of the American Medical Informatics Association. 2020;27(3):457-470
30. Ramachandran V et al. Revolutionizing dermatology residency: Artificial intelligence for knowledge and clinical milestones assessment. Clinical and Experimental Dermatology. 2023;49:732-733
31. Gomolin A et al. Artificial intelligence applications in dermatology: Where do we stand? Frontiers in Medicine (Lausanne). 2020;7:100
32. Du-Harpur X et al. What is AI? Applications of artificial intelligence to dermatology. British Journal of Dermatology. 2020;183(3):423-430
33. Artificial Intelligence in Health Care: Benefits and Challenges of Technologies to Augment Patient Care, GAO-21-7SP. 2020. Available from: http://www.gao.gov/products/GAO-21-7SP [Accessed: 30 November, 2020]
34. Kovarik C. Development of high-quality artificial intelligence in dermatology: Guidelines, pitfalls, and potential. JID Innovations. 2022;2(6):100157. DOI: 10.1016/j.xjidi.2022.100157
35. Shah M, Sureja N. A comprehensive review of bias in deep learning models: Methods, impacts, and future directions. Archives of Computational Methods in Engineering. 2024:1-13
36. Rosen H. Top Five Opportunities and Challenges of AI in Healthcare. Forbes. 2023. Available from: https://www.forbes.com/sites/forbesbusinesscouncil/2023/02/07/top-fiveopportunities-and-challenges-of-ai-in-healthcare/?sh=89f15a128056
37. Lee M. Who is Liable for Incorrect AI Diagnosis that Leads to the Wrong Treatment or Injury? Attorney at Law. 2023. Available from: https://attorneyatlawmagazine.com/legal/legal-trends/who-is-liable-for-incorrectai-diagnosis-that-leads-to-the-wrong-treatment-or-injury
38. Sankey P. AI Medical Diagnosis and Liability When Something Goes Wrong. Enable Law. 2021. Available from: https://www.enablelaw.com/news/expert-opinion/ai-medical-diagnosis-and liability-when-something-goes-wrong/ [Accessed: 25 December, 2023]
39. Ryan M. In AI we trust: Ethics, artificial intelligence, and reliability. Science and Engineering Ethics. 2020;26(5):2749-2767
40. Siau K, Wang W. Building trust in artificial intelligence, machine learning, and robotics. Cutter Business Technology Journal. 2018;31:47-53
41. Ploug T, Holm S. The right to refuse diagnostics and treatment planning by artificial intelligence. Medicine, Health Care and Philosophy. 2020;23(1):107-114
42. Ahuja AS. The impact of artificial intelligence in medicine on the future role of the physician. PeerJ. 2019;7:e7702
43. Liopyris K et al. Artificial intelligence in dermatology: Challenges and perspectives. Dermatologic Therapy (Heidelb). 2022;12(12):2637-2651
44. Derevianko A et al. The use of artificial intelligence (AI) in the radiology field: What is the state of doctor-patient communication in cancer diagnosis? Cancers (Basel). 2023;15(2):470
45. Goetz CM et al. Perceptions of virtual primary care physicians: A focus group study of medical and data science graduate students. PLoS One. 2020;15(12):e0243641
46. Yakar D et al. Do people favor artificial intelligence over physicians? A survey among the general population and their view on artificial intelligence in medicine. Value in Health. 2022;25(3):374-381
47. Nelson CA et al. Patient perspectives on the use of artificial intelligence for skin cancer screening: A qualitative study. JAMA Dermatology. 2020;156(5):501-512
48. Esmaeilzadeh P, Mirzaei T, Dharanikota S. Patients' perceptions toward human-artificial intelligence interaction in health care: Experimental study. Journal of Medical Internet Research. 2021;23(11):e25856
49. Jutzi TB et al. Artificial intelligence in skin cancer diagnostics: The patients' perspective. Frontiers in Medicine (Lausanne). 2020;7:233
50. van der Zander QEW et al. Artificial intelligence in (gastrointestinal) healthcare: Patients' and physicians' perspectives. Scientific Reports. 2022;12(1):16779
51. Frosch DL et al. Authoritarian physicians and patients' fear of being labeled 'difficult' among key obstacles to shared decision making. Health Affairs (Millwood). 2012;31(5):1030-1038
52. Neuwirth ZE. Reclaiming the lost meanings of medicine. The Medical Journal of Australia. 2002;176(2):77-79
53. Fritsch SJ et al. Attitudes and perception of artificial intelligence in healthcare: A cross-sectional survey among patients. Digital Health. 2022;8:20552076221116772
54. Bodie GD, Dutta MJ. Understanding health literacy for strategic health marketing: eHealth literacy, health disparities, and the digital divide. Health Marketing Quarterly. 2008;25(1-2):175-203
55. Longoni C, Bonezzi A, Morewedge CK. Resistance to medical artificial intelligence. Journal of Consumer Research. 2019;46(4):629-650
56. Clynch N, Kellett J. Medical documentation: Part of the solution, or part of the problem? A narrative review of the literature on the time spent on and value of medical documentation. International Journal of Medical Informatics. 2015;84(4):221-228
57. Chen M et al. Acceptance of clinical artificial intelligence among physicians and medical students: A systematic review with cross-sectional survey. Frontiers in Medicine (Lausanne). 2022;9:990604
58. Nelson CA et al. Dermatologists' perspectives on artificial intelligence and augmented intelligence - a cross-sectional survey. JAMA Dermatology. 2021;157(7):871-874
59. Consorti F et al. Efficacy of virtual patients in medical education: A meta-analysis of randomized studies. Computers & Education. 2012;59(3):1001-1008
60. Gille F, Jobin A, Ienca M. What we talk about when we talk about trust: Theory of trust for AI in healthcare. Intelligence-Based Medicine. 2020;1-2:100001
61. Castellino RA. Computer aided detection (CAD): An overview. Cancer Imaging. 2005;5(1):17-19
62. Chan HP et al. Image feature analysis and computer-aided diagnosis in digital radiography. I. Automated detection of microcalcifications in mammography. Medical Physics. 1987;14(4):538-548
63. Lin DJ et al. Artificial intelligence for MR image reconstruction: An overview for clinicians. Journal of Magnetic Resonance Imaging. 2021;53(4):1015-1028
64. Zhao C et al. Applications of a deep learning method for anti-aliasing and super-resolution in MRI. Magnetic Resonance Imaging. 2019;64:132-141
65. Kobler E et al. Variational deep learning for low-dose computed tomography. In: 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Calgary, Alberta, Canada: IEEE; 2018
66. Palmer E et al. Head and neck cancer patient positioning using synthetic CT data in MRI-only radiation therapy. Journal of Applied Clinical Medical Physics. 2022;23(4):e13525
67. Theaud G et al. DORIS: A diffusion MRI-based 10 tissue class deep learning segmentation algorithm tailored to improve anatomically-constrained tractography. Frontiers in Neuroimaging. 2022;1:917806
68. An X et al. Intracranial aneurysm rupture risk estimation with multidimensional feature fusion. Frontiers in Neuroscience. 2022;16:813056
69. Yang J et al. Deep learning for detecting cerebral aneurysms with CT angiography. Radiology. 2021;298(1):155-163
70. Zeng Y et al. Automatic diagnosis based on spatial information fusion feature for intracranial aneurysm. IEEE Transactions on Medical Imaging. 2020;39(5):1448-1458
71. Stember JN, Young RJ, Shalu H. Direct evaluation of treatment response in brain metastatic disease with deep neuroevolution. Journal of Digital Imaging. 2023;36(2):536-546
72. Stember J, Shalu H. Deep reinforcement learning classification of brain tumors on MRI. In: Innovation in Medicine and Healthcare. Singapore: Springer Nature Singapore; 2022
73. İnce R, Adanır SS, Sevmez F. The inventor of electroencephalography (EEG): Hans Berger (1873-1941). Child's Nervous System. 2021;37(9):2723-2724
74. Gotman J. Automatic recognition of epileptic seizures in the EEG. Electroencephalography and Clinical Neurophysiology. 1982;54(5):530-540. DOI: 10.1016/0013-4694(82)90038-4
75. Goldberger AL et al. PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation. 2000;101(23):E215-E220
76. Schalk G et al. BCI2000: A general-purpose brain-computer interface (BCI) system. IEEE Transactions on Biomedical Engineering. 2004;51(6):1034-1043
77. Selvaraj TG et al. EEG database of seizure disorders for experts and application developers. Clinical EEG and Neuroscience. 2014;45(4):304-309
78. Obeid I, Picone J. The Temple University Hospital EEG data corpus. Frontiers in Neuroscience. 2016;10:196
79. Abbasi B, Goldenholz DM. Machine learning applications in epilepsy. Epilepsia. 2019;60(10):2037-2047
80. Lodder SS. Computer assisted interpretation of the human EEG: Improving diagnostic efficiency and consistency in clinical reviews [thesis]. Netherlands: University of Twente; 2014
81. Kural MA et al. Accurate identification of EEG recordings with interictal epileptiform discharges using a hybrid approach: Artificial intelligence supervised by human experts. Epilepsia. 2022;63(5):1064-1073
82. Tveit J et al. Automated interpretation of clinical electroencephalograms using artificial intelligence. JAMA Neurology. 2023;80(8):805-812
83. Varatharajah Y et al. Integrating artificial intelligence with real-time intracranial EEG monitoring to automate interictal identification of seizure onset zones in focal epilepsy. Journal of Neural Engineering. 2018;15(4):046035
84. Grigsby J et al. Predicting outcome of anterior temporal lobectomy using simulated neural networks. Epilepsia. 1998;39(1):61-66
85. Kobayashi K et al. Artificial intelligence-based detection of epileptic discharges from pediatric scalp electroencephalograms: A pilot study. Acta Medica Okayama. 2022;76(6):617-624
86. Acharya UR, Hagiwara Y, Adeli H. Automated seizure prediction. Epilepsy & Behavior. 2018;88:251-261
87. Morabito FC, Ieracitano C, Mammone N. An explainable artificial intelligence approach to study MCI to AD conversion via HD-EEG processing. Clinical EEG and Neuroscience. 2023;54(1):51-60
88. Occupational Employment and Wages, May 2022, 29-1217 Neurologists. 2023. Available from: https://www.bls.gov/oes/current/oes291217.htm#nat [Accessed: 12 December, 2023]
89. UCNS Fellowship Directory. 2023. Available from: https://www.ucns.org/Online/Online/Fellowship_Directory.aspx?hkey=9d5245c6-cb3e-4788-b0ef-b2f74ed7dff5 [Accessed: 12 December, 2023]
90. Taking a Subspecialty Certification Examination. 2023. Available from: https://abpn.org/become-certified/taking-a-subspecialty-exam/ [Accessed: 28 December, 2023]
91. Rebitzer JB, Rege M, Shepard C. Influence, information overload, and information technology in health care. Advances in Health Economics and Health Services Research. 2008;19:43-69
92. Winters B et al. Diagnostic errors in the intensive care unit: A systematic review of autopsy studies. BMJ Quality and Safety. 2012;21(11):894-902
93. Medic G et al. Evidence-based clinical decision support systems for the prediction and detection of three disease states in critical care: A systematic literature review. F1000Research. 2019;8:1728
94. Donald R et al. Forewarning of hypotensive events using a Bayesian artificial neural network in neurocritical care. Journal of Clinical Monitoring and Computing. 2019;33(1):39-51
95. Guiza F et al. Early detection of increased intracranial pressure episodes in traumatic brain injury: External validation in an adult and in a pediatric cohort. Critical Care Medicine. 2017;45(3):e316-e320
96. Saver JL. Time is brain—quantified. Stroke. 2006;37(1):263-266
97. Demeestere J et al. Review of perfusion imaging in acute ischemic stroke: From time to tissue. Stroke. 2020;51(3):1017-1024
98. Akay EMZ et al. Artificial intelligence for clinical decision support in acute ischemic stroke: A systematic review. Stroke. 2023;54(6):1505-1516
99. Bravata DM, Ranta A. Artificial intelligence in clinical decisions support for stroke: Balancing opportunity with caution. Stroke. 2023;54(6):1517-1518
100. An S et al. Predicting drug-resistant epilepsy—A machine learning approach based on administrative claims data. Epilepsy & Behavior. 2018;89:118-125
101. French JA. Refractory epilepsy: Clinical overview. Epilepsia. 2007;48(Suppl. 1):3-7
102. Hakeem H et al. Development and validation of a deep learning model for predicting treatment response in patients with newly diagnosed epilepsy. JAMA Neurology. 2022;79(10):986-996
103. Qiu C, Kivipelto M, von Strauss E. Epidemiology of Alzheimer's disease: Occurrence, determinants, and strategies toward intervention. Dialogues in Clinical Neuroscience. 2009;11(2):111-128
  104. 104. Yiannopoulou KG et al. Reasons for failed trials of disease-modifying treatments for Alzheimer disease and their contribution in recent research. Biomedicine. 2019;7(4):97
  105. 105. Tăuţan A-M, Ionescu B, Santarnecchi E. Artificial intelligence in neurodegenerative diseases: A review of available tools with a focus on machine learning techniques. Artificial Intelligence in Medicine. 2021;117:102081
  106. 106. Sarraf S, Tofighi G. Classification of Alzheimer’s disease structural MRI data by deep learning convolutional neural networks. arXiv preprint. 2016. arXiv:1607.06583
  107. 107. Islam J, Zhang Y. A Novel Deep Learning Based Multi-Class Classification Method for Alzheimer’s Disease Detection Using Brain MRI Data. Cham: Springer International Publishing; 2017
  108. 108. Mahmood R, Ghimire B. Automatic detection and classification of Alzheimer’s disease from MRI scans using principal component analysis and artificial neural networks. In: 2013 20th International Conference on Systems, Signals and Image Processing (IWSSIP). Bucharest, Romania: IEEE; 2013
  109. 109. Singh S et al. Deep learning based classification of FDG-PET data for Alzheimer’s disease categories. In: Proceedings of SPIE - the International Society for Optical Engineering. Vol. 10572. San Andras Island, Columbia: SPIE; 2017
  110. 110. Jabason E, Ahmad MO, Swamy MNS. Deep structural and clinical feature learning for semi-supervised multiclass prediction of Alzheimer’s disease. In: 2018 IEEE 61st International Midwest Symposium on Circuits and Systems (MWSCAS). Windsor, ON, Canada: IEEE; 2018
  111. 111. Yiannopoulou KG, Papageorgiou SG. Current and future treatments in Alzheimer disease: An update. Journal of Central Nervous System Disease. 2020;12:1179573520907397
  112. 112. Scheltens P et al. Alzheimer's disease. Lancet. 2016;388(10043):505-517
  113. 113. Jin Q et al. Matching patients to clinical trials with large language models. ArXiv. 2023
  114. 114. Shuaib A, Arian H, Shuaib A. The increasing role of artificial intelligence in health care: Will robots replace doctors in the future? International Journal of General Medicine. 2020;13:891-896
  115. 115. Montemayor C, Halpern J, Fairweather A. In principle obstacles for empathic AI: Why we can't replace human empathy in healthcare. AI & Society. 2022;37(4):1353-1359
  116. 116. Almarie B et al. Editorial - the use of large language models in science: Opportunities and challenges. Principles and Practice of Clinical Research. 2023;9(1):1-4
  117. 117. Harrer S. Attention is not all you need: The complicated case of ethically using large language models in healthcare and medicine. eBioMedicine. 2023;90:104512