Ethics of Artificial Intelligence: Implications for Primary Care and Family Medicine Residency Programs

Written By

Thomas Wojda, Carlie Hoffman, Kevin Kindler, Amishi Desai and Shyam Visweswaran

Submitted: 13 February 2024 Reviewed: 27 March 2024 Published: 29 April 2024

DOI: 10.5772/intechopen.114907

From the Edited Volume

Artificial Intelligence in Medicine and Surgery - An Exploration of Current Trends, Potential Opportunities, and Evolving Threats - Volume 2 [Working Title]

Dr. Stanislaw P. Stawicki

Abstract

This chapter explores the ethical implications and successful implementations of artificial intelligence (AI) in primary care and family medicine residency programs. It begins by highlighting the transformative potential of AI to revolutionize decision-making processes and enhance proactive care in healthcare settings. Ethical considerations for healthcare providers encompass various facets, including legal implications, patient confidentiality, autonomy, and the changing responsibilities of physicians in the age of AI. The discussion of impacts on healthcare professionals and training programs emphasizes the incorporation of AI training into curricula and the significance of interdisciplinary collaboration. Case studies showcase successful AI implementations, such as PainChek® for pain assessment and IDx-DR for the detection of diabetic ocular pathologies, while also addressing ethical dilemmas and strategies for their mitigation. Future perspectives advocate for tailor-made ethical guidelines, education and training programs, and collaborative efforts to ensure responsible AI integration while upholding ethical standards and patient-centric care. Overall, the chapter emphasizes the critical need for ethical frameworks and collaborative approaches to harness AI’s potential in primary care effectively.

Keywords

  • artificial intelligence
  • machine learning
  • primary care
  • family medicine
  • ethics

1. Introduction

Advances in artificial intelligence (AI), including machine learning (ML), have driven a paradigm shift in medicine, significantly altering the landscape. Researchers have extensively studied this transformative evolution, showcasing how AI can mimic human cognitive abilities and interpret vast amounts of health data, revolutionizing medical decision-making [1, 2]. Moreover, many studies highlight AI’s burgeoning presence across medical disciplines, with successful integration in radiology, ophthalmology, cardiology, and others [3, 4, 5, 6, 7].

In the sphere of primary care and family medicine residency programs, AI holds immense promise [8, 9]. These fields emphasize patient-centered care, long-term relationships, and the use of comprehensive electronic health records (EHRs) as critical components. Within this framework, AI emerges as a tool capable of enhancing proactive care, offering decision support, streamlining operations, and alleviating administrative burdens [9].

However, alongside these promising prospects, ethical concerns loom large in the integration of AI into primary care and family medicine residency programs. Researchers underline the ethical quandaries inherent to the creation and implementation of AI such as algorithmic bias, patient privacy, impact on patient-provider relationships, healthcare equity, and clinical decision-making [10, 11, 12]. Stressing the necessity for a robust ethical framework, multiple studies advocate for responsible AI implementation guided by comprehensive ethical guidelines [1, 2, 8, 9, 13, 14, 15, 16, 17, 18, 19, 20, 21].

This chapter lays the groundwork for exploring the ethical ramifications of AI in primary care and family medicine residency programs. Drawing on insights from diverse research articles, it emphasizes the transformative potential of these technologies while underscoring the critical need for ethical guidelines and frameworks to ensure their responsible incorporation into healthcare.

2. Methodology

Digital queries were conducted systematically across academic databases and research repositories, utilizing keywords such as “Artificial Intelligence,” “Primary Care,” “Machine Learning,” “Family Medicine,” “Medical Education,” and “Ethics.” Additionally, citation analysis within the main studies augmented the search process. The search spanned a period from the earliest available publications to the most recent, ensuring a comprehensive review of the literature.
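For illustration, the sketch below shows how a keyword query of this kind can be run programmatically against PubMed using Biopython's Entrez module. The exact databases and search strings used for this review are not documented, so the query and result limit here are assumed examples only.

```python
# Hypothetical PubMed query mirroring the chapter's keywords; the
# authors' actual search strings and databases are not specified.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # NCBI asks for a contact address

query = (
    '("Artificial Intelligence" OR "Machine Learning") AND '
    '("Primary Care" OR "Family Medicine") AND '
    '("Medical Education" OR "Ethics")'
)

# esearch returns matching PubMed IDs; retmax caps the result count.
handle = Entrez.esearch(db="pubmed", term=query, retmax=100)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records matched; first {len(record['IdList'])} IDs retrieved")
```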

While the approximate total number of search results was not formally documented, the search strategy focused on identifying relevant literature pertaining to the integration of artificial intelligence (AI) in primary care, family medicine, and medical education, along with the associated ethical considerations. The inclusion criteria encompassed studies that addressed the intersection of AI with primary care and family medicine, including applications, challenges, benefits, and ethical implications. Articles that did not meet these criteria were excluded.

One reviewer meticulously assessed the titles, abstracts, and full texts of the retrieved articles to ascertain their suitability for inclusion. The final selection of references reflects a comprehensive compilation of high-quality literature that contributed significantly to the discourse on AI integration in primary care and family medicine, as well as the ethical considerations surrounding its implementation.

This methodology prioritized the quality and relevance of selected articles over the sheer volume of search results, adhering to established best practices in literature review methodology. Each included reference underwent thorough evaluation to ensure its credibility and contribution to the research objectives, thereby providing a robust foundation for the ensuing analysis and discussion.

3. Ethical considerations for healthcare providers

In navigating the complex landscape of healthcare technology, ethical considerations surrounding the integration of AI are indispensable. This exploration encompasses various facets shown in Figure 1 [12].

Figure 1.

Various facets of AI in healthcare.

The legal implications of AI for medical data management are complex. Historically, the confidentiality and safeguarding of protected medical data depended on patients providing explicit consent or submitting formal requests for access to their medical records. More recently, regulations have disrupted the notion that the health system owns its patients’ data, separating access rights from ownership [22]. EHRs, with their vast and easily accessible information, represent a paradigm shift away from traditional paper records. Security breaches can have far-reaching consequences, not only compromising an individual’s confidentiality but also potentially crippling entire health systems [23, 24]. Sharing medical information presents challenges that necessitate the creation of cross-jurisdictional precedents [25]. Consequently, the lack of uniform ethical standards for health information technology (IT) professionals raises cross-jurisdictional concerns [26].

Beyond IT professionals, medical students and healthcare practitioners may improperly access patients’ records without consent. While tracking a patient’s health trajectory over time is promoted as a beneficial learning exercise, a significant percentage of students fail to distinguish between educational pursuits and curiosity-driven inquiries, resulting in illegal behavior [27, 28]. Medical schools are urged to incorporate meaningful and judicious engagement with patients’ data into informatics and EHR curricula, potentially utilizing registries of consenting patients [27, 29].

In addition to the conditions of data access and use, it is imperative to address the privacy safeguards of patient information and how they align with local, state, and federal data protection laws [30]. Data in EHRs may require de-identification before personal health information is shared through health information exchanges to contribute to valuable studies [31]. The duty to maintain privacy and confidentiality conflicts with potential public benefit, challenging current concepts of privacy and confidentiality within the “easy rescue” duty framework, particularly in low-risk research situations [32]. Altruism, supererogation, and the avoidance of “free-riding” are also important concepts to consider [29, 30, 33, 34, 35].
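As a concrete illustration of the de-identification step referenced above, the following sketch removes direct identifiers and generalizes quasi-identifiers from a tabular extract, loosely in the spirit of the HIPAA Safe Harbor method [31]. The column names are assumptions made for illustration, and a production pipeline would need to cover all eighteen Safe Harbor identifier categories as well as free-text fields.

```python
# Hypothetical tabular de-identification, loosely following the HIPAA
# Safe Harbor approach [31]: drop direct identifiers, generalize dates
# and ZIP codes. Column names are assumptions for illustration only.
import pandas as pd

DIRECT_IDENTIFIERS = ["name", "mrn", "ssn", "phone", "email", "street_address"]

def deidentify(df: pd.DataFrame) -> pd.DataFrame:
    out = df.drop(columns=[c for c in DIRECT_IDENTIFIERS if c in df.columns])
    if "birth_date" in out.columns:            # keep birth year only
        out["birth_year"] = pd.to_datetime(out["birth_date"]).dt.year
        out = out.drop(columns=["birth_date"])
    if "zip" in out.columns:                   # truncate ZIP to 3 digits
        out["zip3"] = out["zip"].astype(str).str[:3]
        out = out.drop(columns=["zip"])
    return out

records = pd.DataFrame({"name": ["J. Doe"], "mrn": ["12345"],
                        "birth_date": ["1948-03-02"], "zip": ["18901"],
                        "a1c": [7.8]})
print(deidentify(records))  # only a1c, birth_year, and zip3 remain
```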

The discussion extends to the potential moral imperative of adopting AI in medicine. Patients using AI as a prelude to medical consultations may align with the autonomy-promoting rationale, wherein technology augments human functioning to improve task performance. While AI can augment human capacity, caution is warranted to avoid automation bias and complacency, whereby a human readily accepts the recommendation of AI without critical review, which can result in errors in diagnosis and treatment [36]. Ethical considerations in AI innovation necessitate careful examination of the impact on patient autonomy, consent, and the potential risks associated with technology adoption in medical decision-making [22, 26, 36, 37, 38].

In the ontological realm, AI’s conceptualization diverges from philosophical interpretations, emphasizing machine-processable semantics for information exchange [39]. Incorporation of AI in medicine introduces the belief in an “expert iDoctor,” challenging human capacities and transforming the role of physicians [40, 41]. Epistemologically, AI’s reliance on parallel learner and classifier algorithms raises concerns about biased training data, intelligibility issues, and discriminatory outcomes [42]. E-patients, relying on AI in healthcare, present challenges to traditional notions of privacy and confidentiality [36, 43, 44, 45, 46].

The intersection of techne (information analysis) and phronesis (judgment based on experience) in AI reshapes the role of physicians [12]. The widespread access to information disrupts physicians’ sole responsibility as information custodians, potentially leading to trust issues in medical opinions [47]. Trust erosion in medicine, exacerbated by online information, introduces complexities in attributing expertise, which is further complicated by automation bias and complacency in the human-AI relationship [48, 49]. The potential for technology to either amplify or mitigate social health disparities complicates the physician-patient dynamic [50].

The ethical landscape surrounding AI in healthcare is complex, encompassing ontology, epistemology, trust, expertise, and information access for both patients and healthcare professionals [51, 52]. Patient access to medical records and self-directed testing can introduce challenges to caring for patients, such as misinterpretation of test results and unclear responsibility for any resulting harm [53]. The critical examination of unintended consequences emphasizes the need for ethical considerations to ensure optimal patient care [54].

The ethical complexities extend to user agreements for AI health apps such as chatbots which can create dilemmas in clinical decision-making [55]. Despite their widespread use, these apps pose significant ethical challenges, particularly regarding user agreements and informed consent [56]. Individuals often accept user agreements without fully understanding them, raising questions about the responsibility of app developers to ensure comprehensibility [55]. The complex interplay of evolving technology, user agreements, and ethical considerations necessitates careful scrutiny of conscientious incorporation of AI into medical care.

Algorithmic fairness and biases in AI must be carefully considered because they have the potential to influence the accuracy of diagnoses and treatments for specific subpopulations. Addressing these biases requires diversifying data collection and tailoring algorithms for different populations. Transparency in AI systems is essential for building trust among stakeholders, especially given the prevalence of “black-box” systems [11, 57]. For instance, IBM Watson for Oncology, designed to aid in treatment decisions, faced criticism for being trained on limited, synthetic data, resulting in unsafe recommendations. These errors occurred during preliminary testing and, fortunately, not with real patients, highlighting the challenge of translating AI from theory to clinical application. Critical questions about informed consent and autonomy must be addressed when incorporating AI, and patients must be educated about its complexities. However, the opacity of algorithms poses challenges for clinicians in fully interpreting AI-based recommendations.
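One practical response to the bias concerns above is to audit a model's performance separately for each subpopulation. The minimal sketch below uses synthetic data and invented group labels to compare sensitivity and positive predictive value across two demographic strata; a divergence between groups would flag exactly the kind of skew the literature warns about.

```python
# Subgroup performance audit on synthetic data: respectable overall
# accuracy can hide divergent error rates across demographic strata.
import numpy as np
from sklearn.metrics import recall_score, precision_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)            # true disease labels
flip = rng.random(1000) < 0.15               # 15% label-flipping noise
y_pred = np.where(flip, 1 - y_true, y_true)  # mock model predictions
group = rng.choice(["A", "B"], 1000)         # invented demographic strata

for g in ("A", "B"):
    m = group == g
    print(f"group {g}: sensitivity={recall_score(y_true[m], y_pred[m]):.2f}, "
          f"PPV={precision_score(y_true[m], y_pred[m]):.2f}")
```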

As applications of AI become more commonplace in healthcare encounters, it is critical to balance the promises of technological advancement with the preservation of humanistic values, patient autonomy, and fairness in healthcare [36, 40, 58]. Physicians are encouraged to actively engage in the discourse, contributing to the ongoing dialog about the ethical implications of AI incorporation in medical treatment.

4. The impact of AI on healthcare professionals and training programs

4.1 Integration of AI education into family medicine residency programs

Incorporating AI into clinical instruction has profound implications for healthcare professionals and their training programs, spanning the cognitive, psychomotor, and affective domains (Figure 2) [59]. This section explores the multifaceted nature of the integration and its critical implications.

Figure 2.

Important domains for instruction of AI principles for healthcare providers.

To establish a foundational understanding of AI, a significant body of literature emphasizes the importance of providing healthcare providers with basic AI education [60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74]. Recommendations include integrating healthcare data science fundamentals such as big data and biomedical informatics into medical school and residency curricula and incorporating ethics, equity, diversity, and inclusion principles into data science curricula to enhance the ethical value of AI [75].

In the psychomotor domain, emphasis is placed on a physician’s ability to analyze data and assess the efficacy and accuracy of AI use [61, 65, 67, 68, 70, 73, 76]. Providers are encouraged to verify the clinical precision of algorithms and effectively communicate AI results to patients, all while cultivating empathy and compassion [62, 67, 68, 77, 78].
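As a minimal illustration of the verification skill described above, a provider or residency program might check an AI tool's discrimination on a held-out sample of local cases before relying on it. The data and the 0.5 decision threshold below are synthetic placeholders, not a validated protocol.

```python
# Synthetic check of an AI tool's discrimination before local use:
# AUROC plus sensitivity/specificity at an assumed decision threshold.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 500)                            # held-out local outcomes
risk = np.clip(0.4 * y_true + rng.random(500) * 0.6, 0, 1)  # mock AI risk scores

auc = roc_auc_score(y_true, risk)
tn, fp, fn, tp = confusion_matrix(y_true, risk >= 0.5).ravel()
print(f"AUROC={auc:.2f}, sensitivity={tp / (tp + fn):.2f}, "
      f"specificity={tn / (tn + fp):.2f}")
```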

Within the affective domain, the importance of healthcare providers maintaining a positive attitude toward using AI tools for improving patient outcomes is emphasized [63, 71, 73, 74, 77, 79]. Breaking stereotypes about AI and perceiving it as a tool to augment care rather than a threat are highlighted [74, 79]. Joint initiatives involving students from medical, computer science, and engineering disciplines are recommended to foster interdisciplinary understanding [73].

Encouraging successful implementation involves the promotion of cross-disciplinary cooperation [64, 72]. Operating within established regulatory structures is important as well [63, 64, 72, 77]. Barriers to implementation that must be addressed include variable AI proficiency among faculty and the lack of frameworks for AI programming [61, 64, 70]. Efforts to seamlessly integrate AI education into primary care and family medicine residency programs must address these barriers and take a holistic approach [59]. To help providers along this path, guiding principles have been created (Figure 3) [18].

Figure 3.

Guiding principles for AI implementation into curricula for healthcare providers.

Recognizing that current regulatory frameworks may pose obstacles for AI education initiatives, a shift toward innovative regulatory approaches is vital [77]. Addressing faculty resistance and outdated teaching methods is crucial for developing a workforce that can adapt to rapid technological changes. Accrediting bodies must adopt new models to accommodate AI-enabled care while also encouraging a culture of innovation and lifelong learning [77, 79].

Successful AI deployment in residency education demands collaboration across diverse disciplines. Curriculum redesign should involve health executives, IT analysts, end-users, and educational leaders [80]. Collaboration should aim to teach how AI can address biases and reduce social inequalities in patient care. Foundational AI competencies and skills are deemed essential for healthcare providers and will influence their future practice. AI education programs should equip physicians both to use AI and to contribute to related policy decisions. Again, integrating AI fundamentals into medical school curricula, with a focus on ethical implications and limitations, cannot be ignored [67, 70, 79].

Considerations about patient-clinician interaction and strategies for improving care quality in a technology-enabled environment are crucial. Healthcare providers should cultivate human attributes such as altruism and compassion while concurrently distinguishing between verifiable and misleading data. Engaging patients in the AI adoption process ensures successful implementation [65, 79, 81].

The nuanced perspective on AI in primary care reveals a cautious yet optimistic outlook among stakeholders. Concerns and optimism coexist, emphasizing the need for AI tools tailored to the unique needs of healthcare practitioners [21]. As AI’s role has yet to be fully defined, the importance of determining its optimal integration into existing healthcare frameworks cannot be overstated. Challenges hindering seamless integration include inadequate infrastructure, regulatory constraints, varying proficiency levels, and hurdles in accessing healthcare data. Guiding principles emerged, emphasizing innovative regulatory approaches, interdisciplinary collaboration, competency-based education, and developing competencies that enable healthcare providers to discern credible information [9]. To address this educational gap, six competencies have been proposed (Figure 4) [18].

Figure 4.

Competencies proposed in the literature.

Efforts are underway to integrate AI education into family medicine residency programs, acknowledging the gap in tailored education. The proposed competencies offer a roadmap for all levels of learners in residency programs, including attending clinicians, faculty, residents, and medical students, emphasizing essential knowledge and skills for AI integration in primary care decision-making. The competencies are integral to a holistic approach, ensuring the responsible and patient-centered use of AI-based tools.

4.2 Influence of AI on healthcare providers

Predictive modeling with EHR data can currently surpass conventional approaches in predicting in-hospital mortality, unplanned readmission within 30 days, extended hospital stay, and discharge disposition [82]. In the outpatient setting, healthcare organizations are partnering with technology companies to launch a new wave of population health solutions. BaseHealth’s AI helps Banner Health predict the risk of unnecessary hospitalizations due to complications from 42 conditions across a population of 100,000 patients [13].
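The pattern behind such predictive models can be sketched in a few lines: structured EHR features feed a classifier that outputs a risk score. The toy example below uses synthetic features and a simple logistic regression; the cited work [82] used deep learning over far richer longitudinal records, so this is only a schematic of the approach.

```python
# Toy version of EHR risk prediction: synthetic structured features
# (age, prior admissions, chronic conditions) and a simulated outcome.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.normal(70, 15, n),    # age in years
    rng.poisson(2, n),        # admissions in the prior year
    rng.integers(0, 15, n),   # count of active chronic conditions
])
logit = -6 + 0.03 * X[:, 0] + 0.4 * X[:, 1] + 0.15 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))  # simulated 30-day readmission

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out AUROC:",
      round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 2))
```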

AI-powered virtual healthcare aids are being developed to offer direct health guidance to individuals experiencing common symptoms. This approach aims to improve primary care accessibility for more complex healthcare needs. Rather than AI replacing healthcare providers for the treatment of certain conditions, there is confidence that AI assistance can integrate seamlessly into collaborative care frameworks and reduce workload [83].

Risk adjustment entails adjusting payments and benchmarks to account for the severity of illness. This enables the Centers for Medicare & Medicaid Services (CMS) to anticipate prospective expenditures and helps physicians better understand the wellbeing of a population [84]. The absence of standardized frameworks for adjusting panel size based on risk has prompted the exploration of potential AI solutions. At the University of California, San Francisco (UCSF), algorithms were trained on EHR data related to healthcare utilization, enabling the assessment of panel sizes in primary care [85]. Going forward, these models have the potential to ascertain the required number of support staff.
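The idea behind risk-adjusted paneling can be illustrated with a small, hypothetical calculation: each patient's predicted utilization becomes a weight, so a panel's effective size reflects expected workload rather than head count. The numbers below are invented; the UCSF work [85] derived its weights from machine learning over EHR utilization data.

```python
# Invented illustration of risk-adjusted paneling: predicted annual
# visits per patient (e.g., from a utilization model trained on EHR
# data) are normalized by a clinic-wide average to weight each patient.
import numpy as np

expected_visits = np.array([1.2, 0.8, 6.5, 3.0, 0.5, 9.1])  # assumed model outputs
clinic_average = 2.5                                        # assumed reference workload

weights = expected_visits / clinic_average
print("raw panel size:     ", len(expected_visits))         # 6 patients
print("weighted panel size:", round(weights.sum(), 1))      # 8.4 patient-equivalents
```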

Healthcare enterprises are using digital tools to deliver patient education for the management of chronic conditions. Moreover, these initiatives are being seamlessly integrated into healthcare systems with the goal of reducing costs through fewer office and hospital visits. Importantly, automated text-based health coaching has produced weight reduction results on par with face-to-face behavior modification programs [86].

Technology companies specializing in speech-to-text technology are working with hospital organizations to develop electronic documentation assistants. These innovative tools can actively listen to patient-physician conversations and generate clinical notes autonomously, which also helps to reduce physician burnout [87].
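A digital scribe of this kind can be thought of as a two-stage pipeline: speech-to-text transcription followed by note drafting for physician review. The sketch below uses the open-source Whisper model for the first stage as one illustrative option; the drafting step is a deliberate placeholder, and real products additionally handle consent capture, speaker diarization, and protected health information.

```python
# Two-stage digital-scribe sketch: Whisper (open source) for speech-to-
# text, then a placeholder drafting step. A real scribe adds consent
# capture, speaker diarization, PHI safeguards, and clinician sign-off.
import whisper  # pip install openai-whisper

model = whisper.load_model("base")
result = model.transcribe("visit_audio.wav")  # hypothetical recording
transcript = result["text"]

def draft_note(text: str) -> str:
    # Placeholder: a production system would use a clinical language
    # model to structure the transcript into a SOAP-format note.
    return "SUBJECTIVE (draft, for physician review):\n" + text[:500]

print(draft_note(transcript))
```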

Diagnosis of disease through AI tools may surpass the abilities of physicians in various domains [88, 89, 90]. Moreover, AI may help in automating routine processes [13, 91]. Therefore, in regions with restricted availability of clinical specialists, these instruments may broaden care service offerings, reduce unnecessary referrals, enhance continuity of care, and augment the expertise of primary care physicians.

AI’s value in healthcare stems from its ability to support, rather than replace, the patient-physician relationship, which is an essential component of the fundamentally social enterprise of healthcare. Poorly implemented AI risks marginalizing humanity, while wise implementation can free up physicians’ cognitive load, allowing them to be better caregivers [54]. For instance, AI health coaches integrated into care teams can enhance patient engagement, whereas detached, one-size-fits-all coaches may alienate patients [86]. When a physician leads the AI team, both clinicians and the patients they care for find AI acceptable [92, 93, 94].

Ideally, AI should decrease physicians’ workloads by organizing relevant patient information and presenting it in an intuitive interface, thus protecting physicians from information overload [95]. AI can automate repetitive tasks, reducing hazards for physicians and allowing them to focus on personalized care and enhancing human healing. The primary concerns include whether AI will boost or undermine rapport with patients as well as modify physicians’ responsibilities. Ongoing research is crucial to discern AI’s impact on achieving the Quadruple Aim (improving patient satisfaction, enhancing community health, decreasing expenses, and enhancing the professional lives of healthcare providers) in healthcare (Table 1) [13].

AI function areas and key points:

  • Risk prediction and intervention: Predictive AI using EHR data outperforms traditional methods.
  • Population health management: AI aids in closing care gaps and enhancing quality payment programs.
  • Medical advice and triage: AI-driven services, with an expected $27B market, aid access to care for complex needs.
  • Risk-adjusted paneling: AI determines panel sizes for better patient care and satisfaction.
  • Device integration: Wearable device data, integrated with AI, aids in early diagnosis.
  • Digital health coaching: AI-based coaching shows potential in chronic disease management.
  • Chart review and documentation: AI-driven digital scribes reduce physician burnout from EHR tasks.
  • Diagnostics: AI outperforms in diagnosing diseases, expanding care access.
  • Practice management: AI automates clerical tasks, optimizes coding, and improves efficiency.
  • Augmenting patient-physician relationship: AI should support rather than replace human connections.

Table 1.

Potential artificial intelligence (AI) functions in primary care.

4.3 Ethical implications for healthcare providers

Community-based primary health care (CBPHC) is the foundation of healthcare systems, shaping public health policy and serving as the first point of contact for patients [96]. Care quality, alleviation of provider burdens, management of chronic diseases, misdiagnosis prevention, and healthcare accessibility are all issues that must be addressed [96, 97, 98, 99, 100, 101, 102, 103, 104]. The integration of AI into CBPHC holds immense promise, presenting unprecedented opportunities to improve the aforementioned issues; however, concerns about privacy, consent, algorithm explainability, workflow disruption, and unintended consequences persist [105, 106, 107].

As AI becomes integral to primary care, its potential to contribute to equitable healthcare is significant. However, the complex interaction of AI and socioeconomic factors can either exacerbate or mitigate health inequities [108, 109, 110, 111]. A systematic scoping review of a publicly-funded primary care system explored AI’s impact on health inequities within primary care [15]. The review highlighted algorithmic bias as a recurring theme, emphasizing skewed outcomes caused by datasets not representative of the target population. Trust-related discussions emphasized historical mistrust of AI among marginalized groups, citing privacy and security concerns. The shift toward a more biomedicalized healthcare system through AI interventions raised concerns about potential disadvantages for patients with complex needs. Challenges related to failed implementation could lead to persistent inequities, emphasizing the need for equitable distribution of AI systems in marginalized populations.

As AI continues to shape primary care, healthcare practitioners in family medicine find themselves at the intersection of technological advancement and ethical responsibility. As previously stated, robust frameworks can guide practitioners in ensuring ethical, unbiased, and patient-centered AI integration. A multifaceted approach, including tailored education to increase healthcare professionals’ AI literacy, is critical for successful AI integration into primary care. The proposed competencies offer a comprehensive framework for physicians, faculty, residents, and medical students, ensuring responsible AI integration. As physicians navigate this transformative landscape, the integration of AI tools becomes a nuanced interplay of knowledge, critical thinking, technical finesse, patient communication, and a steadfast commitment to anticipating and addressing the unforeseen consequences of novel technology. AI implementation in family medicine is rooted in the mutual tenets between the physician and the patient, emphasizing interdisciplinary collaboration and co-creation from project conception to completion.

5. Successful implementations of AI in the outpatient clinic setting

PainChek® emerged as a pioneering tool designed with a clear objective: to revolutionize pain assessment for individuals unable to communicate verbally, particularly those suffering from severe dementia. Its fundamental goal was to improve pain management strategies for this vulnerable segment of the population, where traditional assessment methods often fell short. PainChek® used cutting-edge AI to analyze facial microexpressions and other subtle indicators to generate a comprehensive pain score. This scoring system aimed to equip caregivers with crucial insights, aiding their decision-making process regarding the administration of appropriate pain relief measures. Its regulatory clearance from the Australian Therapeutic Goods Administration (TGA) solidified its status as a validated medical device, further demonstrating its credibility and utility in clinical settings. The ethical rationale behind PainChek® was profound: it aspired to fill a critical void in pain assessment for non-verbal patients, potentially enhancing the quality of life for those suffering from severe dementia. Its emergence signaled a significant step forward in addressing an unmet need in healthcare [112].

In a similar vein, IDx-DR emerged as another formidable AI innovation with a distinct objective: early detection of diabetic ocular pathologies. This revolutionary system aimed to bridge the gap by providing advanced analysis in the outpatient setting, thus democratizing access to essential eye care and potentially reducing cases of preventable blindness. Its operational prowess lay in autonomously analyzing retinal images, swiftly delivering accurate diagnoses within a minute. IDx-DR, like PainChek®, sought to democratize specialized diagnostics for patients, demonstrating an ethical foundation. It aimed to alleviate the burden of delayed or inaccessible specialized care, particularly among a demographic prone to eye-related complications, by extending high-level eye care into primary care settings [112].

As with any groundbreaking technology, the implementation of these AI systems raised ethical concerns. Questions arose about the transparency of communicating AI’s role within PainChek® and IDx-DR, including its accuracy and trustworthiness. Furthermore, the lack of engagement from individuals and community members affected by the conditions each system targets, as well as limited public participation during the development and implementation phases, emphasized the necessity of aligning technological advances with patient needs and preferences. Additionally, the potential for AI applications to disrupt conventional healthcare communication and trust between professionals and patients highlighted valid concerns about its impact on care relationships.

Addressing these ethical quandaries necessitated multifaceted strategies. Transparent communication regarding AI’s role and limitations in healthcare has emerged as a cornerstone for establishing trust and managing expectations. Active participation by patients and the public throughout the AI lifecycle was deemed essential to ensure technological alignment with their needs and preferences. Furthermore, incorporating patient-reported outcome measures (PROMs) into research has emerged as a vital strategy for comprehensively evaluating the impact of AI on patient experiences and outcomes.

In conclusion, these case studies emphasize the importance of incorporating healthcare values into AI development, advocating for enhanced communication among stakeholders and emphasizing the need for ongoing research to ensure ethically robust AI implementation in healthcare. These principles form the bedrock for navigating the evolving landscape of AI in healthcare while upholding ethical standards and prioritizing patient-centric care.

6. Future perspectives and recommendations

The horizon of healthcare brims with the promise of AI, offering transformative potential within primary care. Several studies have underscored the critical necessity of grounding AI implementation in robust ethical frameworks, prompting recommendations poised to shape the future landscape of healthcare.

The establishment of tailor-made ethical guidelines specifically designed for AI integration into primary care is at the forefront of these recommendations. The studies’ recommended guidelines would address a wide range of ethical concerns. They delve into the realms of data privacy and security, focusing on protecting patient information and confidentiality while utilizing AI algorithms. Furthermore, understanding the role of AI in decision support tools necessitates patient consent and transparency. These guidelines, which aim to reduce bias and ensure fairness, also emphasize the importance of algorithm transparency and the implementation of strong accountability and governance mechanisms. The crux of these recommendations is to foster trust among patients, providers, and stakeholders by adhering to ethical guidelines. By championing these considerations, healthcare systems hope to seamlessly integrate AI while maintaining ethical behavior and patient trust in care delivery.

Another critical aspect outlined in these studies is the urgent need for education and training programs tailored to healthcare providers. These programs aim to fill a significant knowledge gap in the field of AI technology by imparting fundamental AI literacy. They aim to equip healthcare providers with a nuanced understanding of AI’s capabilities and limitations, enabling them to critically evaluate AI tools for reliability, accuracy, and bias. Furthermore, hands-on training modules may facilitate the smooth incorporation of AI into existing operational processes while preserving core clinical competencies. By emphasizing the importance of education and training, these recommendations strive to empower healthcare providers to maximize the benefits of AI tools while upholding ethical standards and core clinical skills.

The studies highlight the delicate balance between technological advances and ethical practices in primary care. Emphasizing collaborative efforts between AI developers, healthcare providers, patients, and policymakers, these recommendations advocate for a symbiotic relationship wherein technological progress aligns harmoniously with ethical principles and patient-centric care. Responsible AI deployment is championed, urging a thorough evaluation of AI’s potential impact on patient-provider relationships and emphasizing AI’s role as a complement rather than a replacement for healthcare providers.

By fostering collaboration and advocating for responsible deployment, these recommendations underscore the need for healthcare systems to harness AI’s potential while safeguarding ethical standards and preserving the invaluable patient-provider relationship.

In essence, the authors offer comprehensive recommendations tailored to ethical guidelines, education and training, and the symbiotic relationship between technological progress and ethical practices in AI implementation within primary care and family medicine. As beacons guiding healthcare systems and policymakers, these suggestions endeavor to pave the way for effective AI integration in primary care settings while upholding ethical principles and ensuring optimal patient care.

7. Conclusion

The integration of AI and machine learning (ML) into primary care and family medicine residency programs represents a significant advancement with profound ethical implications. This chapter has underscored the transformative potential of AI in revolutionizing healthcare delivery, from enhancing decision-making processes to improving patient outcomes. However, alongside the promising prospects, ethical considerations loom large, spanning legal implications, patient privacy, autonomy, and the evolving role of healthcare professionals.

The methodologies outlined in this chapter, including digital queries and literature review processes, have provided a comprehensive understanding of the multifaceted ethical landscape surrounding AI implementation in primary care. From the ethical considerations for healthcare providers to the impacts on healthcare professionals and training programs, it is evident that responsible AI integration requires a holistic approach, encompassing education, clarity, and collaboration.

Case studies of successful AI implementations, such as PainChek® and IDx-DR, have demonstrated the potential of AI to address unmet needs in pain assessment and the detection of diabetic ocular pathologies. However, ethical dilemmas, including transparency, accuracy, and trustworthiness, must be carefully addressed to ensure the ethical and responsible deployment of AI technologies.

Looking ahead, future perspectives and recommendations emphasize the importance of establishing tailor-made ethical guidelines, implementing education and training programs for healthcare professionals, and fostering collaborative efforts between stakeholders. By championing responsible deployment and collaboration, healthcare systems can harness the transformative potential of AI while upholding ethical standards and preserving the patient-provider relationship.

In conclusion, this chapter serves as a call to action for healthcare systems and policymakers to navigate the evolving landscape of AI in primary care with integrity, transparency, and a steadfast commitment to patient-centered care. By integrating ethical principles into AI implementation strategies, we can ensure that AI technologies enhance, rather than detract from, the delivery of quality healthcare services in primary care settings.

References

1. Terry AL et al. Is primary health care ready for artificial intelligence? What do primary health care stakeholders say? BMC Medical Informatics and Decision Making. 2022;22(1):1-11
2. Kueper JK. Primer for artificial intelligence in primary care. Canadian Family Physician. 2021;67(12):889
3. Hosny A et al. Artificial intelligence in radiology. Nature Reviews Cancer. 2018;18(8):500-510
4. Ting DSW et al. Artificial intelligence and deep learning in ophthalmology. British Journal of Ophthalmology. 2018;103(2):167-175
5. Johnson KW et al. Artificial intelligence in cardiology. Journal of the American College of Cardiology. 2018;71(23):2668-2679
6. Olczak J et al. Artificial intelligence for analyzing orthopedic trauma radiographs: Deep learning algorithms—Are they on par with humans for diagnosing fractures? Acta Orthopaedica. 2017;88(6):581-586
7. Niazi MKK, Parwani AV, Gurcan MN. Digital pathology and artificial intelligence. The Lancet Oncology. 2019;20(5):e253-e261
8. Liyanage H et al. Artificial intelligence in primary health care: Perceptions, issues, and challenges: Primary health care informatics working group contribution to the yearbook of medical informatics 2019. Yearbook of Medical Informatics. 2019;28(1):41
9. Kueper JK et al. Artificial intelligence and primary care research: A scoping review. The Annals of Family Medicine. 2020;18(3):250-258
10. Murphy K et al. Artificial intelligence for good health: A scoping review of the ethics literature. BMC Medical Ethics. 2021;22(1):1-17
11. Gerke S, Minssen T, Cohen G. Ethical and legal challenges of artificial intelligence-driven healthcare. In: Artificial Intelligence in Healthcare. Cambridge, Massachusetts: Elsevier; 2020. pp. 295-336
12. Arnold MH. Teasing out artificial intelligence in medicine: An ethical critique of artificial intelligence and machine learning in medicine. Journal of Bioethical Inquiry. 2021;18(1):121-139
13. Lin SY, Mahoney MR, Sinsky CA. Ten ways artificial intelligence will transform primary care. Journal of General Internal Medicine. 2019;34(8):1626-1630
14. Lin S. A clinician's guide to artificial intelligence (AI): Why and how primary care should lead the health care AI revolution. The Journal of the American Board of Family Medicine. 2022;35(1):175-184
15. Abbasgholizadeh Rahimi S et al. Application of artificial intelligence in community-based primary health care: Systematic scoping review and critical appraisal. Journal of Medical Internet Research. 2021;23(9):e29839
16. d'Elia A et al. Artificial intelligence and health inequities in primary care: A systematic scoping review and framework. Family Medicine and Community Health. 2022;10(Suppl. 1):1-19
17. London AJ. Artificial intelligence in medicine: Overcoming or recapitulating structural challenges to improving patient care? Cell Reports Medicine. 2022;3(5):1-8
18. Liaw W et al. Competencies for the use of artificial intelligence in primary care. The Annals of Family Medicine. 2022;20(6):559-563
19. Liaw W, Kakadiaris I. Artificial intelligence and family medicine: Better together. Family Medicine. 2020;52(1):8-10
20. Nash DM et al. Perceptions of artificial intelligence use in primary care: A qualitative study with providers and staff of Ontario Community Health Centres. The Journal of the American Board of Family Medicine. 2023;36(2):221-228
21. Upshaw TL et al. Priorities for artificial intelligence applications in primary care: A Canadian deliberative dialogue with patients, providers, and health system leaders. The Journal of the American Board of Family Medicine. 2023;36(2):210-220
22. Parkinson P. Fiduciary law and access to medical records: Breen v Williams. The Sydney Law Review. 1995;17(3):433-445
23. Hassan N. Ransomware Attack on Medstar: Ethical Position Statement. Melbourne, Australia: UMBC Faculty Collection; 2018
24. Sade RM. Breaches of health information: Are electronic records different from paper records? The Journal of Clinical Ethics. 2010;21(1):39-41
25. Polito JM. Ethical considerations in internet use of electronic protected health information. The Neurodiagnostic Journal. 2012;52(1):34-41
26. Kluge E-H, Lacroix P, Ruotsalainen P. Ethics certification of health information professionals. Yearbook of Medical Informatics. 2018;27(1):37-40
27. Brisson GE et al. A framework for tracking former patients in the electronic health record using an educational registry. Journal of General Internal Medicine. 2018;33:563-566
28. De Simone DM. When is accessing medical records a HIPAA breach? Journal of Nursing Regulation. 2019;10(3):34-36
29. Stern RJ. Teaching medical students to engage meaningfully and judiciously with patient data. JAMA Internal Medicine. 2016;176(9):1397
30. Evans EL, Whicher D. What should oversight of clinical decision support systems look like? AMA Journal of Ethics. 2018;20(9):857-863
31. U.S. Department of Health and Human Services. Guidance Regarding Methods for De-Identification of Protected Health Information in Accordance with the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule. Washington, D.C.: HHS; 2012
32. Porsdam Mann S, Savulescu J, Sahakian BJ. Facilitating the ethical use of health data for the benefit of society: Electronic health records, consent and the duty of easy rescue. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2016;374(2083):20160130
33. Schaefer GO, Emanuel EJ, Wertheimer A. The obligation to participate in biomedical research. JAMA. 2009;302(1):67-72
34. Allhoff F. Free-riding and research ethics. The American Journal of Bioethics. 2005;5(1):50-51
35. McCann SK, Campbell MK, Entwistle VA. Reasons for participating in randomised controlled trials: Conditional altruism and considerations for self. Trials. 2010;11:1-10
36. Grote T, Berens P. On the ethics of algorithmic decision-making in healthcare. Journal of Medical Ethics. 2019;46(3):205-211
37. Martinez-Martin N, Dunn LB, Roberts LW. Is it ethical to use prognostic estimates from machine learning to treat psychosis? AMA Journal of Ethics. 2018;20(9):E804
38. Luxton DD. Should Watson be consulted for a second opinion? AMA Journal of Ethics. 2019;21(2):131-137
39. González WJ. From intelligence to rationality of minds and machines in contemporary society: The sciences of design and the role of information. Minds and Machines. 2017;27:397-424
40. Karches KE. Against the iDoctor: Why artificial intelligence should not replace physician judgment. Theoretical Medicine and Bioethics. 2018;39(2):91-110
41. Benzmüller C, Paleo BW. The inconsistency in Gödel's ontological argument: A success story for AI in metaphysics. In: Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence. New York, New York: AAAI Press; 2016
42. Schönberger D. Artificial intelligence in healthcare: A critical analysis of the legal and ethical implications. International Journal of Law and Information Technology. 2019;27(2):171-203
43. Kaczmarczyk JM et al. E-professionalism: A new frontier in medical education. Teaching and Learning in Medicine. 2013;25(2):165-170
44. Masters K. Preparing medical students for the e-patient. Medical Teacher. 2017;39(7):681-685
45. Osborne R, Kayser L. Skills and characteristics of the e-health literate patient. BMJ: British Medical Journal. 2018;361:1
46. Yeung K. 'Hypernudge': Big data as a mode of regulation by design. In: The Social Power of Algorithms. New York, New York: Routledge; 2019. pp. 118-136
47. Grundmann R. The problem of expertise in knowledge societies. Minerva. 2017;55(1):25-48
48. Mosier KL, Skitka LJ. Human decision makers and automated decision aids: Made for each other? In: Automation and Human Performance. Boca Raton, Florida: CRC Press; 2018. pp. 201-220
49. Cohen MR, Smetzer JL. Understanding human over-reliance on technology; It's Exelan, not Exelon; crash cart drug mix-up; risk with entering a "test order". Hospital Pharmacy. 2017;52(1):7-12
50. Wangberg SC et al. Relations between Internet use, socio-economic status (SES), social support and subjective health. Health Promotion International. 2008;23(1):70-77
51. Hofmann B, Svenaeus F. How medical technologies shape the experience of illness. Life Sciences, Society and Policy. 2018;14(1):1-11
52. Beam AL, Kohane IS. Big data and machine learning in health care. JAMA. 2018;319(13):1317-1318
53. Fraccaro P et al. Presentation of laboratory test results in patient portals: Influence of interface design on risk interpretation and visual search behaviour. BMC Medical Informatics and Decision Making. 2018;18(1):1-12
54. Israni ST, Verghese A. Humanizing artificial intelligence. JAMA. 2019;321(1):29-30
55. Klugman CM et al. The ethics of smart pills and self-acting devices: Autonomy, truth-telling, and trust at the dawn of digital medicine. The American Journal of Bioethics. 2018;18(9):38-47
56. Gerke S et al. Ethical and legal issues of ingestible electronic sensors. Nature Electronics. 2019;2(8):329-334
57. Obermeyer Z et al. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447-453
58. Emanuel EJ, Wachter RM. Artificial intelligence in health care: Will the value match the hype? JAMA. 2019;321(23):2281-2282
59. Charow R et al. Artificial intelligence education programs for health care professionals: Scoping review. JMIR Medical Education. 2021;7(4):e31043
60. Wood MJ et al. The need for a machine learning curriculum for radiologists. Journal of the American College of Radiology. 2019;16(5):740-742
61. Tajmir SH, Alkasab TK. Toward augmented radiologists: Changes in radiology education in the era of machine learning and artificial intelligence. Academic Radiology. 2018;25(6):747-750
62. Srivastava TK, Waghmare L. Implications of artificial intelligence (AI) on dynamics of medical education and care: A perspective. Journal of Clinical and Diagnostic Research. 2020;14(3):1-2
63. Sit C et al. Attitudes and perceptions of UK medical students towards artificial intelligence and radiology: A multicentre survey. Insights Into Imaging. 2020;11:1-6
64. Sánchez-Mendiola M et al. Evaluation of a biomedical informatics course for medical students: A pre-posttest study at UNAM Faculty of Medicine in Mexico. BMC Medical Education. 2015;15:1-12
65. Park SH et al. What should medical students know about artificial intelligence in medicine? Journal of Educational Evaluation for Health Professions. 2019;16:1-6
66. Paranjape K et al. Introducing artificial intelligence training in medical education. JMIR Medical Education. 2019;5(2):e16048
67. McCoy LG et al. What do medical students actually need to know about artificial intelligence? npj Digital Medicine. 2020;3(1):86
68. Masters K. Artificial intelligence in medical education. Medical Teacher. 2019;41(9):976-980
69. Kolachalama VB, Garg PS. Machine learning and medical education. npj Digital Medicine. 2018;1(1):54
70. Kang SK et al. Residents' introduction to comparative effectiveness research and big data analytics. Journal of the American College of Radiology. 2017;14(4):534-536
71. McBride A et al. Artificial intelligence in radiology residency training. In: Seminars in Musculoskeletal Radiology. New York, NY, USA: Thieme Medical Publishers; 2020
72. Chan KS, Zary N. Applications and challenges of implementing artificial intelligence in medical education: Integrative review. JMIR Medical Education. 2019;5(1):e13930
73. Brouillette M. AI added to the curriculum for doctors-to-be. Nature Medicine. 2019;25(12):1808-1809
74. Barbour AB et al. Artificial intelligence in health care: Insights from an educational forum. Journal of Medical Education and Curricular Development. 2019;6:2382120519889348
75. Matheny ME, Whicher D, Israni ST. Artificial intelligence in health care: A report from the National Academy of Medicine. JAMA. 2020;323(6):509-510
76. Sapci AH, Sapci HA. Artificial intelligence education and tools for medical and health informatics students: Systematic review. JMIR Medical Education. 2020;6(1):e19285
77. Wartman SA, Combs CD. Reimagining medical education in the age of AI. AMA Journal of Ethics. 2019;21(2):146-152
78. Wojda T et al. AI in Healthcare: Implications for Family Medicine and Primary Care. Rijeka, Croatia: IntechOpen; 2023
79. Wiljer D, Hakim Z. Developing an artificial intelligence–enabled health care practice: Rewiring health care professions for better care. Journal of Medical Imaging and Radiation Sciences. 2019;50(4):S8-S14
80. Wiens J et al. Do no harm: A roadmap for responsible machine learning for health care. Nature Medicine. 2019;25(9):1337-1340
81. Li D, Kulasegaram K, Hodges BD. Why we needn't fear the machines: Opportunities for medicine in a machine learning world. Academic Medicine. 2019;94(5):623-625
82. Rajkomar A et al. Scalable and accurate deep learning with electronic health records. npj Digital Medicine. 2018;1(1):18
83. Smith CD et al. Implementing optimal team-based care to reduce clinician burnout. NAM Perspectives. 2018;8(9):1-13
84. Yeatts JP, Sangvai DG. HCC coding, risk adjustment, and physician income: What you need to know. Family Practice Management. 2016;23(5):24-27
85. Rajkomar A et al. Weighting primary care patient panel size: A novel electronic health record-derived measure using machine learning. JMIR Medical Informatics. 2016;4(4):e6530
86. Stein N, Brooks K. A fully automated conversational artificial intelligence for weight loss: Longitudinal observational study among overweight and obese adults. JMIR Diabetes. 2017;2(2):e8590
87. Lin SY, Shanafelt TD, Asch SM. Reimagining clinical documentation with artificial intelligence. In: Mayo Clinic Proceedings. Rochester, Minnesota: Elsevier; 2018
88. Haenssle HA et al. Man against machine: Diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Annals of Oncology. 2018;29(8):1836-1842
89. Hannun AY et al. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Nature Medicine. 2019;25(1):65-69
90. Poplin R et al. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nature Biomedical Engineering. 2018;2(3):158-164
91. Sinsky CA, Sinsky TA, Rajcevich E. Putting pre-visit planning into practice. Family Practice Management. 2015;22(6):30-38
92. Keel S et al. Feasibility and patient acceptability of a novel artificial intelligence-based screening model for diabetic retinopathy at endocrinology outpatient services: A pilot study. Scientific Reports. 2018;8(1):4330
93. Al-Taee MA et al. Acceptability of robot assistant in management of type 1 diabetes in children. Diabetes Technology & Therapeutics. 2016;18(9):551-554
94. Rantanen P et al. An in-home advanced robotic system to manage elderly home-care patients' medications: A pilot safety and usability study. Clinical Therapeutics. 2017;39(5):1054-1061
95. Sinsky CA, Privitera MR. Creating a "manageable cockpit" for clinicians: A shared responsibility. JAMA Internal Medicine. 2018;178(6):741-742
96. Abbasgholizadeh Rahimi S et al. Application of artificial intelligence in shared decision making: Scoping review. JMIR Medical Informatics. 2022;10(8):e36199
97. Bodenheimer T, Chen E, Bennett HD. Confronting the growing burden of chronic disease: Can the US health care workforce do the job? Health Affairs. 2009;28(1):64-74
98. Howard J et al. Electronic health record impact on work burden in small, unaffiliated, community-based primary care practices. Journal of General Internal Medicine. 2013;28:107-113
99. Cecil E et al. Patient and health-care factors associated with potentially missed acute deterioration in primary care: A retrospective observational study of linked primary and secondary care data. The Lancet. 2019;394:S30
100. De Lusignan S et al. Miscoding, misclassification and misdiagnosis of diabetes in primary care. Diabetic Medicine. 2012;29(2):181-189
101. Casas Herrera A et al. COPD underdiagnosis and misdiagnosis in a high-risk primary care population in four Latin American countries. A key to enhance disease diagnosis: The PUMA study. PLoS ONE. 2016;11(4):e0152266
102. Lanzarotto F et al. Is under diagnosis of celiac disease compounded by mismanagement in the primary care setting? A survey in the Italian Province of Brescia. Minerva Gastroenterologica e Dietologica. 2004;50(4):283-288
103. Statham MO, Sharma A, Pane AR. Misdiagnosis of acute eye diseases by primary health care providers: Incidence and implications. The Medical Journal of Australia. 2008;189(7):402-404
104. Haggerty JL et al. Room for improvement: Patients' experiences of primary care in Quebec before major reforms. Canadian Family Physician. 2007;53(6):1056-1057
105. Sun TQ, Medaglia R. Mapping the challenges of artificial intelligence in the public sector: Evidence from public healthcare. Government Information Quarterly. 2019;36(2):368-383
106. Shaw J et al. Artificial intelligence and the implementation challenge. Journal of Medical Internet Research. 2019;21(7):e13659
107. Yu K-H, Kohane IS. Framing the challenges of artificial intelligence in medicine. BMJ Quality and Safety. 2018;28(3):238-241
108. Veinot TC, Mitchell H, Ancker JS. Good intentions are not enough: How informatics interventions can worsen inequality. Journal of the American Medical Informatics Association. 2018;25(8):1080-1088
109. Popay J et al. Social problems, primary care and pathways to help and support: Addressing health inequalities at the individual level. Part I: The GP perspective. Journal of Epidemiology and Community Health. 2007;61(11):966
110. Fiske A, Henningsen P, Buyx A. Your robot therapist will see you now: Ethical implications of embodied artificial intelligence in psychiatry, psychology, and psychotherapy. Journal of Medical Internet Research. 2019;21(5):e13216
111. Blease C et al. Computerization and the future of primary care: A survey of general practitioners in the UK. PLoS ONE. 2018;13(12):e0207418
112. Rogers WA, Draper H, Carter SM. Evaluation of artificial intelligence clinical applications: Detailed case analyses show value of healthcare ethics approach in identifying patient care issues. Bioethics. 2021;35(7):623-633
