
Generative AI in Education: Technical Foundations, Applications, and Challenges

Written By

Sheikh Faisal Rashid, Nghia Duong-Trung and Niels Pinkwart

Submitted: 15 March 2024 Reviewed: 31 March 2024 Published: 20 May 2024

DOI: 10.5772/intechopen.1005402


From the Edited Volume

Artificial Intelligence and Education - Shaping the Future of Learning [Working Title]

Dr. Seifedine Kadry


Abstract

Generative artificial intelligence (GenAI) has emerged as a transformative force in various fields, and its potential impact on education is particularly profound. This chapter presents the development trends of “GenAI in Education” by exploring the technical background, diverse applications, and multifaceted challenges associated with its adoption in education. The chapter briefly introduces the technical background of GenAI, particularly the development of large language models (LLMs) such as those underlying ChatGPT and similar tools, and outlines key concepts, models, and recent technological advances. It then navigates the various applications of GenAI and LLMs in education, examining their impact on different levels of education, including school, university, and vocational training. Drawing on real-world examples and case studies, from personalized learning experiences to content creation and assessment, the chapter highlights how GenAI is reshaping the educational landscape. It also discusses the technical, ethical, and organizational/educational challenges of using the technology in education.

Keywords

  • generative AI
  • large language models
  • ChatGPT
  • generative AI in education
  • educational technologies

1. Introduction

In recent years, artificial intelligence (AI) has made remarkable progress in all areas of life, including education. Particularly with advances in generative AI (GenAI), its applications in teaching and learning have attracted stakeholders such as students, educators, researchers, and educational institutions worldwide [1]. Tools such as ChatGPT and DALL-E have further enriched the field, allowing students and teachers to hold real-time conversations with large language models (LLMs) and to create expressive artwork and digital images. GenAI sits at the forefront of artificial intelligence and machine learning, focusing on generating new content, such as text, images, music, video, code, or other data types. Unlike traditional AI methods that categorize data into predefined classes, GenAI models learn the underlying patterns and relationships within data to generate entirely new content. This ability to create new content offers immense potential for revolutionizing the educational landscape. Examples include applications that facilitate personalized learning experiences tailored to the unique needs of each learner; increased accessibility for students facing challenges such as learning disabilities, anxiety, or language barriers; and support for a variety of tasks such as coding, writing, art, and music. In addition, GenAI enables teachers to provide constructive feedback at scale, fostering iterative learning and improving writing skills. Furthermore, GenAI helps educational institutions provide customized support and information to students, automate tasks such as scheduling events, and generate promotional content.

Despite the potential innovations and opportunities, concerns and risks are also associated with using GenAI in education. For example, the ability of ChatGPT to correctly answer many practice and exam questions has raised concerns among many educational stakeholders about the use of artificially generated solutions [2]. An initial concern among teachers is that students may use ChatGPT to cheat on their assignments, and teachers may struggle to tell artificially generated content apart from human-generated content. This undermines the whole assessment and grading process, and a natural demand has arisen to detect GenAI-produced content with other AI tools, such as Turnitin, GPTZero, or OpenAI’s classifier. However, in the absence of other evidence, such technical detection methods are currently of limited help in regulating the use of AI in the classroom. The Educational Technology Lab (EdTec Lab) of the German Research Center for Artificial Intelligence (DFKI) Berlin advocates an objective assessment of the potential and limitations of the technology and warns against software that claims to automatically recognize text generated by ChatGPT [3]. Other concerns include personal data privacy and security, the consequences of academic dishonesty, students’ overestimation of GenAI’s capabilities and trustworthiness, and the inadvertent reinforcement of biases through system output and user interaction. This has led to a debate in which some educational institutions have banned the use of ChatGPT or similar GenAI tools for students, while others have welcomed their ethical and transparent use in education. Policymakers and educators have begun disseminating guidelines for students, teachers, and educational institutions that aim to promote academic integrity, ensure accessibility, and encourage ethical applications of this technology in educational settings [4, 5, 6].

Generative AI (GenAI) has the potential to formulate knowledge models and guide cognitive activities, aid learning while learners actively engage as collaborators, and empower learners to control their own learning experiences. These tools reflect complex thought processes fundamental to human understanding and are potent resources for students and educators. Innovations like AI-generated content and adaptive learning platforms are reshaping how educational content is delivered and consumed, making learning more accessible and tailored to individual learners’ needs [1]. This chapter aims to provide a comprehensive overview of GenAI in education, briefly describe its technical foundations, and highlight its emergent opportunities while acknowledging the challenges that must be addressed to ensure its responsible and equitable implementation.


2. Technical foundations of generative AI

2.1 What is generative AI?

Before ChatGPT was launched in late 2022, the AI most visible to the public was primarily what is known as discriminative AI. This form of AI specializes in sorting and categorizing information, serving as a foundational tool in many applications. However, the narrative began to shift with the rise of generative AI, particularly marked by the public availability of models like ChatGPT. Despite sharing common technological underpinnings, generative and discriminative AI diverge significantly in their objectives. Discriminative AI is adept at discerning and differentiating between various data categories. In contrast, generative AI aims to synthesize new content or outputs, drawing upon the inputs it receives and the extensive data understanding it has developed through its training [7].

Generative AI is a type of AI technology that autonomously creates content in response to natural language prompts through conversational interfaces. Unlike traditional methods that mainly curate content from existing web sources, generative AI actively generates new material. This content spans various formats embodying different aspects of human cognition, including natural language texts, images, videos, music, and even software code. Training generative AI models frequently involves unsupervised learning methods, which allow these models to process and comprehend large volumes of data autonomously. The goal is to discern patterns in the structure or creation of these data, enabling the AI to emulate and reproduce these patterns.

2.2 Text generative AI

Text generative AI employs a type of artificial neural network (ANN) known as the transformer. Within this category, a particular form called a large language model (LLM) is prevalent; these systems are called LLMs because of the scale of their parameters and their extensive linguistic capabilities. The type of LLM used in text generative AI such as ChatGPT is known as a generative pre-trained transformer, or GPT; the ‘GPT’ in ‘ChatGPT’ reflects this underlying technology.

ChatGPT is built on GPT-4, a version developed by OpenAI, which represents the latest evolution in their GPT series (as of the time the book chapter was written). Each new iteration of OpenAI’s GPT has shown significant improvements over its predecessors, driven by advancements in AI architectures, training methodologies, and optimization techniques. A notable aspect of its continuous development is the increasing scale of data used for training and the exponential growth in the number of parameters.

These parameters are akin to metaphorical knobs that fine-tune the performance of the GPT. They include the model’s ‘weights’: numerical values that dictate how it processes input data and generates output. GPT-4 substantially increases the number of these parameters, enabling more nuanced and sophisticated responses than earlier versions. This evolution reflects OpenAI’s commitment to enhancing the capabilities and accuracy of its language models, ensuring they remain at the forefront of AI technology in text generation.
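
To make these ideas concrete, the short sketch below loads a small, openly available GPT-style model and lets it continue a prompt. It is an illustration only, assuming the Hugging Face transformers library is installed; the model name ("gpt2") and the sampling settings are arbitrary choices for demonstration, not part of ChatGPT or GPT-4.

```python
# A minimal sketch (not from the chapter): generating text with a small,
# openly available GPT-style model via the Hugging Face "transformers" library.
# Model choice ("gpt2") and sampling settings are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Photosynthesis is the process by which plants"
outputs = generator(
    prompt,
    max_new_tokens=40,   # length of the continuation to generate
    do_sample=True,      # sample from the model's probability distribution
    temperature=0.8,     # lower values make the output more deterministic
)

print(outputs[0]["generated_text"])
```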

2.3 Image generative AI

Image generative AI systems, such as DALL-E and Stable Diffusion, primarily leverage Diffusion Models, a type of ANN that differs from traditional generative adversarial networks (GANs). Diffusion Models operate by gradually learning to reverse a process that adds random noise to an image. During training, the model is shown images with progressively more noise added, up to a completely random pattern; it learns to reverse this noise addition, so that at generation time it can effectively reconstruct a coherent image from randomness.

This process begins with a random pattern and then iteratively refines this pattern into a coherent image that aligns with a given prompt. Unlike GANs, which involve a generator and discriminator working in opposition, Diffusion Models rely on a single neural network that predicts how to remove noise at each step. Over numerous iterations, this network becomes adept at creating detailed and realistic images from noisy inputs.
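
The following schematic sketch illustrates this denoising loop in the spirit of DDPM-style sampling. It is a simplified illustration, not production code: noise_predictor stands in for a trained noise-estimation network (e.g., a U-Net), and the noise schedule values are typical but arbitrary assumptions.

```python
# Schematic sketch of the reverse (denoising) loop described above, in the
# style of DDPM sampling. "noise_predictor" stands in for a trained network
# that estimates the noise present in an image at step t; it is a
# hypothetical placeholder, not a real trained model.
import torch

T = 1000                                # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)   # noise schedule (assumed values)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def sample(noise_predictor, shape=(1, 3, 64, 64)):
    x = torch.randn(shape)              # start from pure random noise
    for t in reversed(range(T)):
        eps = noise_predictor(x, t)     # predicted noise at step t
        # Remove the predicted noise to estimate a slightly cleaner image.
        mean = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:
            # Add a small amount of fresh noise, as in stochastic sampling.
            x = mean + torch.sqrt(betas[t]) * torch.randn_like(x)
        else:
            x = mean                    # final step: the denoised estimate
    return x
```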

Diffusion Models have shown remarkable capability in generating high-quality images that are often more detailed and varied than those produced by GANs. They excel in creating diverse outputs based on textual descriptions, from realistic photographs to artistic renderings. This advancement in AI-driven image generation represents a significant shift from the GAN-based approach, offering more flexibility and potential for creative applications.

2.4 Preliminary of multimodal generative AI

Text- and image-generative AI models excel at tasks within their own modality but are limited to that specific data type and cannot jointly process text, images, videos, or audio. This limitation matters in real-world applications, where multimodal data are ubiquitous. To bridge this gap, Multimodal Large Language Models (MLLMs) [8] and Large Vision Models (LVMs) [9] have been developed. MLLMs and LVMs combine a large language model with multimodal adaptors and various diffusion decoders, allowing them to process and generate outputs across different media formats.

The concept of multimodality in these models is inspired by human communication, which often involves multiple channels. MLLMs and LVMs are trained on extensive multimodal datasets, including image captions, video descriptions, and audio transcripts. They can recognize patterns in these data and generate coherent outputs that match the input modality. Despite these advancements, integrating new modalities into existing models remains challenging. It requires extensive data that include the new modality and often necessitates retraining from scratch, demanding significant computational resources and data quality efforts. Several recent frontier research studies have been devoted to this direction, for example, Emu2 [10] and Google Gemini [11].
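
As a small taste of multimodality, the sketch below maps an image to text (captioning) with an off-the-shelf vision-language model via the Hugging Face transformers pipeline. This is an assumption-laden illustration of the simplest image-to-text case, not the Emu2 or Gemini systems cited above; the model name and the image path are placeholders.

```python
# A small illustrative sketch (an assumption, not the systems cited above):
# a multimodal model that maps an image to a text caption, via the Hugging Face
# "transformers" image-to-text pipeline. Model name and image path are
# placeholders chosen for illustration.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

result = captioner("classroom_photo.jpg")   # path to any local image file
print(result[0]["generated_text"])          # e.g., a one-sentence caption
```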

Overall, AI is moving towards more adaptable, efficient, and versatile systems capable of handling a broader range of tasks and data types, reflecting a more holistic approach to artificial intelligence.


3. Applications of generative AI in education

Generative AI (GenAI) can be used in a variety of educational contexts, from creating innovative content and generating personalized learning materials to automating assessment and feedback. For example, students can use GenAI to help with homework or to explore creative writing and art, educators can use these tools to create engaging teaching materials or to provide personalized learning experiences, and institutions can use GenAI to improve their administrative and educational services. GenAI applications can be used as standalone tools or integrated into other systems or platforms. In conjunction with other forms of AI, GenAI models, especially LLMs, can enhance and support learning activities at all levels of education. A recent report published jointly by the German Research Center for Artificial Intelligence (DFKI) and the mmb Institut GmbH for the Deutsche Telekom Foundation highlighted applications of AI in school education [12]. The report describes practical use cases of GenAI models like ChatGPT in school education and provides a structured overview of AI systems that benefit learners, educators, and institutions and that will shape everyday school life in the future. In the context of INVITE, an innovation competition funded by the German Federal Ministry of Education and Research (BMBF), VDI/VDE-IT comprehensively introduces LLMs and their transformative impact on continuing vocational training [13]. Kasneci et al. presented many possibilities that can be realized using GenAI models such as ChatGPT or other LLMs [14]. They highlighted how these models can be used to create educational content, improve student engagement and interaction, and personalize learning experiences, while emphasizing that the use of LLMs in education requires students and teachers to develop the competencies and literacies necessary to understand both the technology and its limitations, as well as the unexpected brittleness of such systems.

The applications of AI in education can be divided into three categories: student-supporting applications, teacher-supporting applications, and institute- or system-supporting applications [15]. In addition to providing support in these three categories, they also have the potential to promote learning activities for children with disabilities, support online and collaborative learning, and boost professional development and training. This section provides an overview of some of the applications of GenAI supporting learning and educational activities in these directions.

3.1 Student-supporting applications

Generative AI (GenAI) has the potential to serve as a learning companion and support educational activities at all levels. Students can use GenAI as a general search tool to get answers to their immediate questions about a specific topic, get support in completing homework or preparing for their exams, and learn new skills.

3.1.1 Support in primary to higher education

In primary school education, GenAI can help children improve their reading and writing skills by suggesting syntactic and grammatical improvements. It can also improve writing style, develop critical thinking skills, and improve reading comprehension by providing students with summaries and explanations of complex texts, making the material more accessible. In a recent study, Han and Cai present a visual storytelling prototype based on generative AI tools, such as ChatGPT, Stable Diffusion, and Midjourney, for children’s creative expression, storytelling, and literacy development. It was found that generative AI could significantly enhance reading and writing skills and promote creative storytelling and literacy development in young learners [16]. Another study on the use of GenAI, specifically ChatGPT 3.5 and 4, in primary school education, showed its potential to personalize learning material and cater to students’ diverse knowledge and learning abilities. The study involved 110 students and demonstrated that generative AI could support motivated learning and skill development, suggesting a promising future for its use in school education [17].

Generative AI (GenAI) can provide valuable support for middle and high school students in language acquisition and in mastering different writing styles in subjects such as language and literature. GenAI can also help students prepare for their exams and assessments. GenAI tools can generate practice problems and quizzes for subjects like mathematics and physics, helping to improve understanding, contextualization, and retention of the studied material. They also help improve problem-solving skills by providing students with detailed explanations, step-by-step solutions, and engaging supplementary questions. This approach clarifies the reasoning behind solutions, promotes analytical thinking, and encourages creative problem-solving strategies. There is also potential for GenAI, specifically LLMs, to enhance science, technology, engineering, and mathematics (STEM) education through a multimodal analogical reasoning approach. A study by Cao et al. demonstrated how GenAI transforms intricate principles in mathematics, physics, and programming into comprehensible metaphors and converts them into visual form to further augment the educational experience [18]. Khanmigo.ai, for example, powered by GenAI tools, provides a range of services that support students in improving their writing and critical thinking skills, help them prepare for their maths quizzes, and even offer real-time feedback, debate, and collaboration. Similarly, tools.fobizz.com provides a suite of GenAI-powered tools to support both educators and learners.

At the university level, GenAI models such as ChatGPT are recognized for their role in supporting research, completing assignments, and fostering the development of critical thinking and problem-solving skills. These tools can enhance students’ research skills by providing access to comprehensive information and relevant resources on specific topics. Students can use them to create summaries, organized outlines, and initial sketches of their intended research subject. These tools can also help students efficiently comprehend important concepts and simplify their writing processes. Furthermore, LLMs can be used to identify unexplored areas and current research trends, enhancing students’ understanding of the subject matter and facilitating critical analysis [14]. For example, Castillo-Segura et al. demonstrate the potential of generative AI in accelerating the research process, specifically in conducting systematic literature reviews in academia. Their research compares six GenAI tools (Forefront, GetGPT, ThebAI, Claude, Bard, and H2O), each with its respective large language model (LLM), when classifying 596 articles in the screening phase of a systematic review in the field of medical education. It was observed that generative AI tools can significantly reduce the time and effort required for systematic literature reviews (SLRs) by facilitating article identification and classification, demonstrating the practical benefits of AI in academic research [19]. Jonsson and Tholander present a study of a group of university students using generative machine learning to translate natural language into computer code. The study explores how AI can be understood in terms of co-creation, focusing on how AI may serve as a resource for understanding and learning and, on the other hand, how it affects the creative process [20].

3.1.2 Support in professional education

The integration of AI into daily life and the workplace is increasing, making it crucial to prepare students for future technological demands. According to the World Economic Forum, AI integration will result in a mixed job outlook by 2027, with 25% of companies anticipating job losses and 50% expecting job growth. This trend highlights the significance of providing students with skills in emerging technologies [21]. Familiarity with AI and chatbots may become essential for entering the workforce. Furthermore, technology companies like Google and Microsoft have already announced plans to integrate AI into their products, indicating the increasing prevalence of AI in various products and services [22, 23]. Given these developments, the use of AI technologies in education prepares students for future job markets and offers opportunities to enhance learning beyond traditional methods.

Large language models (LLMs) have the potential to enhance professional training in a wide range of fields by providing tailored support for the development of specific competencies that are critical for different professional environments. By fine-tuning LLMs on domain-specific corpora, these models can generate industry-specific language and help learners acquire the specialized vocabulary and stylistic nuances needed to write technical reports. This extends the capabilities of LLMs beyond general language processing to specialized training tools that can simulate real-world professional writing and communication scenarios. For example, the adaptability of LLMs to specific professional needs, such as programming, report writing, project management, decision-making, etc., underlines their usefulness in enhancing skills critical to modern workplaces. Tools such as GitHub Copilot, powered by OpenAI’s Codex, show how AI can assist with real-time coding tasks, suggesting code completions and providing programming insights directly within the coding environment [24]. Such tools can help generate code, understand programming languages, and even offer debugging support, making them invaluable tools for novice and experienced programmers. The role of LLMs in professional training also emphasizes the importance of domain-specific tuning and the integration of professional expertise to maximize their effectiveness [14, 25]. In the medical field, for example, LLMs can simulate patient interactions, help with the generation of medical reports, and even assist in medical research by providing up-to-date information and generating hypotheses based on current medical literature [26, 27]. Pavlik discussed the potential of generative AI platforms such as ChatGPT for journalism and media education, highlighting the importance of understanding the capabilities and limitations of such technologies [28]. However, integrating LLMs into professional education requires a collaborative approach involving educators, industry professionals, and AI developers. This collaboration ensures that LLMs are fine-tuned for technical accuracy, ethical considerations, professionalism, and the nuanced understanding required in specific fields.

3.1.3 Support in learning with disabilities

Generative AI can help people with disabilities or neurodivergent people in the context of education, work, and leisure. By enabling inclusive learning strategies, GenAI can help create adaptive writing tools, translate complex texts into more accessible formats, and highlight key content across different media, ensuring that educational content is accessible to all students, regardless of their disabilities. For example, GenAI-powered applications such as “goblin.tools” can help children with Attention Deficit Hyperactivity Disorder (ADHD) simplify tasks they find overwhelming or difficult [29]. LLMs can be seamlessly integrated with speech-to-text and text-to-speech solutions to assist students with visual impairments. Earlier research into the use of video models to teach generative spelling to a child with autism spectrum disorder also showed the potential of employing GenAI for the generation of such video models [30]. Moreover, GenAI models can be integrated into virtual reality (VR) and extended reality (XR) technologies to provide a wide range of innovative use cases and interaction concepts that can help reduce barriers for people with specific physical or mental needs, e.g., through simulations, gamification, or training scenarios [31, 32].

Because GenAI has the potential to reflect and perpetuate societal biases, including those related to disability, the use of these technologies must be undertaken with careful consideration of the ethical implications, potential biases, and the need for professional oversight to ensure that the benefits of LLMs are shared equitably among all learners, including those with disabilities [33]. There is also a need to work with therapists, educators, and other professionals to meet the specific needs of learners. The interactions of people with disabilities with LLMs also highlight the need to involve those with lived experience of disability in the development and training of LLMs to ensure that these models serve as empowering tools rather than sources of further marginalization [34].

3.2 Teacher-supporting applications

Teachers can use GenAI to enhance their pedagogical activities and to ensure that students achieve the desired learning outcomes. GenAI helps them create tailored learning materials, generate assignments and quizzes, provide feedback, and assist in developing or assessing student exams [14].

3.2.1 Support in creation of learning material

Generative AI (GenAI), specifically LLMs, can be used to create learning content, exercises, quizzes, presentation slides, etc., for a wide range of subjects and educational levels, adapted to meet the diverse needs of students [35]. For example, LLMs can support curriculum development, teaching methodologies, personalized study plans and learning materials, student assessment, and more in medical education [36]. Similarly, Rüdian and Pinkwart presented the use of LLMs (ChatGPT 3.5) to generate learning content for a concrete micro-learning template in German language teaching. Teachers provide a topic as input; the approach then elicits the required information with instructional prompts and combines the responses into a language learning unit. The quality of the generated learning units was assessed for correctness and appropriateness. The results were promising overall, but the need for a “teacher-in-the-loop” is suggested [37]. Building on existing ideas, teachers can also use GenAI to create innovative discussion topics, animations, and short stories to enhance student engagement or group discussion. Taking the example of creating model texts, an LLM can be asked to “create a discussion about the use of AI in school”, and the generated texts can then be adapted by the teacher according to the didactic goal of the learning content [12].
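
As an illustration of this teacher-provides-a-topic workflow, the hedged sketch below prompts a general-purpose LLM to draft a short learning unit. It is not the system described in [37]; it assumes the openai Python package (v1.x) with an API key configured, and the model name is a placeholder.

```python
# Illustrative sketch only (not the system described in [37]): prompting an
# LLM to turn a teacher-provided topic into a short learning unit with a quiz.
# Assumes the "openai" Python package (v1.x) and an API key in the environment;
# the model name is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

topic = "The water cycle"  # input provided by the teacher

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You are an assistant that drafts learning material "
                    "for teachers. The teacher will review and adapt it."},
        {"role": "user",
         "content": f"Create a short learning unit on '{topic}' for grade 6: "
                    "a 150-word explanation, three comprehension questions, "
                    "and one multiple-choice quiz item with the answer marked."},
    ],
)

print(response.choices[0].message.content)  # draft for the teacher to review
```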

3.2.2 Support in teaching activities

Generative AI (GenAI) can help teachers provide personalized learning experiences for students by analyzing their responses to specific learning tasks. GenAI can give feedback, hints, or suggestions for learning tasks or generate materials matching the student’s learning needs or skills. For example, LLMs can support teachers in creating inclusive learning activities, questions, and assessment exercises or quizzes targeted to students at different levels of knowledge, ability, and learning styles. Phung et al. present a study on programming showing how GenAI can improve STEM education. Their research highlights the broader capabilities of GenAI to support learning and human tutors in different programming education scenarios. It does so by providing personalized digital tutoring, generating hints, and creating tasks and explanations [38]. Such support can save teachers time and effort in creating personalized materials and allow them to concentrate on other aspects of teaching.

3.2.3 Support in grading of assignments

Generative AI (GenAI) can support teachers by automating the grading of assignments or exams and providing immediate feedback to students. For instance, LLMs can be used to identify potential strengths or weaknesses in written essays or other writing assignments and provide individualized feedback to students [39, 40]. It speeds up the individual evaluation process and allows for more consistent and objective grading [41]. Furthermore, LLMs can also be used to detect plagiarism, which can help to prevent cheating on submitted writing assignments. Additional AI tools can also be used to analyze assessment data to identify trends, such as common areas where students struggle, enabling targeted interventions to support learning.
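
A hedged sketch of rubric-based feedback generation is shown below. It assumes the openai Python package (v1.x); the model name, rubric, and JSON output format are illustrative assumptions, and any LLM-produced grade should be treated as a draft for the teacher to review rather than a final mark.

```python
# Illustrative sketch only: asking an LLM to grade a short answer against a
# rubric and return structured feedback. Assumes the "openai" package (v1.x);
# the model name and rubric are assumptions. The result is a draft for the
# teacher, not a final mark.
import json
from openai import OpenAI

client = OpenAI()

rubric = "Award 0-5 points for accuracy and 0-5 points for clarity."
student_answer = "Evaporation happens when the sun heats water and it becomes vapour."

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You are a grading assistant. Reply with JSON containing "
                    "the keys 'accuracy', 'clarity', and 'feedback'."},
        {"role": "user",
         "content": f"Rubric: {rubric}\nStudent answer: {student_answer}"},
    ],
)

draft = json.loads(response.choices[0].message.content)  # may need validation
print(draft["feedback"])
```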

3.3 System-supporting applications

3.3.1 Support in administration activities

Large language models (LLMs) can support educational institutions through chatbots that provide instant answers to questions on various administrative topics. For example, LLMs can be used to respond to queries from potential applicants and provide them with up-to-date information. These models can help enrolled students register for courses and provide administrative information, such as courses, exams, and schedules. Students can also use LLM-based chatbots to find news or other information. LLM-based chatbots can also be set up for international students and staff to provide multilingual information. In addition, LLMs can generate offers or advertisements for learning opportunities based on various factors, such as target audience, age group, gender, and location.

Another potential application is the automated tagging of learning resources to provide metadata for effective management and efficient discovery of learning resources [42]. LLMs can be used to develop solutions for automated or semiautomated generation of metadata fields from learning resources using explicitly defined metadata standards. This would significantly facilitate the implementation of personalized learning or intelligent tutoring systems by making it easier to find appropriate learning content without human intervention.
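
One possible (and deliberately simple) way to semiautomate such tagging is zero-shot classification, sketched below with the Hugging Face transformers library. This is an illustrative assumption rather than the solution proposed in [42]; the candidate labels and model name are placeholders, and a human would still review the suggested metadata.

```python
# One possible approach (an illustration, not the cited solution): semiautomated
# tagging of a learning resource's subject area with zero-shot classification
# from the Hugging Face "transformers" library. Labels and model are assumptions;
# a human would review the suggested metadata before publishing it.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

resource_text = (
    "This unit introduces Newton's laws of motion with worked examples "
    "and exercises for secondary school students."
)
candidate_subjects = ["physics", "biology", "history", "mathematics", "literature"]

result = classifier(resource_text, candidate_labels=candidate_subjects)
print(result["labels"][0], result["scores"][0])  # top suggested subject tag
```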

3.3.2 Support in tutoring and mentoring

Generative AI (GenAI) can power intelligent tutoring systems that provide personalized guidance and support to students. These systems can analyze student responses, identify misconceptions, and generate customized explanations or additional practice materials to address individual learning needs. AI tutors can adapt their teaching strategies based on student progress and learning styles, creating a more effective and tailored learning environment. The integration of LLMs into existing learning management systems (LMS) can provide tutoring or mentoring support to students through an educational chatbot. One such example is “tech4compKI”, a Federal Ministry of Education and Research (BMBF)-funded project of the Educational Technology Lab, DFKI Berlin, in which students ask questions and the LLM-supported chatbot “BiWi AI Tutor” retrieves information from course material (structured knowledge of the module, learning material from lectures and seminars, as well as organized information) and analyzes it to answer the questions.
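
The retrieval step behind such a tutor can be sketched as follows. This is a hypothetical, minimal illustration using TF-IDF similarity from scikit-learn, not the actual “BiWi AI Tutor” implementation; in practice, the retrieved passages and the student’s question would be passed to an LLM to generate the grounded answer.

```python
# A minimal, hypothetical sketch of the retrieval step behind such a tutor
# chatbot (not the actual "BiWi AI Tutor"): course materials are indexed, the
# passages most similar to the student's question are retrieved, and would then
# be passed to an LLM together with the question to ground its answer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

course_chunks = [
    "Lecture 3: Formative assessment gives feedback during the learning process.",
    "Seminar notes: Summative assessment evaluates learning at the end of a unit.",
    "Module handbook: The exam consists of a written test and a project report.",
]

question = "What is the difference between formative and summative assessment?"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(course_chunks)   # index the material
query_vector = vectorizer.transform([question])

scores = cosine_similarity(query_vector, doc_vectors)[0]
top_chunks = [course_chunks[i] for i in scores.argsort()[::-1][:2]]

prompt = "Answer using only this material:\n" + "\n".join(top_chunks) + f"\n\nQuestion: {question}"
print(prompt)  # this prompt would be sent to an LLM to generate the tutor's reply
```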

3.3.3 Support in collaborative and remote learning

Generative AI (GenAI) can simulate a collaborative learning environment by acting as a peer to provide support in various collaborative learning tasks, such as peer discussion, to explore a research question or a discussion topic. These models can give immediate feedback on writing artifacts or generate new ideas through peer discussion. One example is PEER (Plan, Edit, Explain, Repeat), a collaborative language model trained to imitate the entire writing process. The model can draft, make suggestions, propose edits, and explain its actions. The model also uses self-learning techniques to adapt to new areas of learning while demonstrating strong performance in different domains and editing tasks [43]. The use of GenAI models in online education systems has the potential to transform remote and group learning by providing interactive, responsive, and tailored responses to each student. In group and remote learning environments, GenAI models can provide a structure for discussion, offer real-time feedback, and give personalized guidance to students, thereby increasing student engagement and participation while adapting to the dynamic nature of group discussions and debates. For example, LLMs can be used to manage conversations and balance the preferences of group members in collective decision-making tasks (such as scheduling meetings) with fair consideration for all participants by extracting individual preferences and suggesting options that satisfy most group members [44].


4. Integration of generative AI in educational systems

4.1 Teacher’s journey from awareness to creation

Integrating GenAI tools into educational settings is not just an emerging trend; it’s rapidly becoming an essential part of modern teaching methodologies across multiple disciplines. However, teachers’ perspectives towards the adoption of these tools vary significantly, predominantly based on their awareness, understanding, and experience with GenAI technologies [45, 46]. This discussion explores educators’ perceptions regarding integrating GenAI in education, focusing on transitioning from awareness to classroom creation. Possible stages of adoption and integration are as follows.

Awareness: The initial phase involves teachers becoming aware of GenAI’s potential educational applications. At this stage, the primary emotion is curiosity mixed with a hint of skepticism, especially among those with limited background in technology. The challenge lies in transforming this curiosity into a genuine interest capable of driving further exploration. As GenAI continues to evolve, the argument for its inevitability in education strengthens; this phase is crucial for setting the groundwork for future engagement.

Learning: The learning phase is marked by teachers actively seeking knowledge about GenAI, its capabilities, and how it might be leveraged to enhance teaching and learning experiences. This stage is often accompanied by a range of emotions, from excitement at the possibilities to frustration over the learning curve of new technology. Professional development opportunities, workshops, and online resources support educators through this phase. Here, the divide between tech-savvy educators and those from non-IT backgrounds becomes most apparent, necessitating tailored learning paths to ensure inclusivity.

Familiarity: As educators gain hands-on experience with GenAI tools, their comfort level increases. This familiarity phase is characterized by growing confidence in using these technologies for specific, often limited, tasks within the educational context. For many, this is the stage where the potential of GenAI to transform aspects of teaching and research becomes tangible. Experiences shared among teachers through forums, collaborations, and professional networks further nurture this growing familiarity, turning apprehension into acceptance.

Creation: The final phase sees teachers actively integrating GenAI tools into their curriculum, not merely as an adjunct technology but as a central component of their instructional strategies. This creation phase is where the full potential of GenAI integration is realized, with educators innovating new ways to engage students, personalize learning experiences, and streamline administrative tasks. Here, the emphasis shifts from understanding GenAI to leveraging its capabilities to foster more effective, efficient, and exciting educational environments.

4.2 Learner’s perspectives towards GenAI

Understanding the evolving landscape of educational technologies, especially with the integration of GenAI, requires considering learners’ perspectives [47, 48]. As learners navigate these technological advancements in their educational journey, their attitudes and skills development play a critical role. The following is a focused exploration of learners’ potential perspectives towards GenAI integration in education:

Enhancing learning efficiency: At the outset, learners perceive GenAI as a revolutionary tool, much like a “search engine 2.0,” which propels the efficiency of study practices to new heights. Unlike traditional search engines that return a vast array of links requiring further analysis, GenAI, such as ChatGPT, provides concise, tailored answers, significantly reducing the effort and time needed for research. Students may foster a positively inclined attitude towards these technologies, recognizing their potential as a study aid and an essential competency for their future careers. They understand that integrating GenAI technologies into their learning strategies can vastly enhance their understanding and retention of information.

Navigating the accuracy of GenAI: An essential perspective that learners must adopt is the understanding that, for all their advancements, GenAI tools are not infallible. The ability to discern and validate the information provided by GenAI becomes crucial. Hence, developing fact-checking skills is imperative for students, ensuring they can distinguish between accurate information and potential inaccuracies generated by these tools. This skill is particularly significant because GenAI sometimes creates plausible but erroneous content. Encouraging a critical approach towards accepting information will enable learners to utilize these technologies effectively, ensuring their reliance on GenAI complements rather than compromises their educational integrity.

Personalized learning support: GenAI technologies have the potential to revolutionize the concept of personalized learning through virtual 1:1 coaching. In this vision, GenAI is an ever-present tutor, available to address learners’ queries instantaneously, guide them through complex problems, and provide tailored learning experiences. This perspective views GenAI not merely as a tool but as a learning partner capable of fostering the self-paced practice of foundational skills in learners. Such 1:1 coaching can fill gaps in understanding, offering explanations and resources customized to each learner’s needs and pace. For this potential to be fully realized, the application of GenAI in education must be guided by ethical standards and pedagogical principles, ensuring that these technologies genuinely support and enhance the learning experience.

4.3 Administrative roadmap to integrate GenAI

Strategic planning for GenAI integration: Strategic planning forms the bedrock of successful GenAI integration within educational settings. Administrators must embark on a detailed analysis of the current technological infrastructure, identifying gaps and envisioning a roadmap for incorporating GenAI tools. This includes addressing hardware and software needs, internet bandwidth, and cybersecurity measures. The strategic plan should align with the institution’s broader educational goals, ensuring that GenAI adoption enhances, rather than disrupts, the learning experience. It may involve setting up pilot programs to test the effectiveness of GenAI tools in specific subjects or activities and gathering data to inform broader implementation strategies.

Comprehensive resource allocation: Beyond the initial enthusiasm for GenAI’s potential, administrators face the practical challenge of resource allocation. It goes beyond budget considerations to encompass a holistic approach that addresses equity, access, and sustainability. For instance, ensuring that all students, including those from underprivileged backgrounds or with special needs, have access to these technologies is crucial. Administrators may discuss partnerships with tech companies for donations or discounts, grants, and other financing models to support this equitable access. Moreover, resource allocation is not solely about technology procurement; it also encompasses investing in human capital—recruiting staff adept at blending GenAI tools with teaching methodologies or providing ongoing training for current educators.

Ethical considerations and digital citizenship: The ethical deployment of GenAI in education requires administrators to grapple with questions of data privacy, intellectual honesty, and the potential for technology misuse. Establishing a framework that promotes ethical use while encouraging innovation is a balancing act. Discussions may revolve around developing comprehensive policies that govern data use, consent protocols, and transparency measures. Furthermore, fostering a culture of digital citizenship, where students learn to use GenAI responsibly, becomes a shared mission. This could include curriculum updates to cover topics like digital ethics, privacy rights, and the implications of AI on society.

Professional development tailored to GenAI: Professional development is a cornerstone of effective GenAI integration, necessitating discussions on equipping teachers with the necessary skills and confidence to use these tools in the classroom. Training programs might need to be overhauled to include GenAI competencies, ranging from technical know-how to pedagogical strategies for integrating AI tools into lesson plans. Moreover, creating a supportive community where teachers can share experiences, challenges, and best practices is vital. Such initiatives could include internal workshops, online forums, or collaboration with external experts and institutions. The goal is to foster an environment where teachers feel empowered to innovate while ensuring that GenAI enhances the educational experience.

Policy formulation and student preparedness: Crafting policies that reflect the complexities of GenAI use in education is another pivotal discussion point. These policies must address academic integrity, ensuring students leverage GenAI as a learning aid without compromising their intellectual development. For instance, guidelines must clearly define how and when GenAI can be used for assignments, projects, and research. Additionally, preparing students for a future where AI is ubiquitous involves adjusting curricula to include critical thinking, problem-solving, and digital literacy skills. Administrators must consider how education can adapt to prepare students to use AI and understand its impact on society and the workforce.

Engaging the broader educational community: Ultimately, successful GenAI integration depends on the engagement of the entire educational community—students, parents, teachers, and staff. Administrators should lead efforts to educate these stakeholders on the benefits and challenges of GenAI, setting realistic expectations while addressing concerns and soliciting feedback. This may involve community forums, informational newsletters, and transparent reporting on pilot programs and initiatives. By fostering an inclusive dialog, administrators can build trust and enthusiasm for the transformative potential of GenAI in education.


5. Potential risks and challenges

5.1 Imperfection and polluted content

The proliferation of generative AI brings risks and challenges, particularly in the educational context. A primary issue is the quality control of the generated content. While these AI systems can create high-quality educational materials, they can also produce inaccurate or misleading information, especially in scenarios not covered by their training datasets. This problem is related to AI “hallucination”, where the AI produces content that deviates significantly from factual accuracy, reflecting the AI’s own interpretations rather than an accurate representation of the information. These hallucinations occur because AI systems, in their current form, do not possess proper knowledge or understanding [49, 50]. In practical terms, if an AI model is uncertain about the factual accuracy of a piece of information, e.g., a historical event, it may provide an educated guess rather than an accurate answer. Compared to carefully curated textbooks, AI-generated content, commonly sourced from the vast and varied Internet, often lacks the rigorous vetting necessary for educational purposes. This discrepancy is significant because the Internet, while a treasure trove of information, can also contain discriminatory and unethical language.

Generative AI’s (GenAI’s) ability to alter or fabricate images and videos, creating highly realistic ‘deepfakes,’ poses a unique challenge to educators and students alike. These deepfakes are increasingly indistinguishable from authentic materials, making it easier to produce and disseminate ‘fake news’ and other forms of misleading information. The absence of stringent regulations and robust monitoring systems means potentially biased or erroneous AI-generated materials are becoming more prevalent online, influencing one of the primary knowledge sources for learners globally. This situation particularly concerns young learners, who may lack the background knowledge needed to discern inaccuracies or biases in AI-generated content. Moreover, there is a legal aspect to consider: using content without proper consent or attribution can lead to copyright issues and undermine the integrity of educational resources. This issue also creates a recursive challenge for future AI models, as they might be trained on AI-generated content from the Internet, perpetuating and amplifying existing biases and errors. This cycle underscores the need for more effective oversight and ethical standards in AI-generated content, particularly in educational contexts. The challenge is exacerbated in educational settings, where critical thinking and accurate information are foundational.

Addressing these challenges requires a multifaceted approach. A potential solution lies in applying reinforcement learning from human feedback (RLHF), a technique in which AI models are fine-tuned based on evaluations of their outputs by human experts. This method allows the models to iteratively improve their content generation to align more closely with factual accuracy and ethical standards. Furthermore, developing and utilizing foundational educational datasets for fine-tuning LLMs can ensure that the AI systems have a solid base of verified and unbiased information, specifically tailored for educational purposes. These datasets would consist of meticulously curated and peer-reviewed educational content, encompassing various subjects and perspectives to minimize biases and inaccuracies. Additionally, implementing robust monitoring systems and ethical guidelines for AI-generated content can help safeguard against the dissemination of misleading information. Educators and developers must collaborate to integrate these solutions, ensuring AI tools enhance learning experiences while maintaining integrity and factual correctness. This approach addresses the immediate concerns of content quality and lays the groundwork for responsible and ethical AI use in education.

5.2 Ethical and societal concerns

Integrating AI into education requires careful consideration of ethical implications, especially in contexts where comprehensive governmental regulations may not fully address the rapid advancements in AI technologies [51, 52]. A pivotal concern is finding a harmonious balance between leveraging the benefits of AI-driven educational tools and safeguarding student privacy. The effectiveness of these tools largely depends on their ability to analyze extensive datasets, which poses significant questions regarding how student information is utilized. Educators, therefore, bear the critical responsibility of rigorously evaluating AI technologies before their implementation. This entails thoroughly examining the ethical frameworks governing AI tools to ensure they are in harmony with the core values of educational institutions. It also involves fostering collaborative efforts with administrators, parents, and other stakeholders to create a unified set of ethical standards. Additionally, educators must implement robust mechanisms for continuous assessment to ensure that AI applications consistently enhance the educational experience without detracting from it. This commitment includes ongoing professional development to stay informed on the evolving challenges and technological progress in AI’s role in education.

An ethical approach to integrating AI in education further includes considerations of accessibility and equity. It is imperative to guarantee that AI’s incorporation into the classroom does not deepen existing educational inequalities. Effective strategies must be established to ensure equitable access to AI technologies, aiming to reduce the achievement gap and prevent the expansion of the digital divide. This inclusive approach guarantees that students from diverse backgrounds can equally benefit from the innovations in educational technology [53]. Moreover, integrating a human-in-the-loop system is crucial to complement AI’s capabilities [54, 55]. This provides a mechanism for human oversight that ensures AI-driven decisions are appropriate and ethical. This human presence facilitates more accurate and context-aware responses and introduces an essential layer of accountability and transparency in AI applications in education.

Artificial intelligence (AI) systems can inherit and amplify biases present in their training data or design. In an educational context, this could lead to unfair treatment of students based on race, gender, socioeconomic status, or other factors. Incorporating mechanisms for human feedback is vital for identifying and eliminating biases and inaccuracies within AI systems. Educators and users should be empowered to report and correct biased information and errors, thereby improving the reliability and fairness of the system. This feedback loop is essential for cultivating an AI ecosystem that is both reflective and responsive to the diverse needs and values of its user base, promoting an educational environment that is equitable, inclusive, and grounded in ethical principles.

5.3 Environmental costs

The computational and environmental costs of training large-scale generative AI models are considerable [56]. These models require extensive computational resources, leading to significant energy consumption and carbon dioxide (CO2) emissions. While such models have demonstrated remarkable performance in various natural language processing tasks, their increasing size has posed challenges for deployment and raised concerns about their environmental and economic impact due to high energy consumption. As the field progresses towards more complex and capable models, managing these costs and their environmental impacts is crucial. In the educational sector, this presents an opportunity to integrate discussions about sustainable AI practices into the curriculum, fostering a generation of students who are not only technologically proficient but also environmentally conscious.

Fortunately, researchers have responded by proposing small language models (SLMs) and, more recently, 1-bit LLMs [57, 58, 59, 60]. An SLM is a language-processing AI with far fewer parameters than the largest models. In machine learning and AI, parameters are the parts of the model learned from the training data that determine the model’s behavior. Smaller language models are designed to be lighter and more efficient, making them easier to deploy on devices with limited computational resources, such as mobile phones or embedded systems. While they may not achieve the same level of performance or accuracy as larger models on complex language tasks, small language models can still understand and generate human-like text, perform classification tasks, and more.


6. Conclusions

This chapter delves into the technological underpinnings, diverse applications, challenges, and ethical considerations of integrating generative artificial intelligence into educational systems. It comprehensively explores how GenAI can enhance personalized learning, streamline administrative tasks, and offer innovative teaching and assessment tools while highlighting the need for a critical approach to ensure accuracy and equitable access. The chapter emphasizes the transformative potential of GenAI in education, from improving student engagement and learning outcomes to fostering professional development and addressing the digital divide.

In particular, GenAI holds great promise for education through personalized learning, automated content generation, virtual tutoring, language learning, creativity enhancement, and automated assessment. However, the chapter also underscores the importance of addressing the imperfections and environmental impact of GenAI technologies and the necessity of continuous evaluation and adaptation to leverage GenAI responsibly and effectively in educational contexts. Effective implementation requires seamless integration of GenAI into existing curricula, training of teachers, attention to ethical concerns, design with a focus on student needs, and continuous evaluation of the impact on educational outcomes.

Additionally, it addresses ethical concerns and the necessity for robust policy frameworks to ensure equitable access and prevent misuse of AI technologies in education. The authors emphasize the aim of guiding educators and policymakers in making informed decisions to leverage GenAI to improve educational outcomes and foster a more inclusive, adaptive, and future-ready learning environment.

References

  1. 1. Farrelly T, Baker N. Generative artificial intelligence: Implications and considerations for higher education practice. Education Sciences. 2023;13(11):1-14. [Online]. Available from: https://www.mdpi.com/2227-7102/13/11/1109
  2. 2. Lo CK. What is the impact of ChatGPT on education? A rapid review of the literature. Education Sciences. 2023;13(4):1-15. [Online]. Available from: https://www.mdpi.com/2227-7102/13/4/410
  3. 3. Pinkwart N, Paaßen B, Burchardt A. Chancen, Potenziale und Grenzen von ChatGPT in der Bildung – Stellungnahme des DFKI Labor Berlin. 2023. Available from: https://www.dfki.de/web/news/chancen-potenziale-und-grenzen-von-chatgpt-in-der-bildung-stellungnahme-des-dfki-labor-berlin [Accessed: Feb 01, 2024]
  4. 4. Department of Education UK. Policy paper - Generative artificial intelligence (AI) in education. 2023. Available from: https://www.gov.uk/government/publications/generative-artificial-intelligence-in-education/generative-artificial-intelligence-ai-in-education#understanding-generative-ai [Accessed: Feb 10, 2024]
  5. 5. Cornell University. CU committee report: Generative artificial intelligence for education and pedagogy. 2023. Available from: https://www.teaching.cornell.edu/generative-artificial-intelligence/cu-committee-report-generative-artificial-intelligence-education [Accessed: Feb 10, 2024]
  6. 6. UNESCO. Guidance for Generative AI in Education and Research. France: United Nations Educational, Scientific and Cultural Organization; 2023
  7. 7. Cao Y, Li S, Liu Y, Yan Z, Dai Y, Yu PS, et al. A comprehensive survey of AI-generated content (AIGC): A history of generative AI from GAN to ChatGPT. arXiv preprint arXiv:2303.04226. 2023
  8. 8. Wu J, Gan W, Chen Z, Wan S, Yu PS. Multimodal large language models: A survey. arXiv preprint arXiv:2311.13165. 2023
  9. 9. Zhou K, Yang J, Loy CC, Liu Z. Learning to prompt for vision-language models. International Journal of Computer Vision. 2022;130(9):2337-2348
  10. 10. Sun Q , Cui Y, Zhang X, Zhang F, Yu Q , Luo Z, et al. Generative Multimodal Models Are in-Context Learners. arXiv preprint arXiv:2312.13286. [Online] 2023
  11. 11. Team G, Anil R, Borgeaud S, Wu Y, Alayrac J-B, Yu J, et al. Gemini: A family of highly capable multimodal models. arXiv preprint arXiv:2312.11805. 2023
  12. 12. DFKI, MMB. Schule und ki - lehren und lernen mit künstlicher intelligenz. 2023. Available from: https://www.dfki.de/web/news/telekom-stiftung-veroeffentlicht-leitfaden-schule-und-ki [Accessed: Feb 15, 2024]
  13. 13. Hübsch T, Vogel-Adham E, Vogt A, Wilhelm-Weidner A. Sprachgewandt in die zukunft: Large language models im dienst der beruflichen weiterbildung. In: ein beitrag der digitalbegleitung im rahmen des innovationswettbewerbs invite. Berlin: VDI/VDE Innovation + Technik GmbH; 2024. p. 46 S
  14. Kasneci E, Sessler K, Küchemann S, Bannert M, Dementieva D, Fischer F, et al. ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences. 2023;103:1-12
  15. Holmes W, Bialik M, Fadel C. Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Boston, MA: The Center for Curriculum Redesign; 2019
  16. Han A, Cai Z. Design implications of generative AI systems for visual storytelling for young learners. In: IDC ’23: Proceedings of the 22nd Annual ACM Interaction Design and Children Conference. New York, NY, United States: Association for Computing Machinery; Jun 2023. pp. 470-474. DOI: 10.1145/3585088.3593867
  17. Jauhiainen JS, Guerra AG. Generative AI and ChatGPT in school children’s education: Evidence from a school lesson. Sustainability. 2023;15(18):1-22. Available from: https://www.mdpi.com/2071-1050/15/18/14025
  18. Cao C, Ding Z, Lee G-G, Jiao J, Lin J, Zhai X. Elucidating STEM concepts through generative AI: A multi-modal exploration of analogical reasoning. arXiv preprint arXiv:2308.10454. 2023
  19. Castillo-Segura P, Alario-Hoyos C, Kloos CD, Fernández Panadero C. Leveraging the potential of generative AI to accelerate systematic literature reviews: An example in the area of educational technology. In: 2023 World Engineering Education Forum – Global Engineering Deans Council (WEEF-GEDC). Monterrey, Mexico: IEEE; 2023. pp. 1-8. DOI: 10.1109/WEEF-GEDC59520.2023.10344098
  20. Jonsson M, Tholander J. Cracking the code: Co-coding with AI in creative programming education. In: Proceedings of the 14th Conference on Creativity and Cognition. New York, NY, United States: Association for Computing Machinery; Jun 2022. pp. 5-14. DOI: 10.1145/3527927.3532801
  21. World Economic Forum. Future of Jobs Report 2023: Up to a quarter of jobs expected to change in next five years. 2023. Available from: https://www.weforum.org/press/2023/04/future-of-jobs-report-2023-up-to-a-quarter-of-jobs-expected-to-change-in-next-five-years [Accessed: Feb 10, 2024]
  22. Wright JV. A new era for AI and Google Workspace. 2023. Available from: https://www.workspace.google.com/blog/product-announcements/generative-ai [Accessed: Feb 10, 2024]
  23. Spataro J. Introducing Microsoft 365 Copilot – your copilot for work. 2023. Available from: https://www.blogs.microsoft.com/blog/2023/03/16/introducing-microsoft-365-copilot-your-copilot-for-work/ [Accessed: Feb 10, 2024]
  24. GitHub Copilot. Available from: https://www.github.com/features/copilot [Accessed: Feb 15, 2024]
  25. Zhao WX, Zhou K, Li J, Tang T, Wang X, Hou Y, et al. A survey of large language models. arXiv preprint arXiv:2303.18223. 2023
  26. Zhang P, Boulos MNK. Generative AI in medicine and healthcare: Promises, opportunities and challenges. Future Internet. 2023;15(9):286. DOI: 10.3390/fi15090286
  27. Eysenbach G. The role of ChatGPT, generative language models, and artificial intelligence in medical education: A conversation with ChatGPT and a call for papers. JMIR Medical Education. 2023;9:1-13
  28. Pavlik J. Collaborating with ChatGPT: Considering the implications of generative artificial intelligence for journalism and media education. Journalism & Mass Communication Educator. 2023;78:84-93
  29. Buyser BD. Goblin Tools. 2024. Available from: https://goblin.tools [Accessed: Feb 12, 2024]
  30. Kinney EM, Vedora J, Stromer R. Computer-presented video models to teach generative spelling to a child with an autism spectrum disorder. Journal of Positive Behavior Interventions. 2003;5:22-29
  31. Engel C, Schmalfuß-Schwarz J, Gollasch D, Branig M, Dirks S, Weber G. Workshop on designing accessible extended reality: An opportunity for people with disabilities and disorders. In: Mensch und Computer 2023 – Workshopband. New York, NY, United States: Association for Computing Machinery; 2023
  32. KiwiTech. Applications of generative AI in augmented and virtual reality. 2023. Available from: https://www.medium.com/@KiwiTech/applications-of-generative-ai-in-augmented-and-virtual-reality-20cecec50886 [Accessed: Feb 10, 2024]
  33. Gollasch D, Branig M, Gerling K, Gulliksen J, Metatla O, Spiel K, et al. Designing technology for neurodivergent self-determination: Challenges and opportunities. In: Nocera JA, Lárusdóttir MK, Petrie H, Piccinno A, Winckler M, editors. Human-Computer Interaction – INTERACT 2023. Cham: Springer Nature Switzerland; 2023. pp. 621-626
  34. Gadiraju V, Kane SK, Dev S, Taylor AS, Wang D, Denton E, et al. “I wouldn’t say offensive but…”: Disability-centered perspectives on large language models. In: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. New York, NY, United States: Association for Computing Machinery; Jun 2023. pp. 205-216
  35. Atlas S. ChatGPT for Higher Education and Professional Development: A Guide to Conversational AI. 2023. Available from: https://www.digitalcommons.uri.edu/cba_facpubs/548 [Accessed: Feb 15, 2024]
  36. Abd-alrazaq AA, AlSaad R, Alhuwail D, Ahmed A, Healy P, Latifi S, et al. Large language models in medical education: Opportunities, challenges, and future directions. JMIR Medical Education. 2023;9
  37. Rüdian S, Pinkwart N. Auto-generated language learning online courses using generative AI models like ChatGPT. In: 21. Fachtagung Bildungstechnologien (DELFI). Bonn: Gesellschaft für Informatik e.V.; 2023. pp. 65-76
  38. Phung T, Pădurean V-A, Cambronero J, Gulwani S, Kohn T, Majumdar R, et al. Generative AI for programming education: Benchmarking ChatGPT, GPT-4, and human tutors. In: Proceedings of the 2023 ACM Conference on International Computing Education Research. Vol. 2. New York, NY, United States: Association for Computing Machinery; 2023
  39. Pinto G, Cardoso-Pereira I, Monteiro D, Lucena D, Souza A, Gama K. Large language models for education: Grading open-ended questions using ChatGPT. In: SBES ’23: Proceedings of the XXXVII Brazilian Symposium on Software Engineering. New York, NY, United States: Association for Computing Machinery; 2023. pp. 293-302. DOI: 10.1145/3613372.3614197
  40. Lagakis P, Demetriadis S, Psathas G. Automated grading in coding exercises using large language models. In: Auer ME, Tsiatsos T, editors. Smart Mobile Communication & Artificial Intelligence. IMCL 2023. Lecture Notes in Networks and Systems. Vol. 936. Cham: Springer; 2023. pp. 363-373. DOI: 10.1007/978-3-031-54327-2_37
  41. Agostini D, Picasso F. Large language models for sustainable assessment and feedback in higher education: Towards a pedagogical and technological framework. In: Proceedings of the First International Workshop on High-Performance Artificial Intelligence Systems in Education, Co-Located with the 22nd International Conference of the Italian Association for Artificial Intelligence (AIxIA 2023). CEUR Workshop Proceedings (CEUR-WS.org); 2023
  42. Rashid SF, Goertz L, Reichow I. Metadata for learning processes – Results of an international interview study. Preprints. 2024. DOI: 10.20944/preprints202402.0889.v1
  43. Schick T, Dwivedi-Yu J, Jiang Z, Petroni F, Lewis P, Izacard G, et al. PEER: A collaborative language model. arXiv preprint arXiv:2208.11663. 2022
  44. Papachristou M, Yang L, Hsu C-C. Leveraging large language models for collective decision-making. arXiv preprint arXiv:2311.04928. 2023
  45. Kaplan-Rakowski R, Grotewold K, Hartwick P, Papin K. Generative AI and teachers’ perspectives on its implementation in education. Journal of Interactive Learning Research. 2023;34(2):313-338
  46. Barrett A, Pack A. Not quite eye to AI: Student and teacher perspectives on the use of generative artificial intelligence in the writing process. International Journal of Educational Technology in Higher Education. 2023;20(1):59
  47. Chan CKY, Hu W. Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. arXiv preprint arXiv:2305.00290. 2023
  48. Lee AVY, Tan SC, Teo CL. Designs and practices using generative AI for sustainable student discourse and knowledge creation. Smart Learning Environments. 2023;10(1):59
  49. Rawte V, Sheth A, Das A. A survey of hallucination in large foundation models. arXiv preprint arXiv:2309.05922. 2023
  50. Ji Z, Lee N, Frieske R, Yu T, Su D, Xu Y, et al. Survey of hallucination in natural language generation. ACM Computing Surveys. 2023;55(12):1-38
  51. Sharples M. Towards social generative AI for education: Theory, practices and ethics. Learning: Research and Practice. 2023;9(2):159-167
  52. Head CB, Jasper P, McConnachie M, Raftree L, Higdon G. Large language model applications for evaluation: Opportunities and ethical implications. New Directions for Evaluation. 2023;2023(178-179):33-46
  53. Holmes W, Porayska-Pomsta K. The Ethics of Artificial Intelligence in Education: Practices, Challenges, and Debates. New York: Taylor & Francis; 2022. Available from: https://www.taylorfrancis.com/books/edit/10.4324/9780429329067/ethics-artificial-intelligence-education-wayne-holmes-ka%C5%9Bka-porayska-pomsta
  54. Mosqueira-Rey E, Hernández-Pereira E, Alonso-Ríos D, Bobes-Bascarán J, Fernández-Leal Á. Human-in-the-loop machine learning: A state of the art. Artificial Intelligence Review. 2023;56(4):3005-3054
  55. Amirizaniani M, Yao J, Lavergne A, Okada ES, Chadha A, Roosta T, et al. Developing a framework for auditing large language models using human-in-the-loop. arXiv preprint arXiv:2402.09346. 2024
  56. Faiz A, Kaneda S, Wang R, Osi R, Sharma P, Chen F, et al. LLMCarbon: Modeling the end-to-end carbon footprint of large language models. arXiv preprint arXiv:2309.14393. 2023
  57. Schick T, Schütze H. It’s not just size that matters: Small language models are also few-shot learners. arXiv preprint arXiv:2009.07118. 2020
  58. Kang M, Lee S, Baek J, Kawaguchi K, Hwang SJ. Knowledge-augmented reasoning distillation for small language models in knowledge-intensive tasks. Advances in Neural Information Processing Systems. 2024;36
  59. Mitra A, Del Corro L, Mahajan S, Codas A, Simoes C, Agarwal S, et al. Orca 2: Teaching small language models how to reason. arXiv preprint arXiv:2311.11045. 2023
  60. Ma S, Wang H, Ma L, Wang L, Wang W, Huang S, et al. The era of 1-bit LLMs: All large language models are in 1.58 bits. arXiv preprint arXiv:2402.17764. 2024
