Fostering preservice science teachers’ AI-TPACK competence and reflections through an AI-focused pedagogical learning course

Ronilo Antonio

College of Professional Teacher Education, Bulacan State University (Philippines)

Received July 2025

Accepted October 2025

Abstract

The rapid emergence of artificial intelligence (AI) in education underscores the imperative to equip future educators with the competencies needed to meaningfully integrate AI into instruction. Hence, this study aimed to effectively cultivate Technological Pedagogical Content Knowledge for AI (AI-TPACK) competence among preservice science teachers (PSSTs) through the implementation of an AI-focused pedagogical learning course. Anchored on social constructivist principles, the course integrated structured instruction, hands-on activities, collaborative lesson design, and reflective practices to enhance both PSSTs’ understanding and practical application of AI in science education. A pre-experimental one-group pretest-posttest design was employed with 84 PSSTs enrolled in a state university in the Philippines. Quantitative data were collected using an adapted AI-TPACK scale, while qualitative insights were gathered through structured interviews. Non-parametric analysis using the Wilcoxon Signed-Ranks Test revealed statistically significant improvements across all dimensions of AI-TPACK, namely technological knowledge, pedagogical applications, ethical considerations, and integrated competence (z = -6.900, p < .001, r = 0.76), indicating a large effect size. Thematic analysis of PSSTs’ reflections identified key affordances such as enhanced pedagogical design, improved AI literacy, and increased student engagement, alongside constraints related to tool limitations, ethical dilemmas, and contextual barriers. The findings highlight the potential of strategically designed teacher education interventions in fostering AI-TPACK competence. The study contributes empirical evidence and pedagogical insights for advancing AI integration in pre- and in-service teacher training, reinforcing the need for intentional design, policy alignment, and continued research on equitable and context-sensitive AI adoption in science education.

 

Keywords – Artificial intelligence, Technological pedagogical content knowledge, Preservice science teachers, Science education, Educational technology, Course-based intervention.

To cite this article:

Antonio, R. (2025). Fostering preservice science teachers’ AI-TPACK competence and reflections through an AI-focused pedagogical learning course. Journal of Technology and Science Education, 15(3), 784-809. https://doi.org/10.3926/jotse.3693

 

----------

1. Introduction

As artificial intelligence (AI) continues to reshape the global education landscape, its integration into teaching and learning processes has emerged as a critical frontier in educational innovation (Luckin, Holmes, Griffiths & Forcier, 2016; Yadav, 2024; UNESCO, 2021). Applications such as adaptive tutoring systems, automated assessment tools, and generative content platforms offer opportunities to personalize learning, support inquiry, and provide data-driven insights for educators (Chiu, Chai, Liang & Tsai, 2022; Xie, Chu, Hwang & Wang, 2021). In science education, these tools are particularly valuable for fostering problem-solving, critical thinking, and inquiry-based practices, which are central to 21st-century scientific literacy (Almasri, 2024; Mafarja, Zulnaidi & Mohamad, 2025).

Amid these developments, teacher preparation plays a pivotal role in ensuring that future educators are not only technologically competent but also pedagogically and ethically equipped to integrate AI into their practice (Celik, 2023; Gatlin, 2023; Meylani, 2024). However, research highlights that many preservice teachers still lack sufficient pedagogical strategies and ethical awareness to make informed decisions about AI use in classrooms (Ayanwale, Adelana, Molefi, Adeeko & Ishola, 2024; Black, George, Eguchi, Dempsey, Langran, Fraga et al., 2024; Harakchiyska & Vassilev, 2024). Preparing preservice science teachers (PSSTs), therefore, requires moving beyond basic digital literacy toward developing AI-TPACK competence, which integrates content, pedagogy, technology, and ethics (Celik, 2023; Mishra & Koehler, 2006).

The present study is situated within efforts to strengthen teacher education programs in the Philippines and the Global South more broadly, where challenges such as digital inequities and contextual barriers persist (Kuo-Hsun-Ma, 2007; UNESCO, 2021; Schopp, Schelenz, Heesen & Pawelec, 2019). By examining preservice teachers’ engagement with an AI-focused pedagogical learning course, this research contributes to the growing discourse on how higher education can prepare future-ready educators for the age of AI.

2. Literature Review

2.1. The Evolving Role of Artificial Intelligence in Education

Artificial intelligence (AI) is increasingly transforming educational landscapes worldwide, reshaping how teaching is delivered, how learners engage with content, and how data is used to inform instruction (Antonio & Sison, 2026; Prajapati, 2024; Wang & Huang, 2025). AI technologies such as intelligent tutoring systems, automated essay scoring, predictive analytics, and generative content tools are becoming integrated into educational platforms, offering the potential to support personalized learning, formative assessment, and differentiated instruction (Holmes, Bialik & Fadel, 2022; Luckin et al., 2016). In science education in particular, AI supports simulations, inquiry-driven experimentation, and data visualization, which are critical components of promoting scientific literacy, critical thinking, and problem-solving (Almasri, 2024; Chiu et al., 2022; Mafarja et al., 2025; Xie et al., 2021).

Despite these innovations, the educational application of AI remains largely at the periphery in many national contexts. Zawacki-Richter, Marín, Bond and Gouverneur (2019), in their systematic review, noted that while AI technologies are maturing rapidly, their integration in actual classroom practice remains underdeveloped, especially in preservice teacher education. Most AI in education (AIED) initiatives have been developed for use on students rather than with teachers, often excluding teacher agency and decision‑making from the design and deployment processes (Holmes et al., 2022; Lepage & Collin, 2024).

Moreover, in the Global South, AI is still often viewed as a technological abstraction rather than a pedagogical tool. This is particularly true in the Philippines, where, although national frameworks emphasize digital transformation and STEM innovation, the actual AI-readiness of schools and teacher education institutions remains fragmented (Estrellado & Miranda, 2023; Lacuna, 2025; UNESCO, 2021). As AI becomes increasingly embedded in learning environments, it is critical that teachers are not only equipped to use these tools but are also prepared to make pedagogically sound and ethically informed decisions about their implementation.

2.2. Teacher Preparedness and Competence for AI Integration

The capacity of teachers to meaningfully integrate AI into instruction is increasingly acknowledged as a critical determinant of successful AI adoption in education (Attach, Etim, Inyang, Oguzie & Ekong, 2024). Yet, empirical studies consistently reveal that many teachers, particularly preservice teachers, lack the foundational knowledge, pedagogical reasoning, and evaluative skills necessary for effective and responsible AI integration (Ayanwale et al., 2024; Black et al., 2024; Harakchiyska & Vassilev, 2024; Nuangchalerm, Prachagool, Saregar & Yunus, 2024; Özer-Altınkaya & Yetkin, 2025). This gap is often rooted in teacher preparation programs that either marginally address or altogether overlook the role of AI in contemporary pedagogy, leaving future educators underprepared for the rapidly evolving digital learning landscape (Harakchiyska & Vassilev, 2024; Weiner, Lake & Rosner, 2024).

Compounding this issue is the observed tendency among some preservice teachers to exhibit overconfidence in the capabilities of AI. Several studies have reported a growing reliance on AI‑generated content without adequate critical evaluation, leading to surface-level integration practices (Aga, Sawyer & Wolfe, 2024; Sawyer, 2024). For example, Sawyer (2024) found that many preservice teachers were inclined to merely adjust the aesthetic aspects of AI-generated materials, such as layout or visuals, rather than interrogate the accuracy, pedagogical relevance, or alignment with curricular goals. This lack of discernment risks undermining the quality of instruction and perpetuates uncritical adoption of AI tools.

In response, scholars emphasize the pressing need for teacher education programs and professional development initiatives to address not only the operational use of AI tools but also their pedagogical appropriateness and ethical implications (Aga et al., 2024; Estrellado & Miranda, 2023; Lacuna, 2025). Training must go beyond technical proficiency to cultivate preservice teachers’ critical engagement with AI systems, including awareness of algorithmic bias, data privacy, and the limitations of machine-generated content (Gatlin, 2023). As AI continues to shape the educational environment, preparing future teachers to be both discerning users and reflective practitioners is of critical importance.

2.3. Conceptualizing AI-TPACK: Extending TPACK for Ethical and Integrated AI Use

The TPACK framework has long served as a foundation for designing teacher preparation programs that emphasize the integration of technology in pedagogically meaningful ways (Mishra & Koehler, 2006). However, the emergence of AI in education has prompted scholars to re-examine and extend TPACK to accommodate the distinct complexities posed by AI technologies. Unlike conventional edtech tools, AI systems require teachers to grapple with questions of algorithmic bias, transparency, data ethics, and learner agency (Floridi & Cowls, 2022; Singh & Thakur, 2024; Suna, Suchismita & Das, 2025; Zawacki‑Richter et al., 2019).

Celik (2023) proposed the Intelligent-TPACK framework, which incorporates five interconnected dimensions: Intelligent Technological Knowledge (Intelligent-TK), Technological Pedagogical Knowledge (Intelligent-TPK), Technological Content Knowledge (Intelligent-TCK), integrated TPACK, and AI Ethics. This extended model accounts for the evolving responsibilities of teachers, not only as instructional designers but also as ethical gatekeepers. It emphasizes critical AI literacy, the ability to evaluate the fairness and inclusiveness of AI systems, and the competence to maintain pedagogical control over AI-generated content. However, empirical validations of this model in actual teacher education programs are still in their infancy. There is a need for intervention studies that operationalize this framework and explore how AI-TPACK can be meaningfully cultivated among future educators.

2.4. Course Design Features Toward AI-TPACK Development

Teacher education programs aiming to develop AI-TPACK competence must move beyond theoretical instruction to provide authentic, practice-based experiences. Scholars agree that immersive, scaffolded learning environments that involve exploration, collaboration, and reflection are most effective in promoting technology integration competence (Alayyar, 2011; Antonio, 2025; Antonio & Prudente, 2025; Oktaviani & Utami, 2024). In the context of AI, this includes opportunities to evaluate and apply AI tools within lesson planning, simulate instructional scenarios, critique peer outputs, and reflect on ethical and pedagogical implications.

Several studies suggest that AI competence is most effectively developed when preservice teachers engage in design-based activities, such as creating AI-integrated lesson plans and adapting existing AI tools to specific learning contexts (Antonio, 2025; Antonio & Prudente, 2025; O’Neill, 2025; Park, 2024). Furthermore, integrating frameworks like SAMR (Substitution, Augmentation, Modification, Redefinition) can help scaffold teachers’ understanding of AI’s transformative potential in instruction (Puentedura, 2014). Despite these insights, few interventions have explicitly focused on developing AI-TPACK through integrated pedagogical courses.

2.5. Research Gap

Despite growing discourse on AI in education, substantial research gaps remain. First, the literature lacks rigorous empirical studies that assess the effectiveness of pedagogical interventions specifically designed to cultivate AI-TPACK among preservice teachers (Celik, 2023; Antonio, 2025). Second, most existing research originates from high-resource contexts, with limited representation from the Global South, where technological, institutional, and socio-cultural variables significantly shape AI adoption (Estrellado & Miranda, 2023; Lacuna, 2025; UNESCO, 2021). Moreover, there is insufficient attention to preservice teachers’ reflective experiences, specifically how they perceive the value, challenges, and ethical nuances of integrating AI into instruction (Ayanwale et al., 2024; Black et al., 2024). Understanding these perspectives is critical to designing responsive and inclusive teacher education curricula that promote agency, confidence, and professional identity.

The present study addresses these gaps by designing and implementing an AI-focused pedagogical learning course grounded in the AI-TPACK framework. Conducted in the Philippine context, it investigates how such a course enhances PSSTs’ competence across cognitive, pedagogical, and ethical domains, while also exploring the affordances and encumbrances they encounter. In doing so, the study contributes to the empirical foundation for AI integration in teacher education, particularly in underrepresented regions.

3. Research Questions

1. To what extent does participation in an AI-focused pedagogical learning course enhance PSSTs’ perceived AI-TPACK competence?

2. What affordances and encumbrances do PSSTs encounter during the implementation of the AI-focused pedagogical learning course?

4. Theoretical Framework

To address the gaps, the present study is theoretically anchored on three complementary frameworks, namely TPACK, AI-TPACK, and social constructivism, which collectively shaped the design of the AI-focused pedagogical course and informed both data collection and analysis. TPACK (Mishra & Koehler, 2006) serves as the foundational model, emphasizing the dynamic intersection of content, pedagogy, and technology in teaching. It establishes the baseline expectation that preservice science teachers must integrate technological tools in ways that preserve disciplinary integrity and pedagogical soundness. Building on this, AI-TPACK (Celik, 2023) extends the original framework by adding the critical dimension of AI ethics, foregrounding issues of fairness, transparency, and inclusivity (Floridi & Cowls, 2022). This lens reframed PSSTs not merely as tool users but as ethical decision-makers who must critically evaluate the implications of AI for instruction and equity. Finally, social constructivism (Vygotsky, 1978; Lave & Wenger, 1991) underpins the collaborative and reflective design of the course. By situating learning within authentic, dialogical, and socially mediated contexts, it emphasizes that AI-TPACK competence emerges not in isolation but through scaffolded interactions, peer critique, and shared construction of lesson exemplars. Taken together, these frameworks justified the use of a design-based pedagogical intervention: TPACK provided the structural foundation, AI-TPACK highlighted emerging ethical and technological complexities, and social constructivism offered the pedagogical rationale for embedding collaboration, inquiry, and reflection. This theoretical grounding ensured that the course was not merely technological training but a holistic, ethically aware, and socially situated preparation for AI-enhanced science teaching.

5. Methodology

5.1. Research Design

This study employed a pre-experimental research design, specifically a one-group pretest–posttest design, to examine the development of PSSTs’ AI-TPACK perceived competence after participating in an AI‑focused pedagogical learning course. Pre-experimental designs are considered preliminary or exploratory because they involve limited control over extraneous variables, often relying on a single group without randomization or a comparison group (Cohen, Manion & Morrison, 2018). The one-group pretest–posttest design, in particular, measures the same participants before and after an intervention to determine whether significant changes have occurred, making it suitable for classroom-based interventions where random assignment may not be feasible (Emerson, 2016).

Both quantitative and qualitative data were utilized to provide a more comprehensive understanding of the outcomes. The quantitative component involved the administration of an adapted AI-TPACK questionnaire before and after the intervention to measure changes in PSSTs’ perceived competence (Celik, 2023). Meanwhile, the qualitative component drew on data from structured interviews conducted post-intervention, which aimed to capture the PSSTs’ reflective insights and perceptions of their learning experience.

5.2. Research Locale and Participants

The study was conducted at a state university in Central Luzon, Philippines. Participants were 84 PSSTs enrolled in the Bachelor of Secondary Education, major in Science program. All were officially enrolled in the course Technology for Teaching and Learning 2, a required component of the Professional Education curriculum. The intervention, an AI-focused pedagogical learning course, was integrated into their regular academic activities during the second semester of Academic Year 2024-2025. Participants were purposively selected, as the course content was directly aligned with the objectives of the study, specifically the examination of PSSTs’ AI-TPACK development. The sample size (n = 84) exceeds the minimum threshold typically recommended for pretest–posttest designs, thereby ensuring statistical validity (Creswell & Creswell, 2017). It also provides sufficient power to detect medium-to-large effect sizes (Cohen, 1988), making it appropriate for the study’s objectives.
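To make the power claim concrete, the sketch below estimates the number of pairs needed to detect a medium standardized effect in a paired pretest-posttest comparison. It is an approximation based on the paired t-test model, a common stand-in when planning a Wilcoxon analysis; the statsmodels call and the parameter values (d = 0.5, alpha = .05, power = .80) are illustrative assumptions, not the study’s actual power computation.

```python
from statsmodels.stats.power import TTestPower

# Required number of pairs to detect a medium effect (Cohen's d = 0.5)
# at alpha = .05 with 80% power, using the paired/one-sample t-test model.
required_n = TTestPower().solve_power(effect_size=0.5, alpha=0.05,
                                      power=0.80, alternative="two-sided")
print(f"Required pairs: {required_n:.1f}")  # roughly 34, well below n = 84
```

Under these assumptions, the study’s sample of 84 comfortably exceeds the required number of pairs.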

5.3. Research Instrument

5.3.1. AI-TPACK Survey Questionnaire

This study utilized the AI-TPACK Survey Questionnaire, adapted from the Intelligent-TPACK Scale developed by Celik (2023) to assess PSSTs’ knowledge for the instructional use of AI, anchored on the TPACK framework and extended to include ethical considerations. The instrument comprises 27 items distributed across five dimensions: (1) AI-TK; (2) AI-TPK; (3) AI-TCK; (4) AI-TPACK; and (5) AI-Ethics. The AI-TK subscale measures knowledge of how to interact with AI-based tools and utilize their core functionalities. The AI-TPK dimension addresses pedagogical knowledge of AI-based tools, including their affordances for providing personalized feedback and monitoring student learning. AI-TCK focuses on content-specific knowledge of AI tools relevant to science education. At the core of the framework, AI-TPACK measures the integrated knowledge needed to select and apply suitable AI tools (e.g., intelligent tutoring systems) in alignment with pedagogical strategies and content goals. Finally, the AI-Ethics component evaluates PSSTs’ capacity to make informed decisions about AI tool usage based on principles of transparency, fairness, accountability, and inclusiveness.

Responses were collected using a four-point Likert scale with the following options: Strongly Agree, Agree, Disagree, and Strongly Disagree. In terms of reliability, the original validation study reported high internal consistency across all subscales. The Cronbach’s alpha coefficients were 0.856 for AI-TK, 0.858 for AI‑TPK, 0.868 for AI-TCK, 0.895 for AI-TPACK, and 0.864 for AI-Ethics, indicating that all dimensions of the scale demonstrated strong internal reliability. Reliability analysis was likewise conducted using the sample in the present study, which yielded a Cronbach’s alpha of 0.889. This value indicates high internal consistency, confirming that the instrument is reliable for assessing PSSTs’ perceived AI-TPACK competence; a computational sketch of this reliability index is provided after Table 1. For clarity and transparency, all 27 items of the instrument are presented in Table 1, organized according to their dimensions.

AI-Technological Knowledge (AI-TK)

1. I am familiar with AI-based tools and their features.
2. I have sufficient knowledge to effectively use AI tools.
3. I can initiate tasks using AI-based tools via commands.
4. I can perform specific tasks using AI-based tools.
5. I am confident in using AI-based tools for everyday tasks.

AI-Technological Pedagogical Knowledge (AI-TPK)

6. I can select AI-based tools that support student motivation.
7. I can use AI-generated alerts or notifications to guide student learning.
8. I can interpret feedback from AI tools to provide timely instructional responses.
9. I can use AI-based tools to monitor student learning.
10. I can select AI-based tools that help students apply their knowledge.
11. I can evaluate feedback from AI-based tools to improve teaching and learning.
12. I understand how AI-based tools can enhance teaching and learning in my field.

AI-Technological Content Knowledge (AI-TCK)

13. I can effectively use subject-specific AI tools (e.g., digital simulations or generative AI).
14. I can use AI-based tools to deepen my understanding of subject content.
15. I am aware of AI-based tools used by professionals in my teaching field.
16. I can use AI-based tools to find relevant educational resources in my subject area.

AI-Technological Pedagogical Content Knowledge (AI-TPACK)

17. I can select appropriate AI tools for monitoring and assessing students’ learning.
18. I feel confident in taking a leadership role in integrating AI tools into subject teaching.
19. I can design lessons that combine content, pedagogy, and AI-based technologies.
20. I can integrate AI tools into lessons using a variety of teaching strategies.
21. I can apply AI tools to deliver real-time feedback during instruction.
22. I can use AI tools to support personalized learning.
23. I can use AI-based tools to provide adaptive feedback in my subject area.

AI-Ethics

24. I can identify the developers or responsible parties behind the design and decisions of AI tools.
25. I understand the reasoning behind decisions made by AI tools.
26. I am aware of how to assess the fairness of AI tools toward all students.
27. I can evaluate whether AI tools consider individual differences such as race, gender, or learning needs.

Table 1. Dimensions and Statements of the Perceived AI-TPACK Competency Survey Instrument
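As a transparency aid, the sketch below shows one conventional way to compute Cronbach’s alpha from an item-response matrix. It is a minimal illustration in Python, assuming responses are coded numerically on the 1-4 Likert metric; the simulated data, seed, and variable names are hypothetical and do not reproduce the study’s actual dataset or analysis scripts.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of items (27 here)
    item_variances = items.var(axis=0, ddof=1)      # sample variance per item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical usage: 84 respondents answering 27 four-point Likert items,
# simulated so that items share a common latent trait (hence a high alpha).
rng = np.random.default_rng(42)
latent = rng.normal(3.0, 0.5, size=(84, 1))
responses = np.clip(np.rint(latent + rng.normal(0, 0.5, (84, 27))), 1, 4)
print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
```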

5.3.2. Structured Interviews

Following the implementation of the course, PSSTs were invited to participate in structured interviews administered through Google Forms, which served as the platform for data collection. Although the interviews followed a structured format, participants were encouraged to respond openly and elaborate on their insights in their own words. The prompts focused on the perceived affordances and challenges of the course, particularly in relation to the development of AI-integrated lesson exemplars and the enhancement of AI-TPACK competence. To ensure content validity and relevance, the interview protocol was reviewed and validated by two experts in science education and instructional technology. The interview consisted of four open-ended questions: (1) whether participants believed their AI-TPACK competency had improved as a result of the course and why; (2) which part of the course helped them most in understanding how AI tools can support teaching and learning; (3) what challenges they encountered in understanding or applying AI-based tools during the course, especially in the development of lesson exemplars; and (4) how they addressed these challenges. Responses were analyzed using Braun and Clarke’s (2006) six-phase framework for thematic analysis and were evaluated based on clarity and relevance to the AI-TPACK dimensions and course objectives. This analytic process complemented the quantitative results and provided deeper insight into the participants’ reflections on the AI-focused pedagogical learning course.

5.4. Data Gathering Procedures

5.4.1. Pre-Implementation

Prior to the implementation of the intervention, preparatory steps were undertaken to ensure the relevance and rigor of the research process. The course Technology for Teaching and Learning II, which served as the instructional platform for the study, was contextualized to focus on AI integration in science education. The course content was reviewed by science education experts and refined to align with the objectives of developing PSSTs’ AI-TPACK competence.

To uphold ethical standards, informed consent was obtained from all PSSTs, ensuring their voluntary participation and understanding of the study’s purpose, procedures, and confidentiality measures. Once consent was secured, the pre-administration of the research instrument, the AI-TPACK survey questionnaire, was administered to assess the PSSTs’ baseline knowledge and perceptions prior to their engagement in the course.

5.4.2. Implementation Phase

The AI-focused pedagogical learning course was implemented over a period of 14 weeks through a combination of in-person and online sessions, with each meeting lasting approximately three hours. The course was delivered to PSSTs as part of their Technology for Teaching and Learning II class and was designed to balance theoretical foundations with practical applications of AI in science education.

A major goal of the course was to enable PSSTs to develop AI-integrated lesson exemplars that demonstrate effective instructional design aligned with the TPACK framework. This culminating activity served as the application of all key learnings acquired throughout the course. Prior to the implementation phase, PSSTs were grouped by the instructor into teams of 5 to 6 members, and each group selected a science topic of their choice from the Revised K to 12 Curriculum to serve as the basis for their lesson exemplar.

Over the 14-week period, the instructional content covered eight core topics: (1) The TPACK Framework, (2) The SAMR Model, (3) Integrating AI Tools in Science Education, (4) The Community of Inquiry (CoI) Framework, (5) Inquiry-Based Learning and Educational Technology, (6) Universal Design for Learning (UDL), (7) The Revised K to 12 Science Curriculum, and (8) Developing Science Lesson Exemplars. Topics were strategically sequenced to build foundational knowledge before engaging PSSTs in more advanced integration and design tasks.

The course design was grounded in four core pedagogical principles: instructional, experiential, collaborative, and reflective, as proposed by Antonio and Prudente (2025). Instructionally, the course provided clear learning objectives and delivered key concepts through interactive lectures, scenario-based analysis, video and case studies, and structured discussions. To support experiential learning, PSSTs engaged in hands-on activities using AI tools and digital platforms, allowing them to explore real-world applications in science instruction (Figure 1). A collaborative learning environment was cultivated through workshops, group tasks, peer feedback sessions, and mini symposia. These activities emphasized co-construction of knowledge, particularly during the development of lesson exemplars, which were aligned with both the Revised K to 12 Curriculum and the core ideas of AI-TPACK integration. The culminating activity allowed PSSTs to present their AI-integrated lesson exemplars to their peers and instructor, and to engage in constructive critique sessions. This provided an opportunity for peer and instructor feedback, promoting deeper understanding, professional dialogue, and refinement of instructional practices. Finally, reflective practice was embedded throughout the course. PSSTs were regularly encouraged to share insights and articulate their evolving understanding of AI integration in teaching, which helped deepen their learning by connecting theory with personal and professional experience.

 
 

Figure 1. Workshop Demonstration Activities on AI Tools in Science Teaching (a-d)

5.4.3. Post-Implementation Phase

After the completion of the 14-week AI-focused pedagogical learning course, data were gathered to assess the impact of the intervention on PSSTs’ AI-TPACK development. During this post‑implementation phase, the AI-TPACK survey questionnaire was re-administered to all participants to measure changes in their perceived competence in using AI tools for instructional purposes within the TPACK framework.

In addition to the quantitative data collection, structured interviews were conducted with selected PSSTs to gain deeper insights into their experiences throughout the course. The interviews explored their perceptions of the course content, the affordances and encumbrances they encountered during the development of AI-integrated lesson exemplars, and their reflections on the applicability of AI in science education. These interviews provided valuable qualitative data to contextualize and complement the results of the post-course assessment.

Figure 2 illustrates the three-phase intervention procedure of the AI-focused pedagogical learning course, comprising pre-implementation, implementation, and post-implementation activities.

 

Figure 2. Overview of the intervention procedure for the AI-focused pedagogical learning course

5.5. Data Analysis

The study employed both quantitative and qualitative data analysis procedures to examine the effects of the AI-focused pedagogical learning course on PSSTs’ perceived AI-TPACK competence and to explore their experiences and reflections throughout the intervention. For the quantitative analysis, both descriptive and inferential statistics were used. Descriptive statistics summarized the pretest and posttest scores across all dimensions of AI-TPACK competence, while inferential statistics were conducted to determine the significance of observed changes. To assess the suitability of parametric statistical procedures, normality tests were performed on pretest and posttest scores using both the Kolmogorov‑Smirnov and Shapiro-Wilk tests. The results indicated that the assumption of normality was violated for all variables. Specifically, all Shapiro-Wilk test p-values were less than .05, ranging from p = .000 to p = .0009, indicating statistically significant deviations from a normal distribution. Similar findings were observed in the Kolmogorov-Smirnov test, where all variables yielded p-values below .01. This non-normality was consistent across all subdimensions (TK, TPK, TCK, AI-TPACK, and Ethics), both before and after the intervention. Notably, posttest scores for TCK (Shapiro-Wilk W = .751, p < .001) and TK (Shapiro-Wilk W = .872, p < .001) showed the strongest deviations from normality. Given these results, non-parametric alternatives were deemed more appropriate for the analysis.
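For illustration, the sketch below runs the same two screening tests in Python with SciPy. The data are simulated stand-ins for one posttest subscale, since the raw scores are not reproduced here; note also that SPSS applies a Lilliefors correction to the Kolmogorov-Smirnov test, whereas the plain SciPy call below estimates the normal parameters from the sample without that correction.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Simulated posttest TCK scores for 84 PSSTs on the 1-4 Likert metric;
# real scores would be loaded from the survey data instead.
posttest_tck = np.clip(rng.normal(3.70, 0.38, size=84), 1, 4)

w_stat, p_sw = stats.shapiro(posttest_tck)  # Shapiro-Wilk test
d_stat, p_ks = stats.kstest(                # Kolmogorov-Smirnov test vs. a
    posttest_tck, "norm",                   # normal with sample mean and SD
    args=(posttest_tck.mean(), posttest_tck.std(ddof=1)),
)

print(f"Shapiro-Wilk W = {w_stat:.3f}, p = {p_sw:.4f}")
print(f"Kolmogorov-Smirnov D = {d_stat:.3f}, p = {p_ks:.4f}")
```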

The Wilcoxon Signed-Ranks Test was thus employed to evaluate the statistical significance of changes in AI-TPACK competence. To complement significance testing, the effect size (r) was calculated for each domain and interpreted using Cohen’s (1988) criteria, where r = 0.10 to 0.29 indicates a small effect, r = 0.30 to 0.49 a medium effect, and r ≥ 0.50 a large effect. The use of rank-based methods ensured the robustness of the findings despite the non-normal distribution of the data, and the consistency of non‑normality across dimensions further justified the analytic approach.
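A minimal sketch of this procedure is shown below, assuming the effect size is computed as r = z / sqrt(N) with N taken as the number of pairs entering the ranking, an assumption consistent with the effect-size magnitudes later reported in Table 2. The simulated scores, seed, and function name are hypothetical; only the statistical procedure mirrors the one described above.

```python
import numpy as np
from scipy import stats

def wilcoxon_with_effect_size(pre, post):
    """Paired Wilcoxon signed-ranks test with effect size r = z / sqrt(N)."""
    diffs = np.asarray(post) - np.asarray(pre)
    n = np.count_nonzero(diffs)  # zero differences are dropped from the ranking
    res = stats.wilcoxon(post, pre, zero_method="wilcox",
                         correction=False, method="approx")
    # Recover |z| from the two-sided p-value of the normal approximation;
    # SPSS reports this z with a negative sign.
    z = stats.norm.isf(res.pvalue / 2)
    return res.statistic, z, res.pvalue, z / np.sqrt(n)

# Hypothetical usage with simulated pretest/posttest composite scores (n = 84)
rng = np.random.default_rng(3)
pre = rng.normal(3.12, 0.35, size=84)
post = pre + rng.normal(0.50, 0.25, size=84)  # simulated average gain of 0.50
W, z, p, r = wilcoxon_with_effect_size(pre, post)
print(f"W = {W:.1f}, z = {z:.3f}, p = {p:.2g}, r = {r:.2f}")
```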

For the qualitative data, the study employed the thematic analysis protocol outlined by Braun and Clarke (2006). The structured interview responses were analyzed using a six-phase process: familiarization with the data, generation of initial codes, searching for themes, reviewing themes, defining and naming themes, and producing the final report. This approach facilitated the systematic identification of themes that captured PSSTs’ insights into the course’s affordances and encumbrances, particularly in relation to their development of AI-integrated lesson exemplars and their engagement with instructional technologies.

5.6. Ethical Procedures

All ethical procedures in this study adhered to institutional and national standards for educational research. Prior to data collection, all participants provided informed consent after being fully briefed on the study’s objectives, procedures, and their rights, including voluntary participation and the option to withdraw without penalty. Confidentiality and anonymity were strictly maintained by collecting data without personal identifiers, storing them securely, and reporting results in aggregate form. To minimize risks, participation was separated from course grading, and only de-identified data were analyzed. Additionally, no sensitive or personally identifiable information was shared with AI platforms, and the use of AI tools during the study was confined to instructional activities. All data will be retained securely for five years before permanent deletion.

6. Results

6.1. Extent of Enhancement in Preservice Science Teachers’ Perceived AI-TPACK Competence

The following sections present the results on the extent to which participation in the AI-focused pedagogical learning course enhanced PSSTs’ perceived AI-TPACK competence.

As can be seen in Figure 3, the analysis of PSSTs’ AI-technological knowledge (TK) before and after completing the pedagogical learning course revealed substantial gains in their self-reported competencies related to AI-based tools. The overall weighted mean increased from 3.21 (SD = 0.41) in the pretest to 3.66 (SD = 0.33) in the posttest, indicating a positive shift in their perceived ability to interact with and apply AI technologies. Notably, the most significant improvements were observed in their familiarity with AI tools and their technical features (Mpre = 3.18, SD = 0.56; Mpost = 3.82, SD = 0.39), followed by enhanced knowledge of how to effectively use such tools (Mpre = 3.08, SD = 0.56; Mpost = 3.61, SD = 0.52), and increased capacity to perform specific tasks using AI (Mpre = 3.29, SD = 0.50; Mpost = 3.76, SD = 0.43). These findings suggest that the course effectively addressed key foundational aspects of AI literacy, particularly by providing exposure to diverse AI tools, modeling authentic use cases, and offering structured opportunities for skill development. Moreover, improvements in initiating AI‑driven tasks using text or voice commands (Mpre = 3.17, SD = 0.58; Mpost = 3.52, SD = 0.59) and in building confidence in using AI tools for everyday tasks (Mpre = 3.33, SD = 0.52; Mpost = 3.60, SD = 0.52) demonstrate the course’s impact in supporting both technical competence and self-efficacy. The upward trends across all items align with the goal of preparing future educators to meaningfully integrate AI in educational contexts, emphasizing the importance of foundational TK as a prerequisite for higher-order pedagogical applications.

 

Figure 3. Changes in Preservice Science Teachers’ Perceived AI-TK

The results further demonstrate substantial improvements in the PSSTs’ AI-technological pedagogical knowledge (TPK) following their participation in the pedagogical education course (Figure 4). The weighted mean score increased from 3.10 (SD = 0.43) in the pretest to 3.65 (SD = 0.33) in the posttest, indicating that the course significantly contributed to enhancing their perceived ability to pedagogically integrate AI-based tools into instruction. The highest gains were evident in their understanding of how AI tools can enhance teaching and learning (Mpre = 3.25, SD = 0.56; Mpost = 3.86, SD = 0.35) and in their ability to select tools that support student motivation (Mpre = 3.06, SD = 0.66; Mpost = 3.74, SD = 0.44) and facilitate knowledge application (Mpre = 3.15, SD = 0.59; Mpost = 3.71, SD = 0.45). These results suggest that the course effectively emphasized the instructional potential of AI, particularly in fostering learner engagement and deeper cognitive processing. Increases were also noted in PSSTs’ capability to monitor student learning through AI tools (Mpre = 3.00, SD = 0.68; Mpost = 3.68, SD = 0.49), as well as their ability to interpret AI-generated feedback to inform instruction (Mpre = 3.02, SD = 0.62; Mpost = 3.51, SD = 0.53). The lowest pretest score was recorded for the use of AI-generated alerts or notifications (Mpre = 2.94, SD = 0.65), which rose to 3.43 (SD = 0.65) after the course, indicating a marked increase in awareness and use of AI-supported real-time instructional guidance. These improvements highlight the course’s success in equipping preservice teachers with the pedagogical reasoning necessary to select, evaluate, and apply AI-based tools in ways that are instructionally sound and contextually responsive. The findings suggest that intentional exposure to pedagogically relevant AI tools, combined with reflective tasks and authentic teaching scenarios, can significantly enhance preservice teachers’ TPK, which is an essential dimension in ensuring meaningful AI integration in education.

 

Figure 4. Changes in Preservice Science Teachers’ Perceived AI-TPK

Figure 5 shows marked improvements in PSSTs’ perceived ability to use AI tools to support subject‑specific teaching and learning. The weighted mean increased from 3.18 (SD = 0.49) in the pretest to 3.70 (SD = 0.38) in the posttest, indicating a substantial positive shift in their technological content integration competence. Among the four indicators, the most significant gain was recorded in the ability to effectively use subject-specific AI tools such as digital simulations or generative AI in science contexts (Mpre = 2.95, SD = 0.67; Mpost = 3.71, SD = 0.45). This notable increase reflects the impact of the course in introducing preservice teachers to discipline-relevant AI applications, highlighting the importance of contextualized tool integration in science education. In addition, PSSTs reported enhanced skills in using AI to locate relevant educational resources (Mpre = 3.35, SD = 0.67; Mpost = 3.71, SD = 0.45), increased awareness of AI tools utilized by professionals in their teaching field (Mpre = 3.15, SD = 0.61; Mpost = 3.69, SD = 0.49), and improved ability to use AI to deepen their understanding of subject content (Mpre = 3.25, SD = 0.60; Mpost = 3.69, SD = 0.47). These findings suggest that the pedagogical course successfully linked AI-based technologies to domain-specific teaching goals and content mastery. By situating AI within the disciplinary context of science, the course enabled preservice teachers to not only build their technological fluency but also align their tool use with content-specific instructional strategies. Such alignment is crucial for fostering meaningful integration of AI in science classrooms and for preparing future educators to leverage emerging technologies in ways that reinforce both conceptual understanding and professional relevance.

 

Figure 5. Changes in Preservice Science Teachers’ Perceived AI-TCK

Moreover, the pedagogical course also significantly improved PSSTs’ AI-technological pedagogical content knowledge (TPACK), as reflected in the increase in the overall weighted mean from 3.00 (SD = 0.49) in the pretest to 3.57 (SD = 0.36) in the posttest shown in Figure 6. This positive gain highlights the course’s effectiveness in developing teachers’ ability to thoughtfully integrate AI tools into subject-specific pedagogical practices. Among the seven indicators, the greatest improvement was observed in their ability to integrate AI tools into lessons using a variety of teaching strategies (Mpre = 2.97, SD = 0.66; Mpost = 3.67, SD = 0.47) and in designing lessons that combine content, pedagogy, and AI-based technologies (Mpre = 3.08, SD = 0.64; Mpost = 3.67, SD = 0.47). These results indicate that the course successfully scaffolded the development of integrated instructional design skills aligned with the principles of AI-enhanced teaching. Furthermore, PSSTs reported increased confidence in using AI to support personalized learning (Mpre = 3.15, SD = 0.57; Mpost = 3.57, SD = 0.52), provide adaptive feedback (Mpre = 3.01, SD = 0.63; Mpost = 3.46, SD = 0.50), and deliver real-time feedback during instruction (Mpre = 2.99, SD = 0.57; Mpost = 3.56, SD = 0.52). These competencies are central to dynamic, responsive classroom practices and reflect the course’s emphasis on instructional adaptability supported by AI. Notably, PSSTs also expressed increased readiness to assume a leadership role in AI integration (Mpre = 2.79, SD = 0.70; Mpost = 3.39, SD = 0.58), the lowest pretest score, which suggests a shift not only in skills but also in professional identity and agency. Lastly, their ability to select appropriate AI tools for monitoring and assessment showed meaningful improvement (Mpre = 3.00, SD = 0.56; Mpost = 3.64, SD = 0.48), pointing to growing competence in data-informed instruction. Taken together, the results demonstrate that the course effectively developed PSSTs’ TPACK by situating AI use within the nexus of content, pedagogy, and technology. The improvements in lesson design, adaptability, feedback mechanisms, and leadership orientation point to the importance of integrative and practice-oriented learning experiences in teacher education programs, especially in preparing future educators for AI‑enhanced instructional environments.

As seen in Figure 7, the pedagogical course also contributed significantly to the development of PSSTs’ ethical awareness in the context of AI integration in education. The weighted mean for the Ethics dimension increased from 3.03 (SD = 0.49) in the pretest to 3.54 (SD = 0.43) in the posttest, suggesting a substantial improvement in PSSTs’ ability to critically evaluate AI tools through an ethical lens. The greatest improvement was observed in the PSSTs’ awareness of how to assess the fairness of AI tools toward all students (Mpre = 3.05, SD = 0.67; Mpost = 3.64, SD = 0.48), which affirms the course’s emphasis on equity and justice in AI-supported learning environments. Similar gains were noted in PSSTs’ understanding of how AI tools account for individual learner differences such as race, gender, or learning needs (Mpre = 3.05, SD = 0.67; Mpost = 3.51, SD = 0.59), and in their ability to comprehend the reasoning behind AI-generated decisions (Mpre = 3.01, SD = 0.59; Mpost = 3.49, SD = 0.55). These findings indicate that the course successfully introduced critical perspectives on the interpretability and transparency of AI systems, helping future educators consider not only what AI tools can do but also how and why they produce specific outputs. Furthermore, the increase in teachers’ ability to identify developers or responsible entities behind AI tool design (Mpre = 3.02, SD = 0.66; Mpost = 3.51, SD = 0.53) reflects an important step toward cultivating digital accountability and informed decision-making in educational technology adoption. All in all, the improvement across all ethical dimensions suggests that the course encouraged reflective and socially responsible use of AI in education. These outcomes are vital in shaping ethically grounded educators who can champion inclusive, fair, and transparent AI practices within their future classrooms. Embedding these ethical competencies in teacher preparation programs is essential, particularly as AI becomes increasingly pervasive in educational settings.

 

Figure 6. Changes in Preservice Science Teachers’ Perceived AI-TPACK

 

Figure 7. Changes in Preservice Science Teachers’ Perceived AI-Ethics

Figure 8 shows that PSSTs’ perceived overall AI-TPACK competence improved across all dimensions from pretest to posttest. Notable increases were observed in AI-TK, AI-TPK, AI-TCK, AI-TPACK, and AI-Ethics, with the overall mean rising from 3.12 to 3.62, indicating an upward trend in their competence following the course.

 

Figure 8. Pretest and posttest mean scores of preservice science teachers’ AI-TPACK competence across dimensions

To determine the statistical significance of the observed improvements in PSSTs’ AI-TPACK competence, a Wilcoxon Signed-Ranks Test was conducted across the five dimensions and the overall AI‑TPACK composite score (Table 2).

AI-Technological Knowledge (TK)
Negative Ranks: N = 15, Mean Rank = 20.40, Sum of Ranks = 306.00
Positive Ranks: N = 63, Mean Rank = 44.05, Sum of Ranks = 2775.00
z = -6.178, Asymp. Sig. (2-tailed) = .000*, Effect size r = 0.70 (large)

AI-Technological Pedagogical Knowledge (TPK)
Negative Ranks: N = 9, Mean Rank = 17.56, Sum of Ranks = 158.00
Positive Ranks: N = 68, Mean Rank = 41.84, Sum of Ranks = 2845.00
z = -6.846, Asymp. Sig. (2-tailed) = .000*, Effect size r = 0.78 (large)

AI-Technological Content Knowledge (TCK)
Negative Ranks: N = 13, Mean Rank = 20.27, Sum of Ranks = 263.50
Positive Ranks: N = 58, Mean Rank = 39.53, Sum of Ranks = 2292.50
z = -5.848, Asymp. Sig. (2-tailed) = .000*, Effect size r = 0.69 (large)

AI-Technological Pedagogical Content Knowledge (TPACK)
Negative Ranks: N = 13, Mean Rank = 17.92, Sum of Ranks = 233.00
Positive Ranks: N = 63, Mean Rank = 42.75, Sum of Ranks = 2693.00
z = -6.380, Asymp. Sig. (2-tailed) = .000*, Effect size r = 0.73 (large)

AI-Ethics
Negative Ranks: N = 11, Mean Rank = 24.23, Sum of Ranks = 266.50
Positive Ranks: N = 62, Mean Rank = 39.27, Sum of Ranks = 2434.50
z = -5.965, Asymp. Sig. (2-tailed) = .000*, Effect size r = 0.70 (large)

Overall AI-TPACK
Negative Ranks: N = 13, Mean Rank = 17.19, Sum of Ranks = 223.50
Positive Ranks: N = 70, Mean Rank = 46.61, Sum of Ranks = 3262.50
z = -6.900, Asymp. Sig. (2-tailed) = .000*, Effect size r = 0.76 (large)

Note: Statistical significance is indicated as *p < .05. Effect size (r) is interpreted as follows: 0.10 to 0.29, small; 0.30 to 0.49, medium; 0.50 and above, large.

Table 2. Wilcoxon Signed-Ranks Test for Differences in PSSTs’ Perceptions
of AI-TPACK Before and After the Course

As can be gleaned in Table 2, results revealed statistically significant increases in all dimensions at the p < .001 level. Notably, the effect sizes (r) for all dimensions exceeded the 0.50 threshold, indicating large effects according to Cohen’s (1988) benchmarks. These findings suggest that the pedagogical learning course had a strong and meaningful impact on enhancing PSSTs’ self-reported competence in integrating AI in science teaching. More specifically, the highest effect size was observed in the TPK domain (r = .78, z = -6.846, p < .001), suggesting that the course was particularly effective in enabling PSSTs to understand and apply AI-based tools for pedagogical purposes such as monitoring student learning, evaluating feedback, and selecting tools to support knowledge application. This was closely followed by strong effects in the overall composite score (r = .76, z = -6.900, p < .001) and AI-TPACK domain (r = .73, z = -6.380, p < .001), indicating a robust improvement in the ability to design and implement instruction that integrates content, pedagogy, and AI technologies. Other dimensions also demonstrated substantial gains, including Technological Knowledge (TK) (r = .70), Ethics (r = .70), and Technological Content Knowledge (TCK) (r = .69), all showing large and statistically significant effects.

These results affirm that the course not only strengthened technical and pedagogical capacities but also effectively cultivated critical awareness of the ethical implications of AI use in education. The relatively high rank sums among positive ranks across all dimensions reflect consistent and meaningful gains among the majority of PSSTs. Overall, the combination of statistical significance and large effect sizes across all dimensions suggests that the course achieved its intended outcomes in developing AI-TPACK competence. It reinforces the importance of structured, integrative, and ethically grounded approaches in teacher education programs that aim to prepare educators for the demands of AI-integrated learning environments.

6.2. Affordances and Encumbrances Encountered by Preservice Science Teachers

While the quantitative analysis demonstrated significant improvements in PSSTs’ AI-TPACK competence, the qualitative strand provides essential context by highlighting how participants experienced the intervention and why certain dimensions were strengthened. The interview data directly captured PSSTs’ reflections on the affordances and challenges encountered during the course, particularly in relation to lesson exemplar development and the ethical integration of AI tools. These insights complement the statistical findings by offering explanatory depth that cannot be derived from numerical data alone, thereby enriching the overall interpretation of the study’s outcomes. In the following sections, PSSTs’ reflections are presented, focusing on the affordances that supported their learning and the encumbrances that posed challenges during the implementation of the AI-focused pedagogical learning course.

6.2.1. Preservice Science Teachers’ Reflections on the Affordances of the Course

As presented in Table 3, thematic analysis of PSSTs’ responses revealed three overarching themes: (1) Improved Competence in AI Integration, (2) Enhanced Pedagogical Application of AI, and (3) Confidence and Readiness for Future Teaching. These themes reflect the PSSTs’ perceived development in integrating AI meaningfully into their instructional practices, as well as their evolving beliefs and professional identity as future science educators.

Theme: Improved Competence in AI Integration
Categories: AI-supported lesson planning; Instructional design; Tool selection
Sample Codes: AI tools; Lesson exemplar; Instructional integration; Technology-enhanced planning

Theme: Enhanced Pedagogical Application of AI
Categories: Application of AI in teaching strategies; Assessment; Learner-centered practices
Sample Codes: Teaching strategies; Differentiated instruction; Student engagement; Personalized learning

Theme: Confidence and Readiness for Future Teaching
Categories: Increased confidence; Reduced apprehension; Preparedness to use AI in authentic settings
Sample Codes: More confident; Readiness to teach; Self-efficacy; Real-world classroom application

Table 3. Thematic Analysis of the Structured Interview Responses

Theme 1. Improved Competence in AI Integration

PSSTs consistently reported gains in their ability to integrate AI tools into lesson planning and instructional design. Many acknowledged that their understanding of how to strategically utilize AI in teaching significantly evolved over the course. Prior to the intervention, several PSSTs admitted to having limited exposure to AI applications in education. By the end of the course, however, they described themselves as more equipped to select and apply AI tools to support content delivery.

One PSST stated:

“Yes, my AI-TPACK competency has significantly improved because the course helped me integrate AI tools effectively into lesson planning and instructional design.”

Others emphasized how their conceptual understanding of AI shifted into practical, instructional competence:

“Before, my knowledge about using technology in teaching was very basic. But through this course, I learned that there are more ways to use AI to support teaching. This course opened my eyes to tools like simulations, apps, and presentation platforms.”

The ability to integrate AI meaningfully into lesson exemplars, rather than using it superficially, emerged as a critical insight:

“It’s not just about using AI—it’s about knowing what tool to use, when to use it, and how it fits with your learning objectives.”

Theme 2. Enhanced Pedagogical Application of AI

Beyond technological proficiency, PSSTs expressed growth in aligning AI tools with sound pedagogical strategies. They described how the course helped them understand the affordances of AI in supporting inquiry-based learning, formative assessment, differentiation, and student engagement. AI was no longer seen merely as an add-on, but as an integral part of instructional planning.

As one PSST noted:

“This course taught us that AI isn’t just for automation—it’s a powerful tool to personalize learning, engage students, and improve assessment practices.”

PSSTs also referenced the importance of AI in crafting responsive and inclusive learning environments:

“I have discovered various AI tools that can make lesson planning, assessment, and student engagement more efficient. These tools can support differentiated instruction, making it easier to meet the diverse needs of my students.”

Another PSST shared how AI integration elevated their understanding of pedagogy:

“I gained a better understanding that using AI is not just about technology, it is also a broad strategy that can be used in teaching.”

Theme 3. Confidence and Readiness for Future Teaching

The course also contributed significantly to PSSTs’ confidence in using AI tools and applying AI-TPACK in actual teaching contexts. Many reported feeling more empowered and less apprehensive about integrating technology in the classroom. Others shared how the course addressed their initial uncertainties and built their professional identity as adaptive, future-ready educators.

One PSST reflected:

“Before, I was ashamed and scared to let others know I use AI. But now, I’ve learned how to integrate AI seamlessly into my lesson plans. I even use this knowledge in my part-time tutoring work to create individualized learning materials.”

Several others highlighted how the structured opportunities to present and critique lesson exemplars strengthened their instructional confidence:

“By being able to develop lessons integrating AI, as well as listening and critiquing my peers’ works, I now feel equipped with the knowledge of not only what AI tools to use, but when exactly to use them.”

Finally, others acknowledged that while their growth was evident, they also recognized the need for continued development:

“My AI-TPACK competency has definitely grown, but it’s an ongoing journey. The course gave me a strong foundation, but AI is rapidly evolving, and we need to keep learning.”

In all, these themes reflect a trajectory of transformation among PSSTs, from a limited understanding of AI to confident, reflective, and pedagogically sound integration of AI tools into their instructional practices. The PSSTs’ reflections affirm the impact of the AI-focused pedagogical learning course in developing competencies aligned with the AI-TPACK framework and responsive to the demands of 21st-century education.

6.2.2. Preservice Science Teachers’ Reflections on the Encumbrances Encountered in the Course

When it comes to the challenges encountered in the course, three overarching themes emerged: (1) Instructional Alignment and Pedagogical Fit, (2) Access, Usability, and Technical Barriers, and (3) Navigating AI Tool Selection and Integration. These themes highlight the complexities that PSSTs faced in bridging technological tools with pedagogical intentions, logistical realities, and instructional design requirements (Table 4).

Theme: Instructional Alignment and Pedagogical Fit
Categories: Challenges in aligning AI tools with curriculum standards, learning objectives, and appropriate pedagogical approaches
Codes: Alignment with objectives; Pedagogical suitability; Curriculum fit; Lesson coherence

Theme: Access, Usability, and Technical Barriers
Categories: Difficulties related to limited device access, internet connectivity, subscription-based platforms, and unfamiliar interfaces
Codes: Subscription barriers; Low connectivity; Device limitations; Platform usability

Theme: Navigating AI Tool Selection and Integration
Categories: Overwhelm due to the large number of tools, uncertainty about which to use, lack of prior experience, and group coordination issues
Codes: Too many tools; Tool appropriateness; Learning curve; Low confidence; Group dynamics

Table 4. Thematic Analysis of the Structured Interview Responses

Theme 1. Instructional Alignment and Pedagogical Fit

A prevailing challenge involved aligning AI tools with lesson objectives, content standards, and appropriate pedagogical strategies. Many PSSTs expressed difficulty determining whether a given AI tool matched the learning competencies they were targeting. This misalignment often required critical review, revision, and additional research to ensure the tool enhanced, rather than detracted from, instructional integrity.

One PSST explained:

“It was hard to find a specific AI tool because the topic we chose was broad. We had to keep checking if it aligned with our objectives and whether it actually supported student learning.”

Another shared:

“The challenge that I encountered was where should I put the AI tools in my exemplar and what specific AI tools I will be using that is free for all and accessible. It also had to match our intended learning outcomes.”

PSSTs also described concerns about AI tools potentially overshadowing pedagogical goals:

“Sometimes, AI gives output that’s too generic or not quite aligned with what we’re teaching. We had to rewrite, adjust, and always double-check with the curriculum guide.”

To address these issues, PSSTs emphasized the importance of careful alignment, peer feedback, and iterative revision of their lesson designs.

Theme 2. Access, Usability, and Technical Barriers

Another significant theme concerned practical and technical challenges, including device limitations, lack of internet access, unfamiliarity with platforms, and the restrictive nature of paid tools. These barriers occasionally hindered tool exploration and integration during lesson exemplar development.

As one PSST recounted:

“Lack of gadgets to be used. I had to go to internet cafes just so I could use computers or PCs.”

Another PSST noted:

“Some of the AI tools are not that flexible. Commanding them is too difficult, and they sometimes give broad or irrelevant results.”

Subscription models also posed challenges:

“There are some good tools that need a subscription for us to use better, and it’s a hindrance, especially for those who don’t have access to paid platforms.”

To overcome these obstacles, PSSTs described adopting alternative tools, sharing resources with peers, and seeking technical support from instructors and tutorials.

Theme 3. Navigating AI Tool Selection and Integration

Many PSSTs highlighted the overwhelming number of AI tools available and their limited prior experience in evaluating and applying them effectively. This theme encompassed not only tool selection but also confidence issues and challenges in collaborative decision-making.

A PSST described the initial confusion:

“There are so many AI tools nowadays. The question is which ones will really help students understand the topic. It was overwhelming to choose and use them appropriately.”

Another added:

“At first, it was confusing to pick the right AI tools for specific tasks. I wasn’t sure how to use them in a meaningful way. I overcame this by asking for help, watching tutorials, and trying things out until I got the hang of it.”

Group coordination also emerged as a challenge in integrating tools:

“As someone who didn’t put a generative AI in the lesson exemplar, it was hard to find a beneficial and perfect AI. Maybe because my groupmates were not helping enough when we were making it.”

To navigate these challenges, PSSTs described using trial-and-error, collaborative brainstorming, and reviewing sample lesson exemplars. Over time, these strategies enhanced their understanding of how to integrate tools meaningfully and confidently. Collectively, these findings illuminate the need for structured guidance, accessible resources, and scaffolded opportunities to explore AI-based tools during teacher preparation programs. Addressing these challenges can help PSSTs move beyond basic tool familiarity to intentional, pedagogically sound AI integration in science instruction.

7. Discussion

The results of the present study demonstrated statistically significant improvements in PSSTs’ AI-TPACK competence following participation in the AI-focused pedagogical learning course. This finding indicates that the intervention successfully provided both the conceptual grounding and practical scaffolding necessary for PSSTs to integrate AI meaningfully into science education. The large overall effect size suggests that the gains were not only statistically reliable but also educationally substantial, underscoring the transformative potential of structured, short-term interventions in teacher education. These results resonate with earlier research on TPACK development in non-AI contexts, which similarly found that design-based pedagogical interventions can significantly enhance preservice teachers’ capacity for technology integration (Aktaş & Özmen, 2020; Antonio & Prudente, 2025; Kafyulilo, Fisser & Voogt, 2012; Kartal & Dilek, 2021). At the same time, the variability of effect sizes across dimensions suggests that while preservice teachers made strong progress in technological and pedagogical integration, more sustained support may be required to consolidate gains in ethical decision-making and content-specific applications of AI.
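
To make the reported statistics concrete, the short Python sketch below shows how a Wilcoxon Signed-Ranks Test on paired pretest-posttest scores is commonly computed, and how the z statistic converts into Rosenthal’s effect size r = |z| / √N. The simulated scores, random seed, and variable names are illustrative assumptions only; this is not the study’s analysis script or data.

```python
# Minimal sketch of a Wilcoxon Signed-Ranks Test with effect size r = |z| / sqrt(N).
# The simulated scores below are hypothetical, for demonstration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 84                                        # number of paired observations
pretest = rng.normal(3.0, 0.5, n)             # hypothetical pretest scores
posttest = pretest + rng.normal(0.6, 0.3, n)  # hypothetical posttest gains

# Two-sided Wilcoxon signed-rank test on the paired scores
w_stat, p_value = stats.wilcoxon(pretest, posttest)

# Recover z via the large-sample normal approximation of the W statistic,
# then compute Rosenthal's r = |z| / sqrt(N) as the effect size
mu_w = n * (n + 1) / 4
sigma_w = np.sqrt(n * (n + 1) * (2 * n + 1) / 24)
z = (w_stat - mu_w) / sigma_w                 # negative when posttest > pretest
r = abs(z) / np.sqrt(n)

print(f"W = {w_stat:.1f}, p = {p_value:.3g}, z = {z:.3f}, r = {r:.2f}")
```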

The PSSTs’ AI-TPACK growth can be attributed to several features embedded in the course design. The course grounded its learning progression in the TPACK framework, which provided a conceptual structure for understanding the interplay among content, pedagogy, and technology (Mishra & Koehler, 2006). Integrating the SAMR model further showed learners how AI use can progress from substitution to redefinition of learning tasks (Puentedura, 2014). Participants’ reflections reveal that these frameworks helped them transcend basic tool use and instead engage in transformative pedagogical design. In addition, the course employed experiential and inquiry-based learning strategies, most notably the creation and critique of AI-integrated lesson exemplars. This culminating project served not only as a performance task but also as a space for PSSTs to synthesize their understanding and apply it in authentic contexts. As PSSTs reported, this hands-on design work helped them evaluate and select appropriate tools, align them with learning outcomes, and adapt instruction to diverse learners. These findings support prior studies emphasizing that design-based, task-oriented activities enhance teachers’ TPACK development and foster higher-order integration skills (Baran & Uygun, 2016; Njiku, Mutarutinya & Maniraho, 2021; Oktaviani & Utami, 2024).

The hands-on exploration of AI tools also served as a key factor. PSSTs’ responses consistently pointed to the value of actively experimenting with and evaluating AI applications, rather than passively learning about them. During the course workshops, PSSTs worked directly with tools such as Flexi, ChatGPT, PhET, and CK-12, enabling them to understand not just their functionalities but also their pedagogical affordances. The collaborative structure of the course further amplified its impact. Group work, peer presentations, and the open sharing of tools and exemplar designs facilitated social learning, where PSSTs learned vicariously from each other’s choices and feedback. According to the Community of Inquiry (CoI) model (Garrison, Anderson & Archer, 2000), such interaction builds cognitive presence and strengthens conceptual understanding through collective meaning-making. This blend of exploration, collaboration, and reflection aligns with the principles of constructivist and sociocultural learning theories, where learning is enhanced through meaningful activity, dialogical interaction, and contextual application (Vygotsky, 1978; Lave & Wenger, 1991).

Thematic analysis of post-course interviews revealed further insights into the course’s affordances and encumbrances, enriching the interpretation of the quantitative gains. Three major affordances stood out. First, the authenticity of the culminating task, developing AI-integrated lesson exemplars, was frequently cited as the most impactful feature. This activity allowed PSSTs to make real-world instructional decisions regarding AI integration, content alignment, and student needs, reinforcing the importance of authentic assessment in teacher education (Darling-Hammond, 2006). Second, peer modeling and critique emerged as a powerful learning mechanism. Observing classmates’ presentations, tools, and design choices created a dynamic exchange of perspectives that stimulated critical reflection. This echoes findings from collaborative learning research, where shared task environments enable deeper understanding and increased self-efficacy (Gillies, 2016). Lastly, the course raised PSSTs’ awareness of ethical considerations, particularly the importance of responsible AI use in teaching. Several students mentioned learning that AI tools should be leveraged with intentionality and moderation, maintaining the teacher’s role as facilitator. These insights highlight the Ethics domain of the AI-TPACK framework, which is crucial in shaping informed, reflective practitioners amid rapid technological change (Celik, 2023).

Despite the notable successes of the course, several challenges emerged that highlight important considerations for future implementation. One recurrent issue was the experience of tool overload and cognitive fatigue among PSSTs, many of whom felt overwhelmed by the sheer volume of AI tools introduced within a relatively short timeframe. This aligns with Sweller’s (2011) Cognitive Load Theory, which posits that excessive or poorly structured information can impede learning by overburdening the working memory. Without sufficient scaffolding or time for deep engagement, the rapid introduction of diverse tools may compromise meaningful integration. In addition to cognitive strain, technical limitations posed significant barriers to equitable participation. Issues such as unreliable internet connectivity, limited access to appropriate devices, and dependence on subscription-based AI platforms were frequently cited by PSSTs. These infrastructural constraints reflect broader digital inequalities that can hinder inclusive and sustainable AI integration in classrooms (Widyasari, Murtiyasa & Supriyanto, 2024). Furthermore, the inherent complexity of many AI tools, often designed without educators in mind, stresses the need for targeted training and support systems to enable effective use (Kizilcec, 2024).

Another key challenge relates to pedagogical alignment. PSSTs reported difficulty identifying AI tools that genuinely complemented their instructional goals, resulting in superficial or ineffective applications. As Crompton, Jones and Burke (2022) emphasize, the educational value of AI depends not merely on technological sophistication but on how well it aligns with established pedagogical practices and learning outcomes. When misaligned, AI tools can disrupt instructional coherence and diminish the intended learning experience. These encumbrances highlight the importance of thoughtful instructional design, guided selection of tools, and contextual sensitivity in preparing teachers for AI-enhanced education.

Overall, the findings of this study affirm the transformative potential of a well-designed, contextually grounded pedagogical course in developing AI-TPACK competence among PSSTs. By strategically integrating theoretical models, experiential learning, collaborative design, and reflective practice, the course not only enhanced PSSTs’ conceptual and practical understanding of AI integration but also fostered critical awareness of ethical and contextual considerations. At the same time, the challenges encountered, ranging from cognitive overload and infrastructural limitations to pedagogical misalignment, highlight the nuanced realities of preparing future educators for AI-enriched learning environments. These insights point to the need for teacher education programs to adopt intentional, scaffolded, and equity-driven approaches to AI integration, ensuring that preservice teachers are not merely tool users but thoughtful designers of AI-supported instruction. As AI continues to reshape the educational landscape, empowering teachers with both competence and discernment remains a crucial imperative.

8. Conclusion

This study provides compelling evidence that a strategically designed AI-focused pedagogical learning course can significantly enhance PSSTs’ competence in integrating AI into science instruction. Grounded in integrative frameworks such as TPACK, SAMR, Universal Design for Learning (UDL), and Inquiry-Based Learning (IBL), the course successfully bridged technological innovation with pedagogical integrity, preparing future-ready educators capable of designing adaptive, reflective, and ethically informed instruction in AI-enhanced classrooms. These findings underscore the transformative potential of well-structured teacher education interventions in advancing digital pedagogy and shaping science education in the era of AI.

More specifically, participation in the AI-focused pedagogical learning course led to statistically significant improvements across all dimensions of AI-TPACK competence — technological knowledge, pedagogical knowledge, content-specific applications, integrated instructional design, and ethical awareness. The large effect sizes observed indicate that immersive, scaffolded, and collaborative learning experiences can substantially improve preservice teachers’ confidence and capability to integrate AI meaningfully into teaching. Through activities such as AI tool exploration, collaborative lesson design, and exemplar development, PSSTs were able to connect theory with practice and develop the necessary knowledge and skills to design technology-enhanced, inquiry-oriented instruction.

Moreover, the qualitative findings revealed that the course not only enhanced competence but also reshaped PSSTs’ perspectives on the pedagogical role of AI. Participants highlighted key affordances such as improved instructional design, deeper understanding of AI’s pedagogical value, and heightened confidence and readiness to teach in technology-rich environments. At the same time, they identified encumbrances including tool selection challenges, technical and access barriers, cognitive overload, and difficulties in aligning AI tools with curriculum goals. These findings emphasize the need for continued institutional support, targeted training, and scaffolded integration strategies to help future teachers navigate the complexities of AI use in science education.

In sum, this study concludes that effective preparation for AI-enhanced science teaching requires more than technical proficiency — it demands pedagogically grounded, ethically responsible, and contextually responsive approaches that empower future educators to harness AI for deeper inquiry, adaptive instruction, and transformative learning experiences.

9. Recommendations

To strengthen the integration of AI in science teacher education, this study recommends several action points for practice, policy, and future research. From a practical standpoint, teacher education institutions should embed AI integration into core pedagogical courses, ensuring that preservice teachers experience hands-on activities such as AI tool exploration, collaborative lesson planning, and exemplar development. These immersive experiences, anchored in frameworks like TPACK and SAMR, can effectively enhance preservice teachers’ competence and confidence in using AI for instructional purposes.

Teacher development programs should also prioritize capacity building in AI pedagogies to model effective integration practices. Teacher education institutions may design and implement comprehensive teacher-training programs for in-service science teachers on technology integration, with a particular emphasis on the purposeful use of AI applications, inquiry-based learning approaches, and metacognitive strategies. By embedding hands-on workshops, collaborative lesson design, exemplar development, and reflective practice into the training, such programs can empower in-service teachers to translate theoretical knowledge into classroom-ready pedagogical innovations. This approach will not only strengthen teachers’ confidence and competence in AI-enhanced instruction but also foster a sustained culture of innovation and reflective practice in science classrooms.

At the policy level, there is a pressing need for national education agencies to revise teacher education standards and curricula to reflect the growing importance of AI-TPACK competence. Policymakers are encouraged to formulate guidelines that promote the ethical, equitable, and contextually relevant integration of AI in teacher preparation. Institutional policies should support interdisciplinary collaboration between education and technology fields to design forward-looking, AI-enriched teacher training programs.

For future research, longitudinal studies are needed to track the sustained impact of AI-focused pedagogical training on teaching practices during student teaching and into professional practice. Further inquiries should also investigate how AI-TPACK competence translates into learner outcomes and engagement in diverse science education contexts. Additionally, research could explore scalable and culturally responsive models of AI integration in teacher education, along with critical examinations of digital equity, ethical use, and teacher agency in AI-supported learning environments.

10. Limitations

While this study offers valuable insights into the development of AI-TPACK competence among PSSTs, several limitations must be noted. First, the use of a one-group pretest-posttest design without a control group restricts causal inference, as improvements may also reflect other factors such as concurrent coursework or external exposure to AI (Shadish, Cook & Campbell, 2002). Future research should employ quasi-experimental or longitudinal designs to strengthen causal claims and assess sustainability. Second, the sample, drawn from a single state university in Central Luzon and limited to BSEd Science majors, constrains generalizability. Broader studies across diverse institutions and disciplines are recommended. Third, the reliance on self-reported perceptions may have introduced bias and may not fully reflect actual classroom practices; future studies could integrate observations or performance-based assessments. Finally, the 14-week duration was insufficient to capture long-term retention and application, underscoring the need for follow-up studies tracking preservice teachers into their practicum and early teaching years.

Declaration of Conflicting Interests

The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author received no financial support for the research, authorship, and/or publication of this article.

References

Aga, Z.G., Sawyer, A.G., & Wolfe, M. (2024). More confidence more critical: Investigating secondary mathematics preservice teachers’ use of an Artificial Intelligence chatbot as a curriculum development tool. American Journal of Creative Education, 7(2), 63-78. https://doi.org/10.55284/ajce.v7i2.1278

Aktaş, İ., & Özmen, H. (2020). Investigating the impact of TPACK development course on pre-service science teachers’ performances. Asia Pacific Education Review, 21(4), 667-682. https://doi.org/10.1007/s12564-020-09653-x

Alayyar, G.M. (2011). Developing pre-service teacher competencies for ICT integration through design teams. Doctoral dissertation. University of Twente, Enschede, The Netherlands. https://doi.org/10.3990/1.9789036532341

Almasri, F. (2024). Exploring the impact of artificial intelligence in teaching and learning of science: A systematic review of empirical research. Research in Science Education, 54(5), 977-997. https://doi.org/10.1007/s11165-024-10176-3

Antonio, R.P., & Prudente, M.S. (2025). Cultivating preservice science teachers’ TPACK and self-efficacy beliefs through a pedagogical learning course on technology-integrated metacognitive argument-driven inquiry. Journal of Science Education and Technology, 1-25. https://doi.org/10.1007/s10956-025-10237-w

Antonio, R.P., & Sison, L.R.C. (2026). Can artificial intelligence (AI) shape the future of teacher education? Drawing evidence-based insights for teacher preparation. International Journal of Instruction, 19(1), 257-280.

Antonio, R.P. (2025). Promoting technological pedagogical content knowledge (TPACK) in preservice science teacher education: A scoping review of instructional strategies, interventions, and programs. International Journal on Studies in Education (IJonSE), 7(1), 157-171. https://doi.org/10.46328/ijonse.302

Attach, A.I.G., Etim, N.M., Inyang, S.I., Oguzie, B.A., & Ekong, J.S. (2024). Shaping the future of AI in education: Analyzing key influencers on Romanian teacher trainees’ willingness to integrate AI. The American Journal of Social Science and Education Innovations, 6(9), 158-173. https://doi.org/10.37547/tajssei/Volume06Issue09-17

Ayanwale, M.A., Adelana, O.P., Molefi, R.R., Adeeko, O., & Ishola, A.M. (2024). Examining artificial intelligence literacy among pre-service teachers for future classrooms. Computers and Education Open, 6, 100179. https://doi.org/10.1016/j.caeo.2024.100179

Baran, E., & Uygun, E. (2016). Putting technological, pedagogical, and content knowledge (TPACK) in action: An integrated TPACK-design-based learning (DBL) approach. Australasian Journal of Educational Technology, 32(2). https://doi.org/10.14742/ajet.2551

Black, N.B., George, S., Eguchi, A., Dempsey, J.C., Langran, E., Fraga, L. et al. (2024). A framework for approaching AI education in educator preparation programs. In Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23069-23077. https://doi.org/10.1609/aaai.v38i21.30351

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77-101. https://doi.org/10.1191/1478088706qp063oa

Celik, I. (2023). Towards Intelligent-TPACK: An empirical study on teachers’ professional knowledge to ethically integrate artificial intelligence (AI)-based tools into education. Computers in Human Behavior, 138, 107468. https://doi.org/10.1016/j.chb.2022.107468

Chiu, T.K.F., Chai, C.S., Liang, J.C., & Tsai, C.C. (2022). Artificial intelligence in education: A review of empirical research from 2011 to 2020. Computers and Education: Artificial Intelligence, 3, 100052. https://doi.org/10.1016/j.caeai.2022.100052

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Lawrence Erlbaum Associates.

Cohen, L., Manion, L., & Morrison, K. (2018). Research methods in education (8th ed.). Abingdon, Oxon: Routledge. https://doi.org/10.4324/9781315456539

Creswell, J.W., & Creswell, J.D. (2017). Research design: Qualitative, quantitative, and mixed methods approaches. Sage Publications.

Crompton, H., Jones, M.V., & Burke, D. (2022). Affordances and challenges of artificial intelligence in K-12 education: A systematic review. Journal of Research on Technology in Education, 56(3), 248-268. https://doi.org/10.1080/15391523.2022.2121344

Darling-Hammond, L. (2006). Constructing 21st-century teacher education. Journal of Teacher Education, 57(3), 300-314. https://doi.org/10.1177/0022487105285962

Emerson, R.W. (2016). Measuring change: pitfalls in research design. Journal of Visual Impairment & Blindness, 110(4), 288-290. https://doi.org/10.1177/0145482X1611000412

Estrellado, C.J., & Miranda, J.C. (2023). Artificial intelligence in the Philippine educational context: Circumspection and future inquiries. International Journal of Scientific and Research Publications, 13(5), 16-22. https://doi.org/10.29322/IJSRP.13.05.2023.p13704

Floridi, L., & Cowls, J. (2022). A unified framework of five principles for AI in society. In Machine Learning and the City: Applications in Architecture and Urban Design (535-545). John Wiley & Sons.

Garrison, D.R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2-3), 87-105. https://doi.org/10.1016/S1096-7516(00)00016-6

Gatlin, M. (2023). Assessing pre-service teachers’ attitudes and perceptions of using artificial intelligence in the classroom. Texas Educator Preparation, 7(2), 1-8. https://doi.org/10.59719/txep.v7i2.35

Gillies, R.M. (2016). Cooperative learning: Review of research and practice. Australian Journal of Teacher Education (Online), 41(3), 39-54. https://doi.org/10.14221/ajte.2016v41n3.3

Harakchiyska, T., & Vassilev, T. (2024). Pre-service teachers’ perceptions of AI and its implementation in the foreign (English) language classroom. Strategies for Policy in Science & Education / Strategii na Obrazovatelnata i Nauchnata Politika, 32(5), 218-232. https://doi.org/10.53656/str2024-5s-22-pre

Holmes, W., Bialik, M., & Fadel, C. (2022). Artificial intelligence in education: Promises and implications for teaching and learning. Center for Curriculum Redesign.

Kafyulilo, A., Fisser, P., & Voogt, J. (2012). Transforming classroom practices through teachers’ learning of TPACK: The case of in-service teachers at Kibasila Secondary School in Tanzania. In P. Resta (Ed.), Proceedings of SITE 2012 - Society for Information Technology & Teacher Education International Conference (2861-2869). Austin, Texas, USA: Association for the Advancement of Computing in Education (AACE). Available at: https://www.learntechlib.org/p/40023

Kartal, T., & Dilek, I. (2021). Preservice science teachers’ TPACK development in a technology-enhanced science teaching method course. Journal of Education in Science Environment and Health, 7(4), 339-353. https://doi.org/10.21891/jeseh.994458

Kizilcec, R.F. (2024). To advance AI use in education, focus on understanding educators. International Journal of Artificial Intelligence in Education, 34(1), 12-19. https://doi.org/10.1007/s40593-023-00351-4

Kuo-Hsun-Ma, J. (2007). Digital Divide, Global. The Blackwell Encyclopedia of Sociology, 1-6. https://doi.org/10.1002/9781405165518.wbeos0563.pub2

Lacuna, J.R. (2025). Exploring the readiness of pre-service teachers for AI integration in Philippine education. International Journal of Research and Innovation in Social Science, 9(3), 4907-4924. https://doi.org/10.47772/IJRISS.2025.90300392

Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge University Press. https://doi.org/10.1017/CBO9780511815355

Lepage, A., & Collin, S. (2024). Preserving teacher and student agency: Insights from a literature review. In Creative Applications of Artificial Intelligence in Education (17-34). Palgrave Macmillan. https://doi.org/10.1007/978-3-031-55272-4_2

Luckin, R., Holmes, W., Griffiths, M., & Forcier, L.B. (2016). Intelligence unleashed: An argument for AI in education. Pearson Education.

Mafarja, N., Zulnaidi, H., & Mohamad, M.M. (2025). Developing critical thinking skills through AI-enhanced science education for sustainability. In Rethinking the Pedagogy of Sustainable Development in the AI Era (173-196). IGI Global Scientific Publishing. https://doi.org/10.4018/979-8-3693-9062-7.ch009

Meylani, R. (2024). Artificial intelligence in the education of teachers: A qualitative synthesis of the cutting-edge research literature. Journal of Computer and Education Research, 12(24), 600-637. https://doi.org/10.18009/jcer.1477709

Mishra, P., & Koehler, M.J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record: The Voice of Scholarship in Education, 108(6), 1017-1054. https://doi.org/10.1111/j.1467-9620.2006.00684.x

Njiku, J., Mutarutinya, V., & Maniraho, J.F. (2021). Building mathematics teachers’ TPACK through collaborative lesson design activities. Contemporary Educational Technology, 13(2), ep297. https://doi.org/10.30935/cedtech/9686

Nuangchalerm, P., Prachagool, V., Saregar, A., & Yunus, Y.M. (2024). Fostering pre-service teachers’ AI literacy through school implications. Journal of Philology and Educational Sciences, 3(2), 77-86. https://doi.org/10.53898/jpes2024327

O’Neill, S. (2025). AI-enabled science lesson development for pre-service and novice teachers. In Emerging Technologies Transforming Higher Education: Instructional Design and Student Success (125-144). IGI Global. https://doi.org/10.4018/979-8-3693-3904-6.ch006

Oktaviani, H.I., & Utami, D.D. (2024). Development of training programs to enhance teachers’ digital skills with technological pedagogical content knowledge (TPACK). Journal of Educational Technology Studies and Applied Research, 1(2). https://doi.org/10.70125/jetsar.v1i2y2024a23

Özer-Altınkaya, Z., & Yetkin, R. (2025). Exploring pre-service English language teachers’ readiness for AI-integrated language instruction. Pedagogies: An International Journal, 1-17. https://doi.org/10.1080/1554480X.2025.2451299

Park, J. (2024). A case study on enhancing the expertise of artificial intelligence education for pre-service teachers. Preprints. https://doi.org/10.20944/preprints202305.2006.v1

Prajapati, M.A.B. (2024). Artificially intelligent in education: “Redefining learning in the 21st century”. Educational Resurgence Journal.

Puentedura, R.R. (2014). SAMR: A contextualized introduction.

Sawyer, A.G. (2024). Artificial intelligence chatbot as a mathematics curriculum developer: Discovering preservice teachers’ overconfidence in ChatGPT. International Journal on Responsibility, 7(1), 1. https://doi.org/10.62365/2576-0955.1106

Schopp, K., Schelenz, L., Heesen, J., & Pawelec, M. (2019). Ethical questions of digitalization in the Global South: Perspectives on justice and equality. TATuP - Zeitschrift für Technikfolgenabschätzung in Theorie und Praxis, 28(2), 11. https://doi.org/10.14512/tatup.28.2.s11

Shadish, W.R., Cook, T.D., & Campbell, D.T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.

Singh, G., & Thakur, A. (2024). AI in education: Ethical challenges and opportunities. In The Ethical Frontier of AI and Data Analysis (18-38). IGI Global. https://doi.org/10.4018/979-8-3693-2964-1.ch002

Suna, G., Suchismita, S., & Das, T. (2025). Integrating artificial intelligence in teacher education: A systematic analysis. International Journal of Current Science Research and Review, 8(1), 33. https://doi.org/10.47191/ijcsrr/V8-i1-33

Sweller, J. (2011). Cognitive load theory. Psychology of Learning and Motivation, 55, 37-76. https://doi.org/10.1016/B978-0-12-387691-1.00002-8

UNESCO (2021). AI and education: Guidance for policymakers. United Nations Educational, Scientific and Cultural Organization. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000376709

Vygotsky, L. (1978). Interaction between learning and development. Readings on the Development of Children, 23(3), 34-41.

Wang, D., & Huang, X. (2025). Transforming education through artificial intelligence and immersive technologies: enhancing learning experiences. Interactive Learning Environments, 33(7), 4546-4565. https://doi.org/10.1080/10494820.2025.2465451

Weiner, S., Lake, R., & Rosner, J. (2024). AI Is Evolving, but Teacher Prep Is Lagging: A First Look at Teacher Preparation Program Responses to AI. Center on Reinventing Public Education.

Widyasari, E., Murtiyasa, B., & Supriyanto, E. (2024). Revolusi Pendidikan dengan Artificial Intelligence: Peluang dan Tantangan [Educational revolution with artificial intelligence: Opportunities and challenges]. Jurnal Ilmiah Edukatif, 10(2), 302-311. https://doi.org/10.37567/jie.v10i2.3405

Xie, H., Chu, H.C., Hwang, G.J., & Wang, C.C. (2021). Trends and development in technology-enhanced adaptive/personalized learning: A systematic review of journal publications from 2007 to 2017. Computers & Education, 140, 103599. https://doi.org/10.1016/j.compedu.2019.103599

Yadav, S. (2024). Artificial Intelligence (AI) Integration in Higher Education: Navigating Opportunities and Ethical Frontiers in Education with Advanced Technologies. In Impact of Artificial Intelligence on Society (43-59). Chapman and Hall/CRC. https://doi.org/10.1201/9781032644509-4

Zawacki-Richter, O., Marín, V.I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – Where are the educators? International Journal of Educational Technology in Higher Education, 16, 39. https://doi.org/10.1186/s41239-019-0171-0





This work is licensed under a Creative Commons Attribution 4.0 International License

Journal of Technology and Science Education, 2011-2026

Online ISSN: 2013-6374; Print ISSN: 2014-5349; DL: B-2000-2012

Publisher: OmniaScience