Evaluation of the impact of the use of generative artificial intelligence in higher education: EPGAI‑ES scale


María-Inmaculada Jiménez-Perona*, Miguel-Ángel Fernández-Jiménez, Dolores Pareja-de-Vicente, Juan-José Leiva-Olivencia

University of Malaga (Spain)

Received April 2025

Accepted September 2025

Abstract

Evaluating the impact of Generative Artificial Intelligence (GAI) on higher education is crucial today, not only to enhance teaching and learning dynamics but also to understand its effective integration into educational institutions. This article aims to present an instrument developed to assess the impact, pedagogical implications, and attitudes related to the use of GAI applications in university educational processes. The development of this instrument is based on a study conducted using a descriptive quantitative research methodology, employing a structured survey design. A sample of 471 higher education students was used for validation. The analysis confirmed the quality of the instrument by evaluating its reliability and validity. The questionnaire demonstrates high internal consistency and adequate validation, ensuring coherence, reliability, and alignment with a solid theoretical framework. Tools like the questionnaire developed in this study are essential for assessing how GAI affects and is employed in higher education. Utilizing such tools can promote positive practices in the application of GAI and enhance educational standards.

 

Keywords – Generative artificial intelligence, Higher education, Students, Evaluation questionnaire.

To cite this article:

Jiménez-Perona, M.I., Fernández-Jiménez, M.A., Pareja-de-Vicente, D., & Leiva-Olivencia, J.J. (2025). Evaluation of the impact of the use of generative artificial intelligence in higher education: EPGAI‑ES scale. Journal of Technology and Science Education, 15(3), 699-729. https://doi.org/10.3926/jotse.3459

 

----------

1. Introduction

Artificial Intelligence (AI) is a field of computer science focused on developing systems and algorithms capable of performing tasks that, if performed by humans, would require intelligence. These tasks include pattern recognition, decision-making, problem-solving, and natural language processing, among others. AI draws on various techniques and approaches, such as machine learning, fuzzy logic, neural networks, and robotics, with the aim of mimicking or surpassing human cognitive capabilities in specific areas (Flores-Vivar & García-Peñalvo, 2023).

AI has the potential to significantly transform society (Romero-Rodríguez, Ramírez-Montoya, Buenestado-Fernández & Lara-Lara, 2023), but it is also considered a disruptive technology that can profoundly alter current dynamics in various sectors (García-Peñalvo, Llorens-Largo & Vidal, 2024).

Generative Artificial Intelligence (GAI) is a specialized branch of AI that has experienced considerable growth in recent years, gaining increasing importance. It focuses on the creation of new and original content, using advanced language models capable of generating text, images, music, voice, and more. These models enable the generation and replication of artificial data from original data sets (Bethencourt-Aguilar, Castellanos-Nieves, Sosa-Alonso & Area-Moreira, 2023). Its ability to produce high-quality content and its ease of use have led to its widespread daily use in higher education.

Thus, students and teachers can use generative tools to improve teaching and learning processes, facilitating the creation of personalized educational materials and the automation of repetitive tasks (Sanabria-Navarro, Silveira-Pérez, Pérez-Bravo & Cortina-Núñez, 2023).

It is evident that GAI has emerged as a transformative force in various fields, and its impact on higher education is undeniable (Cordón, 2023). However, many educational institutions have not yet managed to effectively integrate generative systems into their academic processes, limiting their ability to take full advantage of the opportunities that this innovative technology offers. Institutions have adopted varying approaches to this emerging technology without arriving at a definitive position: while some prohibit the use of these tools altogether, others are actively investigating how students and faculty can harness their potential to enhance the teaching-learning process in higher education, including integrating them into their curricula or developing specific guidelines for their ethical and effective use (Lievens, 2023; Gallent-Torres, Zapata-González & Ortego-Hernando, 2023).

In relation to its projection in education, Wang and Cheng (2021) proposed that research should focus on three fundamental lines:

  • Learning with AI: Using AI tools and applications to support and enhance the educational process, enabling more interactive and adaptive learning experiences. 

  • Learning about AI: Teaching students about AI principles, technologies and applications, preparing the next generation to work with these technologies and understand their implications. 

  • Using AI for learning to learn: Employing AI to develop metacognitive skills in students, helping them become more efficient and autonomous learners. 

In this regard, UNESCO (2019) differentiates three dimensions of linking AI and education: learning how to use AI tools in the classroom, learning about AI and its technical possibilities, and raising awareness of the impact of AI on people’s lives.

The incorporation of these technologies in education presents a significant educational and ethical challenge for teachers, especially in terms of their design and implementation, which demands deeper analysis through educational research (García-Martínez, Fernández-Batanero, Fernández-Cerero & León, 2023). It is equally important to address issues such as transparency, equity, and privacy, among other relevant aspects, to ensure that generative artificial intelligence (GAI) is used ethically and that its positive impact on the educational environment is maximized (Andión-Gamboa & Cárdenas-Presa, 2023; Cornejo-Plaza & Cippitani, 2023; Cordón, 2023; Flores-Vivar & García-Peñalvo, 2023; Sánchez-Mendiola & Carbajal-Degante, 2023; among others).

The research problem addressed in this study focuses on the need for a deeper understanding of the impact of GAI in higher education.

Generative Artificial Intelligence (GAI) is transforming higher education by facilitating learning personalization, content creation and task automation. However, despite its growing adoption in academia, there is still little systematic research assessing its actual impact on the development of academic competencies, the student experience, and the ethical challenges associated with its use. Likewise, students’ perceptions and attitudes towards GAI, as well as their assessment of its benefits and limitations, have been explored to a limited extent, which hinders its effective integration into educational processes.

Although several scales and instruments have been developed to investigate GAI, many of them do not address it from the student’s perspective (Jiménez-Martínez, Gamboa-Rodríguez, Vaughan-Bernal & Moreno-Toledo, 2024; Cascales-Martínez, López-Ros & Gomariz-Vicente, 2024; Ayuso-del-Puerto & Gutiérrez-Esteban, 2022; Saz-Pérez & Pizá-Mir, 2024; Dúo-Terrón, Moreno-Guerrero, López-Belmonte & Marín-Marín, 2023; Prajapati, Kumar, Singh, Prajapati, Thakar, Tambe et al., 2024). Moreover, those that do include this perspective adopt different approaches, dimensions, and objectives (Naváez & Medina-Gual, 2024; Denecke, Glauser & Reichenpfader, 2023; Shaengchart, 2023), which highlights the need to develop specific tools that allow a more in-depth analysis of the role of GAI in student training. This lack of evidence prevents educational institutions from developing informed strategies for its implementation.

In view of this situation, it is necessary to have rigorous evaluation tools, such as the EPGAI-ES Scale, to accurately measure the impact of GAI in higher education and provide key information for its appropriate pedagogical use. In this sense, it is hypothesized that the implementation of GAI in the teaching-learning processes significantly influences the perception of university students on their academic development and their attitude towards the use of these technologies in the educational environment.

In comparison with previous instruments developed in the field of Generative Artificial Intelligence in education (e.g., Jiménez-Martínez et al., 2024; Cascales-Martínez et al., 2024; Ayuso-del-Puerto & Gutiérrez-Esteban, 2022; Saz-Pérez & Pizá-Mir, 2024; Dúo-Terrón et al., 2023), the EPGAI-ES Scale introduces several innovative features. First, it adopts a multidimensional approach that integrates pedagogical, ethical, and practical aspects of GAI use in higher education. Second, it has been validated through both exploratory and confirmatory factor analyses, ensuring robust psychometric evidence. Third, expert judgment was systematically employed to strengthen content validity. Finally, this scale is specifically designed from the student perspective, a dimension often underexplored in previous research. These distinctive elements reinforce the originality and relevance of the EPGAI-ES Scale. Therefore, this research aims to design an instrument to evaluate the impact, pedagogical implications, and attitudes of students towards the use of GAI applications in the educational processes of Higher Education, with the purpose of analyzing how and to what extent these applications are being incorporated into educational institutions.

This work is part of a much broader research project belonging to the doctoral thesis entitled “Impact evaluation, pedagogical implications and attitudes derived from the use of generative artificial intelligence applications in teaching-learning processes in higher education” (Jiménez-Perona, n.d.).

2. Methodology

2.1. Design

The study has been carried out using a descriptive quantitative research methodology. A survey-based design was used (Creswell & Creswell, 2018; Alvira, 2011), which allowed for the collection of information on the variability of the different dimensions analyzed in the study (López-Roldán & Fachelli, 2015).

Survey design can be considered a methodology that combines the rigor of experimental designs with the flexibility of observational designs (Buendía, Colás & Hernández, 1994).

2.2. Sample

The selection of participants was carried out by means of non-probability convenience (accidental) sampling, which made it possible to obtain an accessible sample of university students in different academic years. This technique was used because of the ease of access to available students and the exploratory nature of the study. The sample consisted of a total of 471 students, ranging in age from 18 to 57 years (mean 21.26 years, standard deviation 5.56, mode 20, and median 20). Regarding gender, 77% of the participants were female, 22% male, and 1% other responses (prefer not to say, non-binary, etc.). Regarding year of study, 41% of the students were in their second year, followed by 31% in the first year, 18% in the third year, 5% in the fourth year, and 5% in Master’s programs. These sociodemographic characteristics were obtained through an analysis of the distributions of the corresponding variables, which allowed the composition of the sample to be characterized.

2.3. Procedure

The procedure for the design of the questionnaire was based on an exhaustive review of the literature on the subject. For this review, the guidelines of the PRISMA 2020 Declaration were followed, with publications from 2019 to the present, in English and Spanish, searching in four electronic databases: Scopus, Web of Science, PubMed, and Google Scholar.

This review made it possible to establish the objectives, dimensions and variables of the study on the basis of the data extracted.

In order to have an adequate and representative measurement instrument for the research problem, an ad hoc questionnaire was designed and reviewed using the technique of individual aggregates (Cabero & Llorente, 2013; Escobar & Cuervo, 2008) by 11 professional experts from the Faculty of Education Sciences of the University of Malaga. The validation process of the questionnaire followed the guidelines suggested by the literature, which stresses the importance of ensuring the validity and reliability of measurement instruments through various methods to obtain the coefficients of each of its components.

The experts were also asked to offer additional suggestions if they considered them pertinent. The evaluations and recommendations for improving each item were compiled in a single document to facilitate their organization and the implementation of the necessary changes.

After making modifications based on the suggestions of the experts, the final questionnaire was created, seeking to ensure that the questionnaire was designed under an organized and clear approach. To this end, the following aspects were taken into account:

  1. An introduction containing the objective of the questionnaire was included. 

  2. The items were organized by dimensions to facilitate fluid understanding and response. 

  3. The questions were organized by sections and numbered for clear identification and easy follow-up. 

The questionnaire was administered to a sample of the population of interest consisting of 471 university students. The participants responded online, so the questionnaire was available 24 hours a day. The students’ responses were recorded in a spreadsheet to facilitate subsequent data analysis.

2.4. Structure of the Instrument

The questionnaire was structured following the guidelines of Azofra (2000), beginning with an introduction explaining the objectives, followed by socio-demographic and general questions, and culminating with the most relevant questions in the main body. The items are organized by themes, maintaining a fluid order and numbered for clear identification.

The questionnaire, which is divided into 3 blocks, consists of a total of 49 items, plus 5 referring to assigned or attributive variables:

  • The first block is made up of 5 items collecting information on age, gender, the university where they study, the degree they are pursuing, and their year of study. 

  • The second block makes up the central body of the questionnaire and contains 42 items on an ordinal scale referring to the first five dimensions of the study, with categories established on a 5-point Likert scale according to the respondent’s degree of agreement with each statement (from 1 = Totally disagree to 5 = Totally agree). 

  • The third block includes 7 items referring to the sixth dimension of the study, “Types of Generative Artificial Intelligence applications”. In these items, students can select among different types of GAI applications, grouped by function. 

2.5. Dimensions

For the elaboration of the questionnaire, 6 dimensions with a total of 47 variables were taken into account. From these, the different test items were determined.

The dimensions and variables were selected after a rigorous analysis and review of various studies (García‑Peñalvo et al., 2024; Andión-Gamboa & Cárdenas-Presa, 2023; Sánchez-Vera, 2023; Vera, 2023; UNESCO, 2022; Dempere, Modugu, Hesham & Ramasamy, 2023; Aparicio-Gómez, 2023; Cotrina‑Aliaga, Vera-Flores, Ortiz-Cotrina & Sosa-Celi, 2021), and subsequently adapted to the university context for practical application.

The following table shows the dimensions with their definition, the variables and the questionnaire items designed for each dimension and variable.

1. Perception of knowledge and attitudes about the GAI. How you understand and respond to generative artificial intelligence (GAI) in your educational experience and in your preparation for future careers.
Variables (item): Familiarity with the GAI concept (8); Knowledge about how the GAI works (9); Competencies to use GAI tools in academic tasks (10); Improving learning through GAI (11); Employment impact of knowing how to use the GAI (12); Importance of the GAI for professional development (13); Impact of GAI on learning (14); Influence of the GAI on university education (15); Reliability of GAI information (16); Interest in GAI (17); Pedagogical implications of the GAI (18); Participation in GAI training (19).

2. GAI benefits and opportunities. How you perceive the positive aspects and opportunities that GAI can offer in your educational experience and in your preparation for your future career.
Variables (item): Creative potential of the GAI (20); Academic efficiency with the GAI (21); Agility and efficiency in academic tasks with GAI (22); Effective personalization of learning with GAI (23); Efficient design of educational materials with GAI (24); Access to relevant information with GAI (25).

3. Challenges of the GAI. How you perceive the obstacles and difficulties associated with the use of GAI in your educational experience and in your preparation for your future professional career.
Variables (item): Technological dependence with GAI (26); Impact on learning autonomy with GAI (27); Influence of GAI on formative interaction (28); Risk of superficial learning with GAI (29); GAI interference with critical thinking and creativity (30); Resistance to the educational use of GAI (31); Teacher training in GAI (32).

4. Use of the GAI. How you perceive and experience the educational use of GAI in your learning process.
Variables (item): Use of GAI tools (33); Personalization of information with GAI (34); Immediate feedback from the GAI (35); Time optimization with GAI (36); Use of GAI in the production of multimedia content (37); Use of GAI in information synthesis (38); Use of GAI in text paraphrasing (39); Improved quality of productions with GAI (40).

5. Ethical aspects related to the GAI. How you perceive and understand the ethical implications associated with the use of GAI in the higher education context, including data privacy.
Variables (item): GAI systems security concerns (41); Concerns about biased information in GAI (42); Concern about perpetuating stereotypes in GAI (43); Transparency in GAI-based tools (44); Regulation of GAI in university education (45); Accountability and oversight in GAI use (46); Concern over the negative impact of GAI (47).

6. Type of GAI applications. How you perceive and use different types of GAI applications in the context of your higher education, covering a wide range of possibilities including text, image, sound, video, and other formats.
Variables (item): Text-related GAI applications (48); Image-related GAI applications (49); Audio- and sound-related GAI applications (50); Video-related GAI applications (51); Data-analysis-related GAI applications (52); Translation-related GAI applications (53); Other GAI applications used (54).

Table 1. Dimensions, variables and items of the questionnaire

2.6. Reliability and Validity

Reliability and validity are essential quality criteria when the information collected comes from quantitative data. These concepts refer to the properties of the measure rather than the instrument itself (Tójar, 2001). Both are fundamental to ensure that a questionnaire is suitable for assessing the constructs of interest, as they verify both the accuracy and stability of the measurements and the correspondence between the items and the theoretical foundations that support them.

Validity was assessed using two main approaches:

  • Content validity: this type of validity indicates the extent to which an instrument adequately reflects the theoretical domain of the variable to be measured (Tójar, 2001). To ensure this, the questionnaire was reviewed by 10 expert professionals, who carefully evaluated each section and item in relation to the proposed objectives and dimensions. This process helped determine the congruence of the questions with the defined constructs and ensured the relevance of the included items. To systematize the experts’ evaluation, a specific document was prepared and is included in the appendices, compiling their assessments and suggestions. 

  • Construct validity: this refers to the ability of an instrument to measure a theoretical construct and to the way in which the items relate to the latent variables under study (Tójar, 2001). Construct validity was first analyzed using a principal component factor analysis, a multivariate statistical technique that allowed the identification of the underlying dimensions in the relationships among variables. The objective was to determine the most adequate factorial structure of the questionnaire based on the factor loadings of each item. Secondly, and in order to reinforce construct validity, a confirmatory factor analysis was performed. This analysis verified whether the proposed structure adequately fit the empirical data, evaluating goodness-of-fit through indices such as Chi-square, RMSEA, CFI, TLI, and SRMR, thus providing additional evidence regarding the adequacy of the factorial structure of the instrument. 

Reliability refers to the consistency, stability, and accuracy of the measurements obtained. To evaluate this criterion, Cronbach’s α coefficient was used, widely employed to analyze the internal consistency of items. The obtained value indicated an excellent level of internal consistency, suggesting that the items are closely related and provide reliable measurements. Likewise, to obtain a complementary and more precise estimate, McDonald’s ω coefficient was calculated, recommended for Likert-type instruments as it considers the communalities derived from factor loadings (Turra, Villagra, Mellado & Aravena, 2022). The values obtained from both coefficients (Cronbach’s α and McDonald’s ω) consistently indicated high reliability, reinforcing the psychometric robustness of the questionnaire.
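The two coefficients can be illustrated with a minimal computational sketch. The data here are synthetic, and for ω the standardized loadings reported in Table 4 are reused purely as a worked example; this shows the standard formulas, not the study's actual SPSS/R procedure:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def mcdonald_omega(loadings: np.ndarray) -> float:
    """McDonald's omega from standardized one-factor loadings:
    (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    s = loadings.sum()
    uniquenesses = (1.0 - loadings**2).sum()
    return s**2 / (s**2 + uniquenesses)

# synthetic Likert responses (471 respondents, 6 items), illustrative only
rng = np.random.default_rng(0)
latent = rng.normal(size=(471, 1))
items = np.clip(np.rint(3 + latent + rng.normal(scale=0.8, size=(471, 6))), 1, 5)
alpha = cronbach_alpha(items)

# omega computed from the Table 4 loadings, as a worked example
omega = mcdonald_omega(np.array([0.632, 0.757, 0.733, 0.734, 0.728, 0.707]))
```

Both functions follow the textbook definitions; ω additionally requires a one-factor loading solution, which is why it is paired here with a factor analysis rather than raw scores.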

Taken together, the validity and reliability analyses provide robust evidence that the EPGAI-ES questionnaire is a valid and reliable measure for assessing perceptions, attitudes, benefits, challenges, and uses of generative artificial intelligence in higher education.

2.7. Data Analysis

Descriptive analysis and principal component factor analysis techniques were used for data analysis. Descriptive analysis involved the study of distributions to explore the data, including the calculation of central tendency and frequency statistics.

Likewise, a factor analysis was performed using the principal components method to obtain information on the validity of the questionnaire and the calculation of Cronbach’s α and McDonald’s ω to evaluate the internal consistency of the set of test items.

In addition, a confirmatory factor analysis was carried out to validate the proposed theoretical structure of the questionnaire, evaluating whether the observed items were adequately grouped into the corresponding latent factors. This analysis was performed using the lavaan package (v. 0.6-19) in R, and allowed estimating the parameters of the questionnaire model, evaluating goodness-of-fit through the Satorra-Bentler scaled Chi-square (S-B χ²), RMSEA, CFI, TLI, and SRMR indices. Through confirmatory factor analysis, we sought to verify the adequacy of the data to the factor structure obtained in the exploratory factor analysis (Rodríguez-Armero, Chorot & Sandín, 2023).
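As a worked illustration of how such indices relate to the underlying chi-square statistics, the following sketch applies the standard RMSEA, CFI, and TLI formulas to hypothetical model and baseline values (these numbers are assumptions for illustration, not the actual lavaan output of this study):

```python
import math

def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
    """RMSEA, CFI and TLI from model (m) and baseline (b) chi-square values."""
    # RMSEA: per-degree-of-freedom misfit, scaled by sample size
    rmsea = math.sqrt(max(chi2_m - df_m, 0.0) / (df_m * (n - 1)))
    # CFI: improvement of the model over the baseline (independence) model
    cfi = 1.0 - max(chi2_m - df_m, 0.0) / max(chi2_b - df_b, chi2_m - df_m, 1e-12)
    # TLI: same idea, penalizing model complexity via chi2/df ratios
    tli = ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1.0)
    return rmsea, cfi, tli

# hypothetical chi-square values, for illustration only
rmsea, cfi, tli = fit_indices(chi2_m=1500.0, df_m=800, chi2_b=10000.0, df_b=861, n=471)
```

Conventional cut-offs (RMSEA below about .06, CFI and TLI above about .90–.95) are what these formulas are compared against when judging model fit.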

The analysis was carried out using the SPSS package (v. 25) for the descriptive analysis and the calculation of internal consistency, while the confirmatory factor analysis was performed in R (v. 4.4.0).

3. Results

Cronbach’s alpha reliability coefficient yielded a value of 0.916 for the 42 variables analyzed; the variables related to GAI applications were excluded from this calculation, while the variables “Training in digital competencies” and “Technological knowledge” were included. This value indicates excellent internal consistency. Complementary analyses showed that internal consistency did not improve significantly when individual items were removed from the questionnaire. To obtain a more robust estimate of reliability, McDonald’s Omega coefficient was calculated using the Maximum Likelihood (ML) estimator, yielding a value of 0.907. This value also reflects excellent internal consistency, since McDonald’s Omega is a robust alternative to Cronbach’s alpha that takes into account the structure of the items and the nature of the relationships among them. Both coefficients suggest that the questionnaire has solid reliability for measuring the stated dimensions (Colorado, Romero, Salazar, Cabrera & Castillo, 2024). These results reinforce the validity and reliability of the instrument, supporting its use in future research on digital competencies and technological knowledge in the context of Generative Artificial Intelligence (GAI).

Before performing the principal component factor analysis, data were obtained on two measures related to the adequacy of the criteria for applying the principal component analysis:

  1. The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy, which indicates whether the observed correlations among items make the data suitable for factor analysis. 

  2. Bartlett’s test of sphericity, which tests the null hypothesis that the correlation matrix is equal to the identity matrix. 
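Both diagnostics can be sketched from a correlation matrix using their standard formulas. The data below are synthetic and purely illustrative of the computation, not a reproduction of the study's SPSS output:

```python
import numpy as np

def kmo(R: np.ndarray) -> float:
    """Kaiser-Meyer-Olkin index from a correlation matrix R."""
    Rinv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(Rinv), np.diag(Rinv)))
    partial = -Rinv / d                 # anti-image (partial) correlations
    np.fill_diagonal(partial, 0.0)
    r2 = R - np.eye(R.shape[0])         # off-diagonal zero-order correlations
    return (r2**2).sum() / ((r2**2).sum() + (partial**2).sum())

def bartlett_sphericity(R: np.ndarray, n: int):
    """Bartlett's chi-square statistic and degrees of freedom."""
    p = R.shape[0]
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return chi2, df

# synthetic one-factor data (471 respondents, 6 items), illustrative only
rng = np.random.default_rng(42)
latent = rng.normal(size=(471, 1))
X = latent + rng.normal(scale=0.8, size=(471, 6))
R = np.corrcoef(X, rowvar=False)
kmo_val = kmo(R)
chi2_stat, df = bartlett_sphericity(R, n=471)
```

A KMO value near 1 means the partial correlations are small relative to the zero-order ones, and a large Bartlett chi-square rejects the identity-matrix hypothesis; both conditions favor factoring the data.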

 

Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy: 0.914
Bartlett’s test of sphericity: approximate Chi-square = 10452.928; degrees of freedom = 861; significance < 0.005

Table 2. Values of the KMO indicators and Bartlett’s test

The results obtained indicate that:

  1. The Kaiser-Meyer-Olkin (KMO) sampling adequacy index, which compares the magnitudes of the correlation coefficients observed in the correlation matrix with those of the anti-image matrix, is 0.914. This high value indicates strong intercorrelations among the variables, showing that factor analysis is appropriate and useful for this study. 

  2. According to Bartlett’s test of sphericity, the null hypothesis of sphericity would not be rejected at significance levels above 0.05. Since the test was significant well below this threshold, the null hypothesis can be rejected and factor analysis is considered adequate for these variables. When performing the principal component analysis, 8 principal components (factors) were identified. These components, with eigenvalues λ > 1, explain 62.75% of the total variance. 

 

Component   Initial eigenvalues                      Rotated extraction sums of squared loadings
            Total    % of Variance  Cumulative %    Total    % of Variance  Cumulative %
1           10.805   25.727         25.727          10.805   25.727         25.727
2            4.879   11.616         37.343           4.879   11.616         37.343
3            2.641    6.288         43.631           2.641    6.288         43.631
4            2.181    5.192         48.823           2.181    5.192         48.823
5            2.000    4.762         53.585           2.000    4.762         53.585
6            1.407    3.350         56.935           1.407    3.350         56.935
7            1.332    3.171         60.106           1.332    3.171         60.106
8            1.110    2.644         62.750           1.110    2.644         62.750

Table 3. Results of principal component analysis

For the extraction of these factors, the Principal Component Analysis method was used with Varimax rotation and Kaiser normalization, which converged in 8 iterations.

Factor loadings with absolute values below 0.449 were suppressed. In this way, each of the following factors was established and defined.
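For illustration, the extraction-plus-rotation procedure can be sketched as follows: principal component loadings are obtained from the eigendecomposition of the correlation matrix, components with eigenvalues above 1 are retained (Kaiser criterion), and the loadings are rotated with a standard varimax algorithm. The data are synthetic two-factor responses, not the study's actual output:

```python
import numpy as np

def varimax(L: np.ndarray, max_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
    """Varimax rotation of a loading matrix L (p items x k factors)."""
    p, k = L.shape
    R = np.eye(k)
    prev = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr**3 - (1.0 / p) * Lr @ np.diag((Lr**2).sum(axis=0)))
        )
        R = u @ vt
        if s.sum() - prev < tol:
            break
        prev = s.sum()
    return L @ R

# synthetic two-factor data (471 respondents, 6 items), illustrative only
rng = np.random.default_rng(1)
scores = rng.normal(size=(471, 2))
W = np.array([[0.8, 0.0], [0.7, 0.0], [0.75, 0.0],
              [0.0, 0.8], [0.0, 0.7], [0.0, 0.75]])
X = scores @ W.T + rng.normal(scale=0.7, size=(471, 6))
Rm = np.corrcoef(X, rowvar=False)

# principal components: eigendecomposition of the correlation matrix
vals, vecs = np.linalg.eigh(Rm)
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]
keep = vals > 1.0                       # Kaiser criterion (eigenvalue > 1)
loadings = vecs[:, keep] * np.sqrt(vals[keep])
rotated = varimax(loadings)
```

Because the varimax rotation is orthogonal, each item's communality (its row sum of squared loadings) is unchanged; rotation only redistributes loadings across factors to simplify interpretation.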

To facilitate the understanding of the instrument’s structure and the psychometric results, Figure 1 has been included, providing a schematic representation of the EPGAI-ES questionnaire. It shows the six initial dimensions and their correspondence with the eight-factor structure obtained through the exploratory and confirmatory analyses. This visual representation offers a clear and concise overview of the instrument’s organization and its underlying factors, supporting the interpretation of the tables presented below.

Tables 4 to 11 show the result of the regrouping of these eight factors with the factor loadings of each of the items that comprise them.

Table 4 includes the items, with their factor loadings, for the factor labeled “Benefits of GAI on learning”. It highlights how GAI can enhance various aspects of student learning and significantly improve the educational experience: how GAI can help students develop their creativity (item 20), perform academic tasks in an innovative way (item 21), make the completion of academic tasks more agile and efficient (item 22), personalize learning effectively (item 23), facilitate the quick and efficient design of educational materials (item 24), and help them access information relevant to their learning process (item 25).

 

Figure 1. EPGAI-ES Instrument: Initial dimensions and factor structure

Factor 1: Benefits of GAI on learning (factor loadings in parentheses)

Item 20. GAI can help me develop my creativity. (0.632)
Item 21. Using the GAI allows me to perform academic tasks in an innovative way. (0.757)
Item 22. GAI can help me to be agile and efficient in performing academic tasks. (0.733)
Item 23. GAI can help personalize my learning effectively. (0.734)
Item 24. GAI can facilitate the rapid and efficient design of educational materials. (0.728)
Item 25. The GAI can help me access relevant information in my personal learning construction process. (0.707)
Table 4. Factor 1 Benefits of GAI on learning

Table 5 shows the items that saturate factor 2, “Perception of the importance of GAI”. This factor reflects how students perceive the relevance and impact of GAI on their education and professional development: how it can contribute to improving the quality of their learning (item 11), how knowledge of these technologies will provide them with more job opportunities (item 12), how it will influence their professional development (item 13), how it can transform their educational experience (item 14), and how influential it currently is in university education (item 15). It also covers the interest in learning more about GAI (item 17) and the perception that the pedagogical implications of its use should be addressed in the classroom (item 18).

Factor 2: Perception of the importance of the GAI (factor loadings in parentheses)

Item 11. GAI is a technology that can improve the quality of my own learning. (0.598)
Item 12. Knowing how to use the GAI will allow me to have more job opportunities. (0.755)
Item 13. Knowing how to use the GAI is important for my professional development. (0.797)
Item 14. GAI is a technology that can change my learning experience. (0.730)
Item 15. The GAI is currently influential in university education. (0.481)
Item 17. I am interested in learning more about the GAI. (0.636)
Item 18. The pedagogical implications of the use of GAI should be addressed in the classroom. (0.647)
Table 5. Factor 2 Perception of the importance of the GAI

Factor 3, “Perceived risks of GAI”, appears in Table 6 with the factor loadings of each of the items that comprise it. This factor describes the concerns that students may have regarding the possible negative impacts of the use of GAI on their learning process. It takes into account their concern about the technological dependence that the use of GAI may generate (item 26), the possible negative effect on their ability to solve problems autonomously (item 27), and the influence of GAI on the degree of interaction in the formative process (item 28). In addition, it considers the perception that excessive use of GAI can lead to superficial learning (item 29) and can interfere with the development of critical thinking and creativity (item 30).

Factor 3: Perceived GAI risks (factor loadings in parentheses)

Item 26. GAI generates dependence on technology. (0.725)
Item 27. Excessive use of GAI can affect my ability to solve problems and tasks autonomously. (0.765)
Item 28. GAI can influence the degree of interaction in the formative process. (0.777)
Item 29. GAI can lead to superficial learning. (0.786)
Item 30. GAI can interfere with the development of critical thinking and creativity. (0.797)
Table 6. Factor 3 Perceived GAI risks

The following table shows the items, together with their factor loadings, that make up the factor “Practical Uses of GAI”. This factor captures the use of GAI to carry out various tasks and activities related to the students’ studies and academic work. It includes items concerning the use of GAI to produce content such as texts, infographics or images (item 37), to synthesize information (item 38), to paraphrase or reformulate texts (item 39) in order to optimize the creation of academic content, and to improve the quality of productions already made, such as texts or images (item 40).

Factor 4: Practical uses of GAI

| Item | Statement | Factor loading |
|------|-----------|----------------|
| I33 | I usually use GAI tools to carry out my activities and tasks at the University. | 0.689 |
| I34 | GAI can provide information and resources tailored to my interests. | 0.523 |
| I35 | GAI provides immediate feedback to improve my learning. | 0.566 |
| I36 | GAI can help me optimize time and allow me to focus on more meaningful aspects of learning. | 0.463 |
| I37 | One of the uses I make of GAI has to do with the production of content (textual, infographic, image, etc.). | 0.594 |
| I38 | One of the uses I make of GAI has to do with synthesizing information. | 0.760 |
| I39 | One of the uses I make of GAI has to do with paraphrasing or rephrasing the wording of texts. | 0.772 |
| I40 | One of the uses I make of GAI has to do with improving the quality of a production (text, image, etc.) that has already been produced. | 0.687 |

Table 7. Factor 4: Practical uses of GAI

The factor “Knowledge and Skills in GAI”, shown in the following table with its corresponding items and factor loadings, reflects the students’ preparation and skills in relation to the use of GAI in their studies and academic activities. It includes items related to having received training aimed at improving their digital competencies (item 6), to the adequacy of their technological skills for dealing with digital applications (item 7), to the level of familiarity with the concept of GAI (item 8) and a solid knowledge of how this technology works (item 9), and to the competencies to use GAI tools in their academic tasks (item 10).

Factor 5: GAI knowledge and competencies

| Item | Statement | Factor loading |
|------|-----------|----------------|
| I6 | I have received some training aimed at improving my digital skills. | 0.455 |
| I7 | My technological skills are adequate to deal effectively with digital applications in my studies. | 0.571 |
| I8 | I am familiar with the concept of GAI. | 0.821 |
| I9 | I am very knowledgeable about how GAI works. | 0.840 |
| I10 | I am competent to use GAI tools in my academic tasks. | 0.780 |

Table 8. Factor 5: Knowledge and skills in GAI

The following table shows the factor “Concerns about GAI” that reflects students’ concerns about several critical aspects of the use of GAI in the educational context, such as the reliability of the information produced by GAI (item 16), the security of personal data (item 41), the possibility of generating biased information (item 42), and the potential for perpetuating stereotypes and social prejudices (item 43).

Factor 6: Concerns about GAI

| Item | Statement | Factor loading |
|------|-----------|----------------|
| I16 | I am concerned that the information provided by GAI is not reliable. | 0.449 |
| I41 | I am concerned about the security of my personal data in GAI systems. | 0.787 |
| I42 | I am concerned about the biased information that GAI may provide. | 0.812 |
| I43 | I am concerned that GAI may perpetuate existing stereotypes and prejudices in society. | 0.826 |

Table 9. Factor 6: Concerns about GAI

The factor “Regulation and Ethics in GAI”, which appears in the following table, emphasizes the need to establish regulations and ethical supervision to guide the responsible use of GAI in the educational environment. Thus, this factor includes the students’ perception of the existing resistance to the use of GAI in education (item 31), the need to regulate its use in university environments (item 45), the concern for establishing accountability and oversight mechanisms to prevent abuses and ensure ethical practices in the use of GAI (item 46) and the concern about the possible negative impact of GAI on socioeconomic and cultural aspects, such as loss of jobs, digital exclusion and homogenization of thinking (item 47).

Factor 7: Regulation and ethics in GAI

| Item | Statement | Factor loading |
|------|-----------|----------------|
| I31 | There is resistance to the use of GAI in the educational context. | 0.503 |
| I45 | The use of GAI in university education should be regulated. | 0.637 |
| I46 | Accountability and oversight mechanisms must be established to prevent abuses and ensure ethical practices in the use of GAI. | 0.729 |
| I47 | I am concerned about the impact GAI may have on job losses, digital exclusion or homogenization of thinking. | 0.505 |

Table 10. Factor 7: Regulation and ethics in GAI

The following table includes the factor “Trust and Transparency in GAI”, which addresses the relationship between student confidence in GAI and the transparency of these technologies, as well as the training they have received. This factor highlights participation in training activities related to GAI (item 19) and the perception of teacher training in the use of these technologies (item 32) as key elements for increasing student confidence. In addition, the importance of making GAI-based tools transparent in their processes is raised (item 44), since the clarity and comprehensibility of these processes help to generate greater confidence in their use.

Factor 8: Trust and transparency in GAI

| Item | Statement | Factor loading |
|------|-----------|----------------|
| I19 | I participate or have participated in training activities on GAI. | 0.579 |
| I32 | The teaching staff is trained in GAI in the university context to favor the educational experience of the students. | 0.809 |
| I44 | GAI-based tools are transparent in their processes. | 0.567 |

Table 11. Factor 8: Trust and transparency in GAI

The following table shows the analysis of the internal consistency (reliability) of the total scale and of the factors resulting from the factor analysis.

Internal consistency of the scale factors

| # | Factor | Cronbach’s α |
|---|--------|--------------|
| 1 | Benefits of GAI in learning | 0.892 |
| 2 | Perception of the importance of GAI | 0.882 |
| 3 | Perceived risks of GAI | 0.864 |
| 4 | Practical uses of GAI | 0.871 |
| 5 | Knowledge and skills in GAI | 0.783 |
| 6 | Concerns about GAI | 0.792 |
| 7 | Regulation and ethics in GAI | 0.684 |
| 8 | Trust and transparency in GAI | 0.509 |
| — | Full scale | 0.916 |

Table 12. Internal consistency analysis of the scale factors

Most of the factors show very good to excellent internal consistency, with Cronbach’s alpha values above 0.7, indicating that the items within each factor are consistent and reliable. The factors “Regulation and Ethics in GAI” (α = 0.684) and, especially, “Trust and Transparency in GAI” (α = 0.509) show lower internal consistency, suggesting the need to revise their items to improve reliability.

The entire scale has excellent internal consistency, indicating that all items, when considered together, are very reliable in measuring the constructs related to GAI.
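The internal-consistency coefficient reported throughout this section is Cronbach’s alpha, α = k/(k−1) · (1 − Σ s²ᵢ / s²ₜ), where k is the number of items, s²ᵢ the variance of each item and s²ₜ the variance of the participants’ total scores. A minimal, dependency-free sketch (the toy data are invented for illustration, not taken from the study) is:

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: list of k lists, one per questionnaire item,
    each holding the responses of the same n participants."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]  # per-participant total score
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Toy data: three items answered by five respondents on a 1-5 Likert scale.
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
print(round(cronbach_alpha(items), 3))  # → 0.864
```

When items covary strongly (respondents answer consistently across them), the item variances are small relative to the total-score variance and alpha approaches 1, which is why values above 0.7 are read as adequate consistency.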

To assess the validity and reliability of the questionnaire used in this study, a confirmatory factor analysis was performed using R software. The purpose of this analysis was to confirm the theoretical structure of the instrument and to verify its fit to the data collected, ensuring that the latent variables identified corresponded to the items of the questionnaire. Through the confirmatory factor analysis, we sought to verify the adequacy of the data to the 8-factor structure obtained in the exploratory factor analysis (Rodríguez-Armero et al., 2023).

The first step of the analysis consisted of preparing and organizing the data. The data set, structured in a “data frame” in R, contained the responses of the 471 participants, where each row represented an individual and each column corresponded to an item of the questionnaire. A thorough review was carried out to detect and address missing values or inconsistencies in the data prior to analysis.
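The screening step described above can be sketched in a few lines. The matrix below is hypothetical (the study’s 471 × 47 response matrix is not reproduced in the article); `None` stands for a missing answer:

```python
# Hypothetical response matrix: rows = participants, columns = items.
responses = [
    [4, 5, 3],
    [2, None, 4],
    [5, 4, 4],
]

# Missing values per item (column) and rows with complete data.
missing_per_item = [
    sum(row[j] is None for row in responses) for j in range(len(responses[0]))
]
complete_rows = [row for row in responses if None not in row]
print(missing_per_item, len(complete_rows))  # → [0, 1, 0] 2
```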

A multidimensional factorial model was specified, given that the questionnaire measured several dimensions related to the constructs analyzed. The model was defined using the lavaan library, widely used for estimating structural equation models. The maximum likelihood (ML) method and the NLMINB optimization procedure were used to estimate the parameters.
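The article does not reproduce the lavaan model syntax. As a sketch, the eight-factor measurement model can be declared with lavaan’s `factor =~ item1 + item2 + …` syntax; the item groupings below follow Tables 14–21, while the short factor names are abbreviations introduced here for illustration. The snippet builds the syntax string in Python; in R the same string would be passed to `lavaan::cfa()`:

```python
# Item-to-factor assignments as reported in Tables 14-21 of the article.
factors = {
    "benefits":   ["I20", "I21", "I22", "I23", "I24", "I25"],
    "importance": ["I11", "I12", "I13", "I14", "I15", "I17", "I18"],
    "risks":      ["I26", "I27", "I28", "I29", "I30"],
    "uses":       ["I33", "I34", "I35", "I36", "I37", "I38", "I39", "I40"],
    "knowledge":  ["I6", "I7", "I8", "I9", "I10"],
    "concerns":   ["I16", "I41", "I42", "I43"],
    "ethics":     ["I31", "I45", "I46", "I47"],
    "trust":      ["I19", "I32", "I44"],
}

# One "factor =~ items" line per latent variable, lavaan-style.
model_syntax = "\n".join(
    f"{name} =~ " + " + ".join(items) for name, items in factors.items()
)
print(model_syntax)
```

With ML estimation, each factor’s first item is fixed to a loading of 1.000 to set the latent scale, which is why the first indicator of every dimension in Tables 14–21 has an unstandardized estimate of exactly 1.000.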

Once the model was specified, its fit was evaluated using several standard fit metrics:

  • Chi-square (χ²): Evaluated the discrepancy between the model and the data. 

  • CFI (Comparative Fit Index) and TLI (Tucker-Lewis Index): Evaluated the comparative fit of the model. 

  • RMSEA (Root Mean Square Error of Approximation): Indicated the approximation error of the model. 

  • SRMR (Standardized Root Mean Residual): Indicated the standard error of the residuals. 

Special attention was paid to the fit indices, considering that a good fit would be achieved with CFI and TLI values higher than 0.90, RMSEA lower than 0.08 and SRMR lower than 0.08.
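The RMSEA can be recomputed from the chi-square, the degrees of freedom, and the sample size via its standard point-estimate formula, RMSEA = √(max(χ² − df, 0) / (df · (N − 1))). A quick check against the values the study reports:

```python
from math import sqrt

# CFA quantities reported in Table 13: chi-square, degrees of freedom, sample size.
chi2, df, n = 2219.335, 791, 471

# Standard RMSEA point estimate.
rmsea = sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
print(round(rmsea, 3))  # → 0.062, matching the value reported in Table 13
```

This also illustrates why the chi-square test itself is sensitive to sample size: with N = 471, even modest discrepancies accumulate into a significant χ², while the RMSEA divides that discrepancy by df · (N − 1) and stays in the acceptable range.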

The results obtained are presented in Table 13, together with the goodness-of-fit criteria and the corresponding interpretation.

According to the literature (Hair, Anderson, Tatham & Black, 1999; Byrne, 2016; Moral-de-la-Rubia, 2016), these values suggest a moderate fit of the model to the observed data. Although the CFI and TLI do not reach the 0.90 threshold recommended for a good fit, the RMSEA and SRMR are within acceptable ranges (≤ 0.08), indicating an adequate fit (Escobedo, Hernández, Estebané, & Martínez, 2016).

Once the model fit was evaluated, the next step consisted of interpreting the coefficients of the indicators of each latent factor.

The parameter estimates obtained for each of the factors are presented below, followed by their interpretation. In each of the tables, the relationship between each of the items of the questionnaire and their impact on each dimension can be observed.

Model fit metrics and their interpretation

| Metric | Value | Good-fit criterion | Interpretation |
|--------|-------|--------------------|----------------|
| Chi-square (χ²) | 2219.335 | Not applicable | A high value indicates that the model does not fit perfectly, although this metric is sensitive to large samples. |
| Degrees of freedom | 791 | Not applicable | Used to interpret the chi-square; higher values may indicate more complex models. |
| P-value (chi-square) | 0.000 | > 0.05 | A significant value suggests that the model differs from the observed data, although the chi-square is sensitive to sample size. |
| Comparative Fit Index (CFI) | 0.857 | ≥ 0.90 (good) / ≥ 0.95 (excellent) | Moderate fit; does not meet the recommended threshold, but still reasonable. |
| Tucker-Lewis Index (TLI) | 0.844 | ≥ 0.90 (good) / ≥ 0.95 (excellent) | Moderate fit, below the optimum level. |
| Root Mean Square Error of Approximation (RMSEA) | 0.062 | ≤ 0.08 (acceptable) / ≤ 0.05 (good) | Good fit, as it is within the acceptable range. |
| Confidence interval of RMSEA (90%) | [0.059, 0.065] | Within the range ≤ 0.08 | Reaffirms that the RMSEA indicates an adequate fit. |
| P-value for RMSEA ≤ 0.050 | 0.000 | > 0.05 indicates good fit | Indicates that the RMSEA is probably not less than or equal to 0.050, suggesting a moderate fit. |
| P-value for RMSEA ≥ 0.080 | 0.000 | < 0.05 indicates good fit | Indicates that the RMSEA is unlikely to be greater than or equal to 0.080, confirming that the fit is adequate. |
| Standardized Root Mean Square Residual (SRMR) | 0.078 | ≤ 0.08 (good) / ≤ 0.05 (excellent) | Within the acceptable threshold, suggesting an adequate fit. |

Table 13. Model fit metrics and their interpretation

Learning Benefits

| Indicator | Estimate | Standard error | Z-value | p-value | Standard estimate |
|-----------|----------|----------------|---------|---------|-------------------|
| I20 | 1.000 | 0.751 | 0.626 | 0.000 | 0.909 |
| I21 | 1.209 | 0.082 | 14.717 | 0.000 | 0.840 |
| I22 | 1.145 | 0.080 | 14.395 | 0.000 | 0.814 |
| I23 | 1.166 | 0.081 | 14.371 | 0.000 | 0.812 |
| I24 | 0.960 | 0.072 | 13.340 | 0.000 | 0.734 |
| I25 | 1.022 | 0.074 | 13.894 | 0.000 | 0.775 |

Table 14. Indicators related to the learning benefits dimension

The results in Table 14 indicate that students highly positively perceive the learning benefits that GAI can provide, with p-values of 0.000 in all items, which shows that the perceptions are statistically significant. GAI is seen as a tool that favors the development of creativity (with a moderate Z-value), boosts innovation in academic tasks, improves efficiency and agility in the completion of tasks, facilitates the personalization of learning, and helps in the fast and efficient design of educational materials. In addition, students consider that the GAI facilitates access to relevant information for their learning process.

Perception of importance

| Indicator | Estimate | Standard error | Z-value | p-value | Standard estimate |
|-----------|----------|----------------|---------|---------|-------------------|
| I11 | 1.000 | 0.757 | 0.720 | 0.000 | 0.757 |
| I12 | 1.154 | 0.070 | 16.559 | 0.000 | 0.790 |
| I13 | 1.198 | 0.070 | 17.198 | 0.000 | 0.821 |
| I14 | 1.041 | 0.060 | 17.305 | 0.000 | 0.826 |
| I15 | 0.743 | 0.072 | 10.244 | 0.000 | 0.491 |
| I17 | 0.977 | 0.067 | 14.682 | 0.000 | 0.701 |
| I18 | 0.997 | 0.066 | 15.170 | 0.000 | 0.725 |

Table 15. Indicators related to the dimension of perception of importance

The results in Table 15 show that GAI is perceived as a technology that improves the quality of learning (Item I11, with a value of standard estimate of 0.757), and is considered crucial for professional development and job opportunities (Items I12 with 0.790 and I13 with 0.821). In addition, its ability to transform the learning experience is recognized (I14 with a value of 0.826) and there is widespread interest in learning more about GAI (I17 with 0.701). Although the influence of GAI in university education (I15 with a value of 0.491) has a lower relationship compared to other aspects, it is emphasized that it should be addressed pedagogically in the classroom (I18 with 0.725).

Perceived risks

| Indicator | Estimate | Standard error | Z-value | p-value | Standard estimate |
|-----------|----------|----------------|---------|---------|-------------------|
| I26 | 1.000 | 0.727 | 0.658 | 0.000 | 0.727 |
| I27 | 1.170 | 0.082 | 14.191 | 0.000 | 0.779 |
| I28 | 1.060 | 0.074 | 14.244 | 0.000 | 0.783 |
| I29 | 1.127 | 0.082 | 13.733 | 0.000 | 0.747 |
| I30 | 1.191 | 0.083 | 14.258 | 0.000 | 0.784 |

Table 16. Indicators related to the perceived risk dimension

The results presented in Table 16 show that students perceive various risks. Among those that stand out is the technological dependence caused by GAI (Item I26), with a significant loading of 0.727. Likewise, it is observed that GAI can affect students’ ability to solve problems autonomously (Item I27, with a standard estimate of 0.779) and modify the level of interaction in the educational process (Item I28, with 0.783). Other relevant risks include the perception that GAI could favor superficial learning (Item I29, with 0.747) and negatively affect the development of critical thinking and creativity (Item I30, with 0.784). These results reflect a widespread concern among students about the adverse effects of GAI on their autonomy, depth of learning, and critical analysis skills.

Practical uses

| Indicator | Estimate | Standard error | Z-value | p-value | Standard estimate |
|-----------|----------|----------------|---------|---------|-------------------|
| I33 | 1.000 | 0.836 | 0.701 | 0.000 | 0.836 |
| I34 | 0.835 | 0.056 | 15.010 | 0.000 | 0.749 |
| I35 | 0.943 | 0.061 | 15.574 | 0.000 | 0.780 |
| I36 | 0.892 | 0.061 | 14.653 | 0.000 | 0.730 |
| I37 | 0.954 | 0.074 | 12.852 | 0.000 | 0.636 |
| I38 | 1.092 | 0.075 | 14.482 | 0.000 | 0.721 |
| I39 | 0.981 | 0.081 | 12.058 | 0.000 | 0.595 |
| I40 | 0.859 | 0.078 | 11.052 | 0.000 | 0.544 |

Table 17. Indicators related to the practical uses dimension

The results presented in Table 17 show how students employ various Generative Artificial Intelligence (GAI) tools in their academic activities. Item I33, which refers to the use of GAI tools to perform university tasks, stands out with a significant loading of 0.836, suggesting that this type of use is fundamental for students. Other important aspects include the ability of GAI to provide information and resources tailored to student interests (Item I34, with a standard estimate of 0.749), and its role in providing immediate feedback for learning improvement (Item I35, with a value of 0.780). In addition, GAI is perceived as an effective tool for optimizing time and allowing students to focus on more meaningful aspects of their learning (Item I36, with a value of 0.730), as well as in the production of content, whether textual, infographic, or visual (Item I37, with 0.636). Other notable uses include their ability to synthesize information (Item I38, with 0.721), paraphrase or rephrase texts (Item I39, with 0.595), and improve the quality of productions already made (Item I40, with 0.544). These results indicate that students are using GAI in a comprehensive manner, not only to improve the efficiency and quality of their academic work, but also to personalize their learning process and make it more productive.

Knowledge and skills

| Indicator | Estimate | Standard error | Z-value | p-value | Standard estimate |
|-----------|----------|----------------|---------|---------|-------------------|
| I6 | 1.000 | 0.327 | 0.297 | 0.000 | 0.327 |
| I7 | 1.141 | 0.220 | 5.184 | 0.000 | 0.398 |
| I8 | 2.914 | 0.467 | 6.246 | 0.000 | 0.851 |
| I9 | 2.826 | 0.451 | 6.271 | 0.000 | 0.888 |
| I10 | 2.632 | 0.424 | 6.203 | 0.000 | 0.803 |

Table 18. Indicators related to the knowledge and skills dimension

The results in Table 18 reflect the importance of GAI knowledge and competencies among students. Item I8, which assesses familiarity with the concept of GAI, has the highest standard loading (0.851), suggesting that students have a good level of knowledge about this concept. Similarly, item I9, related to knowledge about how GAI works, presents a standard load of 0.888, standing out as the strongest indicator and showing that students possess a high understanding of the technology. In terms of competencies, item I10, which measures the ability to use GAI tools in academic tasks, has a standard loading of 0.803, indicating that students are quite capable of integrating GAI in their academic activities. However, item I7, which assesses general technological knowledge for using digital applications in studies, presents a lower standard loading (0.398), suggesting that, although students consider themselves competent in the use of technological tools, their general readiness to use digital applications could be improved. Finally, item I6, which addresses training in digital competencies, has the lowest standard loading (0.327), indicating that, although students have good knowledge and skills in GAI, prior training in digital competencies in general is an area that needs to be strengthened. Thus, these results indicate that, although students show a solid understanding and proficiency in the use of GAI, there is room for improvement in training in broader digital competencies.

Concerns about GAI

| Indicator | Estimate | Standard error | Z-value | p-value | Standard estimate |
|-----------|----------|----------------|---------|---------|-------------------|
| I16 | 1.000 | 0.537 | 0.470 | 0.000 | 0.537 |
| I41 | 1.909 | 0.196 | 9.736 | 0.000 | 0.791 |
| I42 | 1.822 | 0.184 | 9.918 | 0.000 | 0.852 |
| I43 | 1.649 | 0.176 | 9.394 | 0.000 | 0.718 |

Table 19. Indicators related to the dimension of concerns about GAI

The results in Table 19 evidence students’ concerns about GAI, with all items showing p-values of 0.000, indicating that the relationships are significant. Students express a number of significant concerns related to the safety, objectivity, and potential social effects of GAI, suggesting that these issues need to be addressed to foster greater confidence in its use. In particular, items I41 (concern about the security of personal data in GAI systems) and I42 (concern about biased information provided by GAI) exhibit the highest standard loadings, 0.791 and 0.852 respectively, suggesting that these concerns are highly prevalent among students. They reflect a fear of the risks associated with the privacy and objectivity of GAI-generated information. Item I43, which addresses concerns about the perpetuation of stereotypes and biases by GAI, also shows a significant loading of 0.718, indicating that students are aware of the social risks that GAI could bring in terms of reproducing existing biases. Finally, item I16, which assesses concern about the reliability of the information provided by GAI, has a standard loading of 0.537, the lowest of the dimension, but still reflects that a considerable proportion of students have doubts about the reliability of GAI-generated information.

Ethical regulation

| Indicator | Estimate | Standard error | Z-value | p-value | Standard estimate |
|-----------|----------|----------------|---------|---------|-------------------|
| I31 | 1.000 | 0.334 | 0.344 | 0.000 | 0.334 |
| I45 | 2.187 | 0.341 | 6.416 | 0.000 | 0.640 |
| I46 | 2.254 | 0.342 | 6.598 | 0.000 | 0.727 |
| I47 | 2.404 | 0.366 | 6.576 | 0.000 | 0.714 |

Table 20. Indicators related to the ethical regulation dimension

The results in Table 20 highlight concerns related to regulation and ethics in GAI, where all items present p-values of 0.000, implying that the relationships are significant. These results underscore the urgent need for clear ethical regulation around the use of GAI in university education, addressing both concerns about its social impact and the mechanisms needed to ensure its responsible and fair use. Item I45, which points to the need to regulate the use of GAI in university education, shows a standard loading of 0.640, reflecting a strong concern for establishing regulatory frameworks in this area. Item I46, which highlights the importance of implementing accountability and oversight mechanisms to prevent abuses, has a standard loading of 0.727, indicating that students consider it essential to have ethical control measures in the use of GAI. In addition, item I47, which addresses concerns about the impact of GAI on job loss, digital exclusion, or homogenization of thinking, has a standard loading of 0.714, suggesting that students are aware of the social and economic risks associated with the use of GAI. Finally, item I31, which assesses resistance to the use of GAI in the educational context, presents a standard loading of 0.334; although the lowest of the dimension, it still reflects that there is some resistance to adoption due to concerns about ethical implementation.

Trust and transparency

| Indicator | Estimate | Standard error | Z-value | p-value | Standard estimate |
|-----------|----------|----------------|---------|---------|-------------------|
| I19 | 1.000 | 0.640 | 0.547 | 0.000 | 0.640 |
| I32 | 0.814 | 0.145 | 5.627 | 0.000 | 0.490 |
| I44 | 0.759 | 0.136 | 5.599 | 0.000 | 0.486 |

Table 21. Indicators related to the trust and transparency dimension

The results in Table 21 show that, although some students have received training in GAI, there are still doubts about the preparation of teachers and the transparency of the tools, which may affect confidence in their use in the educational context. Item I19, which measures participation in GAI training activities, shows the highest standard loading (0.640), suggesting that a relevant number of students have had some formative exposure to this technology. However, item I32, which evaluates the perception of teacher training on GAI to improve the educational experience, presents a standard loading of 0.490, indicating that students perceive insufficient preparation by teachers in this area. Similarly, item I44, which addresses the transparency of GAI-based tools, presents the lowest loading (0.486), suggesting a moderate level of distrust regarding the clarity and openness of the processes in these technologies.

Results of the analysis of covariance

| Variable 1 | Variable 2 | Estimate | Standard error | Z-value | p-value | Std. lv | Std. all |
|------------|------------|----------|----------------|---------|---------|---------|----------|
| Learning benefits | Perception of importance | 0.445 | 0.048 | 9.252 | 0.000 | 0.783 | 0.783 |
| Learning benefits | Perceived risks | 0.071 | 0.029 | 2.428 | 0.015 | 0.129 | 0.129 |
| Learning benefits | Practical uses | 0.468 | 0.052 | 8.958 | 0.000 | 0.746 | 0.746 |
| Learning benefits | Knowledge and skills | 0.076 | 0.018 | 4.179 | 0.000 | 0.309 | 0.309 |
| Perception of importance | Perceived risks | 0.115 | 0.030 | 3.815 | 0.000 | 0.209 | 0.209 |
| Perception of importance | Practical uses | 0.385 | 0.045 | 8.624 | 0.000 | 0.609 | 0.609 |
| Perception of importance | Knowledge and skills | 0.089 | 0.020 | 4.532 | 0.000 | 0.360 | 0.360 |
| Perception of importance | Regulation and ethics | 0.077 | 0.018 | 4.168 | 0.000 | 0.303 | 0.303 |
| Perception of importance | Trust and transparency | 0.122 | 0.036 | 3.445 | 0.001 | 0.253 | 0.253 |
| Perceived risks | Practical uses | 0.075 | 0.033 | 2.304 | 0.021 | 0.123 | 0.123 |
| Perceived risks | Knowledge and skills | 0.035 | 0.014 | 2.547 | 0.011 | 0.148 | 0.148 |
| Perceived risks | Concerns about GAI | 0.182 | 0.030 | 6.121 | 0.000 | 0.467 | 0.467 |
| Perceived risks | Ethical regulation | 0.155 | 0.028 | 5.560 | 0.000 | 0.637 | 0.637 |
| Practical uses | Knowledge and skills | 0.109 | 0.023 | 4.692 | 0.000 | 0.400 | 0.400 |
| Practical uses | Ethical regulation | 0.066 | 0.019 | 3.509 | 0.000 | 0.236 | 0.236 |
| Practical uses | Trust and transparency | 0.172 | 0.041 | 4.157 | 0.000 | 0.322 | 0.322 |
| Knowledge and skills | Concerns about GAI | 0.021 | 0.010 | 2.070 | 0.038 | 0.119 | 0.119 |
| Knowledge and skills | Trust and transparency | 0.075 | 0.020 | 3.764 | 0.000 | 0.360 | 0.360 |
| Concerns about GAI | Ethical regulation | 0.105 | 0.021 | 5.028 | 0.000 | 0.588 | 0.588 |
| Concerns about GAI | Trust and transparency | 0.103 | 0.028 | 3.694 | 0.000 | 0.300 | 0.300 |
| Ethical regulation | Trust and transparency | 0.037 | 0.017 | 2.182 | 0.029 | 0.175 | 0.175 |

Table 22. Results of the analysis of covariance

The analysis of the covariances between the factors has allowed a more detailed understanding of the interdependent relationships between the latent variables. This is key to interpreting how the different dimensions of perceived GAI influence each other. By examining the observed covariances, it is possible to identify how some variables affect others and how these interactions may impact the results.

The analysis of covariance shows key relationships between various perceptions of GAI. First, the significant relationship between Learning Benefits and Perceived Importance of GAI (estimate = 0.445, Z = 9.252, p = 0.000) indicates that when perceived benefits are higher, so is the valuation of the technology. In contrast, the association between Perceived Risks and Practical Uses (estimate = 0.075, Z = 2.304, p = 0.021) is modest but significant, suggesting that perceived risks have only a limited bearing on the adoption of GAI in the classroom.

Furthermore, the relationship between Knowledge and Competencies and Concerns about GAI (estimate = 0.021, Z = 2.070, p = 0.038) is weak but positive, indicating that greater knowledge of the technology is accompanied by slightly greater, not lesser, concern about it. These covariances highlight a strong interdependence between factors such as perceived importance, learning benefits, ethical regulation, and knowledge and skills. Specifically, students who view GAI as important and useful tend to be more concerned about regulatory risks and ethics, while developing greater knowledge and competence about the technology. Trust in transparency is also closely linked to these variables, which emphasizes the importance of generating a clear and accessible educational environment on GAI-related topics.

Finally, the reliability of the factors was evaluated using Cronbach’s alpha. This analysis provided valuable information on the internal consistency of the scales, which is crucial to confirm the validity of the dimensions identified in the model.

Reliability of factors

| Factor | Cronbach’s alpha | McDonald’s omega |
|--------|------------------|------------------|
| Learning Benefits | 0.895 | 0.895 |
| Perception of Importance | 0.889 | 0.889 |
| Perceived Risks | 0.864 | 0.864 |
| Practical Uses | 0.849 | 0.849 |
| Knowledge and Competencies | 0.792 | 0.792 |
| Concerns about GAI | 0.817 | 0.817 |
| Ethical Regulation | 0.729 | 0.729 |
| Trust and Transparency | 0.509 | 0.509 |

Table 23. Reliability of the factors

The results obtained show that the “Learning Benefits” and “Perceived Importance” factors presented high levels of reliability, indicating excellent consistency in these dimensions. The “Perceived Risks” and “Practical Uses” factors also showed adequate reliability, supporting the consistency of these scales. The “Knowledge and Skills” factor showed moderate reliability, suggesting adequate internal consistency. On the other hand, the “Ethical Regulation” factor showed acceptable reliability, while “Trust and Transparency” showed low values, suggesting that this scale could benefit from adjustments in its formulation or measurement. Overall, these results reinforce the validity and robustness of the model, highlighting the internal consistency of the dimensions identified through confirmatory factor analysis.
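McDonald’s omega can be sketched from standardized loadings under the usual assumption of uncorrelated residuals: ω = (Σλ)² / ((Σλ)² + Σ(1 − λ²)). The snippet below is illustrative only: it applies this congeneric formula to the standardized loadings of the Trust and Transparency factor from Table 21, and the result will not exactly reproduce the 0.509 reported in Table 23, which was presumably computed from the raw response data. It does show why uniformly low loadings drive omega down:

```python
def mcdonald_omega(loadings):
    """Omega from standardized loadings, assuming uncorrelated residuals."""
    common = sum(loadings) ** 2            # variance explained by the factor
    residual = sum(1 - l ** 2 for l in loadings)  # item uniquenesses
    return common / (common + residual)

# Standardized loadings of the Trust and Transparency factor (Table 21).
print(round(mcdonald_omega([0.640, 0.490, 0.486]), 3))  # → 0.553
```

A factor with the same number of items but loadings near 0.9 would yield an omega above 0.9, which mirrors the contrast between Learning Benefits and Trust and Transparency in Table 23.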

Although some fit indices obtained in the confirmatory factor analysis, such as the CFI (0.857) and TLI (0.844), do not reach the recommended threshold of ≥ 0.90, the model still presents an adequate structure when other indicators are considered, such as the RMSEA (0.062) and the SRMR (0.078), which are within acceptable ranges (≤ 0.08). In addition, the factor loadings of the items are high and statistically significant, supporting the robust relationship between the latent factors and their indicators. These findings indicate that, despite some indices falling short of ideal thresholds, the structure of the model is consistent and adequate both conceptually and empirically.

4. Discussion

The validation process has shown that the EPGAI-ES instrument has sound psychometric characteristics. The high internal consistency obtained confirms that the questionnaire items adequately measure the same construct, which is fundamental to guarantee the reliability of the results. This finding coincides with different studies (Morán-Ortega, Ruiz-Tirado, Simental-López & Tirado-López, 2024; Sánchez-Prieto, Izquierdo-Álvarez, Moral-Marcos & Martínez-Abad, 2024) that highlight the importance of item homogeneity as an indicator that they all measure the same thing and that there is high internal consistency in educational assessment instruments (Jin, Yan, Echeverria, Gašević & Martinez-Maldonado, 2025).

The content validation carried out has followed rigorous methodologies similar to those described in the specialized literature (Perezchica-Vega, Sepúlveda-Rodríguez & Román-Méndez, 2024), where the relevance of expert judgment and factor analysis to confirm the underlying theoretical structure of the instrument is emphasized. This methodological process ensures that the questionnaire effectively measures what it intends to measure: the impact of GAI in the university context.

When comparing our instrument with others developed in the emerging field of GAI assessment in educational contexts, we observed methodological similarities with studies focused on the validation of TPACK questionnaires adapted to GAI (Saz-Pérez, Pizà-Mir & Carrió, 2024), as well as those addressing the perception and use of specific tools (Obenza, Salvahan, Rios, Solo, Alburo & Gabila, 2024). We also identified works designed for other educational levels and contexts (Alpizar-Garrido & Martínez-Ruiz, 2024). However, the EPGAI-ES Scale stands out because it is specifically designed to assess the impact, pedagogical implications and attitudes towards GAI at the university level.

In comparison with other available scales, the EPGAI-ES Scale offers a broader and more integrated framework, combining pedagogical, ethical, and practical dimensions of GAI use in higher education. Its rigorous psychometric validation and explicit focus on the student perspective constitute an added value that distinguishes it from similar instruments in the field.

It is worth noting the large number of studies based on systematic reviews on GAI (Valencia, Barragán, Ledesma & Moraima, 2024; Trujillo, Pozo & Suntaxi, 2025; among others), providing a broad and rigorous view of its impact, applications and challenges in different educational settings. These works not only allow us to identify trends and areas for improvement, but also provide a solid basis for the development of specific tools, such as the questionnaire presented in this study.

The validation results of this instrument have significant pedagogical implications for understanding how GAI is transforming higher education. The validated questionnaire makes it possible to systematically assess aspects that various studies identify as essential: the potential to improve the personalization of learning, the optimization of digital resources and the promotion of educational innovation.

In addition, the factorial structure obtained reflects the multidimensionality of the phenomenon of GAI in higher education, coinciding with the literature that points out the need to consider pedagogical as well as ethical and technological aspects in its implementation. This instrument provides a solid empirical basis for university institutions to evaluate and guide the integration of GAI in their educational practices.

Despite the positive results obtained in the validation, it is important to recognize certain limitations. The sample, although sizeable, could be expanded to include a greater diversity of academic disciplines and/or university institutional contexts, which would allow the results to be generalized with greater confidence. In addition, considering the rapid evolution of GAI technologies, the instrument will require periodic updates to maintain its relevance. Another limitation is the potential for social desirability bias in student responses, a common issue in self-administered instruments that address emerging technologies and their impact on learning in higher education. Future studies should complement the use of the questionnaire with qualitative methodologies to triangulate results and obtain a deeper and more holistic understanding.

5. Future Lines and Orientations

The validation results of the EPGAI-ES instrument open several lines for future research. It would be valuable to carry out longitudinal studies to evaluate how the impact of GAI on educational processes evolves over time. Similarly, another future direction is the adaptation and validation of the instrument for other educational contexts, such as compulsory secondary education or intermediate and higher vocational training.

Future research could also explore correlations between the results of the instrument and variables such as academic performance, the development of digital competencies, attention to specific educational support needs, or preparation for the labor market and improved employability. In this sense, it would be worthwhile to complement this quantitative instrument with qualitative and/or narrative approaches that allow for a richer and more inclusive analysis of students’ and teachers’ experiences with GAI.

Likewise, future research should explore the cross-cultural and disciplinary applicability of the EPGAI-ES Scale. Since higher education systems vary in their level of digital maturity and in the integration of artificial intelligence, testing the scale in different cultural and institutional contexts will strengthen its external validity. The inclusion of the full questionnaire in this article provides researchers and practitioners with a practical tool that can be adapted and validated in diverse educational settings.

6. Conclusion

From the data analysis performed, it can be concluded that the questionnaire provides a reliable and valid measure. Firstly, it shows high internal consistency, indicating that the items are coherent with each other and consistently measure the same construct. Secondly, it presents adequate content validation, ensuring that the questions cover the various dimensions of GAI use, as well as a construct validation confirming that the organization of the questionnaire is aligned with a well-defined theoretical structure supported by the literature.
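
The internal consistency mentioned here is conventionally summarized with Cronbach's alpha. As a minimal illustrative sketch of the statistic (using toy data, not the study's actual responses or software), alpha can be computed from a respondents-by-items score matrix:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy example: two perfectly parallel items give alpha = 1.0
scores = np.array([[1, 1], [2, 2], [3, 3], [4, 4], [5, 5]])
print(round(cronbach_alpha(scores), 2))  # 1.0
```

Values above .70 are conventionally read as acceptable internal consistency, with higher thresholds often required for applied decision making.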

The factor analysis carried out reveals that the profiles described in previous studies are not exactly reflected with the same denominations. However, all the activities considered as study variables are present and relevant to explain the perceptions, attitudes and practices of university students in relation to GAI. The functional organization of the theoretical model shows some overlaps and difficulties in clearly dissociating some functions. Based on this analysis, a reorganization of the activities into eight components or factors is proposed, which are described in the results and provide an empirical representation of the key dimensions on which good practices in the use of GAI are focused.
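
One common heuristic for deciding how many components to retain in an exploratory factor analysis is the Kaiser criterion (eigenvalues of the item correlation matrix greater than one). The sketch below illustrates the rule on synthetic data with two latent factors; it is not the study's actual extraction procedure:

```python
import numpy as np

def kaiser_retained(data: np.ndarray) -> int:
    """Number of components retained by the Kaiser criterion:
    eigenvalues of the item correlation matrix greater than 1."""
    corr = np.corrcoef(data, rowvar=False)
    return int((np.linalg.eigvalsh(corr) > 1.0).sum())

# Synthetic data: two independent latent factors, three items each
rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=(500, 1)), rng.normal(size=(500, 1))
items = np.hstack([f1 + 0.3 * rng.normal(size=(500, 3)),
                   f2 + 0.3 * rng.normal(size=(500, 3))])
print(kaiser_retained(items))  # 2
```

In practice this heuristic is complemented with scree plots, parallel analysis and, above all, the theoretical interpretability of the resulting factors, as was done in this study.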

Although some fit indices do not meet ideal thresholds, the confirmatory factor analysis supports the validity of the model: the remaining indicators fall within acceptable ranges, and the factor loadings are significant, confirming the consistency and validity of the model.
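
For reference, two of the fit indices commonly reported in confirmatory factor analysis can be derived from the model and null-model chi-square statistics. The chi-square values below are hypothetical, chosen only to illustrate the formulas (the sample size of 471 matches the validation sample):

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root Mean Square Error of Approximation; values at or below
    roughly .06-.08 are conventionally considered acceptable."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2: float, df: int, chi2_null: float, df_null: int) -> float:
    """Comparative Fit Index against the independence (null) model;
    values of roughly .90-.95 or above are considered acceptable."""
    d_model = max(chi2 - df, 0.0)
    d_null = max(chi2_null - df_null, d_model, 0.0)
    return 1.0 - d_model / d_null if d_null > 0 else 1.0

# Hypothetical chi-square values for a model with 200 degrees of freedom
print(round(rmsea(250.0, 200, 471), 3))        # 0.023
print(round(cfi(250.0, 200, 3000.0, 231), 3))  # 0.982
```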

Therefore, we can affirm that the questionnaire accurately reflects the study variables, establishing itself as an adequate tool for analyzing the impact and practices related to GAI in higher education.

The EPGAI-ES Scale offers higher education institutions a scientifically validated tool to assess how GAI is transforming educational processes. Its application can contribute significantly to informed decision-making about the implementation of these technologies in university contexts, making it possible to maximize their benefits while adequately addressing the challenges they pose, especially those related to ethics and the honest use of the data generated.

The availability of these tools can promote good practices in the use of GAI and contribute to improving educational quality. Their effective implementation can facilitate a more integrated and conscious approach to the use of advanced technologies in education, benefiting both students and teachers.

In a context of rapid technological evolution, tools such as the one validated in this study are essential for understanding and effectively guiding the integration of GAI in higher education. The future challenge is to keep the instrument updated to reflect the constant evolution of these technologies and their impact on educational processes.

The EPGAI-ES Scale is not only a measurement instrument but also a tool to foster critical reflection on the responsible integration of emerging technologies in higher education. In a context of accelerated technological transformation, instruments like this are essential to ensure that educational innovation is grounded in empirical evidence and pedagogically informed considerations.

The availability of this validated tool can catalyze the development of more conscious and effective educational practices, contributing to the ethical and pedagogically grounded use of the transformative potential of Generative Artificial Intelligence in twenty-first-century higher education. Finally, this study helps to lay the foundations of an emerging field of research of great relevance for the future of higher education. The EPGAI-ES Scale not only makes it possible to evaluate the current state of the integration of GAI in university contexts, but can also guide its evolution towards pedagogically grounded and ethically responsible practices.

Declaration of Conflicting Interests

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The authors received no financial support for the research, authorship, and/or publication of this article.

References

Alpizar-Garrido, L.O., & Martínez-Ruiz, H. (2024). Perspectiva de estudiantes de nivel medio superior respecto al uso de la inteligencia artificial generativa en su aprendizaje. RIDE. Revista Iberoamericana para la Investigación y el Desarrollo Educativo, 14(28). https://doi.org/10.23913/ride.v14i28.1830

Alvira, F. (2011). La encuesta: una perspectiva general metodológica. Centro de Investigaciones Sociológicas.

Andión-Gamboa, M., & Cárdenas-Presa, D. (2023). Convivir con inteligencias artificiales en la educación superior. Perfiles Educativos, 45, 56-69. https://doi.org/10.22201/iisue.24486167e.2023.Especial.61691

Aparicio-Gómez, W.O. (2023). La Inteligencia Artificial y su Incidencia en la Educación: Transformando el Aprendizaje para el Siglo XXI. Revista Internacional de Pedagogía e Innovación Educativa, 3(2), 217-229. https://doi.org/10.51660/ripie.v3i2.133

Ayuso-del-Puerto, D., & Gutiérrez-Esteban, P. (2022). La Inteligencia Artificial como Recurso Educativo durante la Formación Inicial del Profesorado. RIED-Revista Iberoamericana de Educación a Distancia, 25(2), 347-362. https://doi.org/10.5944/ried.25.2.32332

Azofra, M.J. (2000). Cuestionarios. Centro de Investigaciones Sociológicas.

Bethencourt-Aguilar, A., Castellanos-Nieves, D., Sosa-Alonso, J., & Area-Moreira, M. (2023). Use of Generative Adversarial Networks (GANs) in Educational Technology Research. Journal of New Approaches in Educational Research, 12(1), 153-170. https://doi.org/10.7821/naer.2023.1.1231

Buendía, L., Colás, M.P., & Hernández, F. (1994). Métodos de investigación en psicopedagogía. McGraw-Hill.

Byrne, B.M. (2016). Structural Equation Modeling with AMOS: Basic Concepts, Applications, and Programming (3rd ed.). Routledge.

Cabero, J., & Llorente, M.C. (2013). La aplicación del juicio de experto como técnica de evaluación de las tecnologías de la información y la comunicación (TIC). Revista Eduweb, 7(2), 11–22. Available at: https://revistaeduweb.org/index.php/eduweb/article/view/206

Cascales-Martínez, A., López-Ros, S.P., & Gomariz-Vicente, M.A. (2024). Entre aulas y algoritmos: validación de un cuestionario sobre la perspectiva docente ante la Inteligencia Artificial Generativa. In Satorre-Cuerda,R. (Ed.), La docencia universitaria en tiempos de IA (15-27). Octaedro. Available at: https://rua.ua.es/dspace/bitstream/10045/149111/1/Principios%20para%20la%20secuencia.pdf

Colorado, J.R., Romero, M., Salazar, M., Cabrera, G., & Castillo, V.R. (2024). Análisis comparativo de los coeficientes alfa de Cronbach, Omega de McDonald y alfa ordinal en la validación de cuestionarios. Revista Científica y Académica, 4(4), 2738-2755. Available at: https://estudiosyperspectivas.org/index.php/EstudiosyPerspectivas/article/view/836/1343

Cordón, O. (2023). Inteligencia Artificial en Educación Superior: Oportunidades y Riesgos. RiiTE Revista Interuniversitaria de Investigación en Tecnología Educativa, 15, 16-27. https://doi.org/10.6018/riite.591581

Cornejo-Plaza, I., & Cippitani, R. (2023). Consideraciones éticas y jurídicas de la IA en Educación Superior: Desafíos y Perspectivas. Revista de Educación y Derecho, 28. https://doi.org/10.1344/REYD2023.28.43935

Cotrina-Aliaga, J.C., Vera-Flores, M.Á., Ortiz-Cotrina, W.C., & Sosa-Celi, P. (2021). Uso de la Inteligencia Artificial (IA) como estrategia en la educación superior. Revista Iberoamericana de la Educación, 1. https://doi.org/10.31876/ie.vi.81

Creswell, J.W., & Creswell, D.J. (2018). Research Design. Qualitative, Quantitative, and Mixed Methods Approaches. SAGE.

Dempere, J., Modugu, K., Hesham, A., & Ramasamy, L.K. (2023). The impact of ChatGPT on higher education. Frontiers in Education, 8, 1-13. https://doi.org/10.3389/feduc.2023.1206936

Denecke, K., Glauser, R., & Reichenpfader, D. (2023). Assessing the potential and risks of AI-based tools in higher education: Results from an eSurvey and SWOT analysis. Trends in Higher Education, 2(4), 667-688. https://doi.org/10.3390/higheredu2040039

Dúo-Terrón, P., Moreno-Guerrero, A.J., López-Belmonte, J., & Marín-Marín, J.A. (2023). Inteligencia Artificial y Machine Learning como recurso educativo desde la perspectiva de docentes en distintas etapas educativas no universitarias. RiiTE Revista Interuniversitaria de Investigación en Tecnología Educativa, 15, 58-78. https://doi.org/10.6018/riite.579611

Escobar, J., & Cuervo, Á. (2008). Validez de contenido y juicio de expertos: una aproximación a su utilización. Avances en Medición, 6(1), 27-36. Available at: https://gc.scalahed.com/recursos/files/r161r/w25645w/Juicio_de_expertos_u4.pdf

Escobedo, M.T., Hernández, J.A., Estebané, V., & Martínez, G. (2016). Modelos de ecuaciones estructurales: Características, fases, construcción, aplicación y resultados. Ciencia y Trabajo, 18(55), 16-22. https://doi.org/10.4067/S0718-24492016000100004

Flores-Vivar, J.M., & García-Peñalvo, F.J. (2023). Reflexiones sobre la ética, potencialidades y retos de la Inteligencia Artificial en el marco de la Educación de Calidad (ODS4). Comunicar, 74, 37-47. https://doi.org/10.3916/C74-2023-03

Gallent-Torres, C., Zapata-González, A., & Ortego-Hernando, J.L. (2023). El impacto de la inteligencia artificial generativa en educación superior: una mirada desde la ética y la integridad académica. Relieve, 29(2), M5. https://doi.org/10.30827/relieve.v29i2.29134

García-Martínez, I., Fernández-Batanero, J.M., Fernández-Cerero, J., & León, S.P. (2023). Analysing the Impact of Artificial Intelligence and Computational Sciences on Student Performance: Systematic Review and Meta-analysis. Journal of New Approaches in Educational Research, 12(1), 171-197. https://doi.org/10.7821/naer.2023.1.1240

García-Peñalvo, F.J., Llorens-Largo, F., & Vidal, J. (2024). La nueva realidad de la educación ante los avances de la inteligencia artificial generativa. RIED-Revista Iberoamericana de Educación a Distancia, 27(1), 9‑39. https://doi.org/10.5944/ried.27.1.37716

Hair, J.F., Anderson, R.E., Tatham, R.L., & Black, W.C. (1999). Análisis multivariante. Pearson Prentice Hall.

Jiménez-Martínez, K.A., Gamboa-Rodríguez, P.G., Vaughan-Bernal, M., & Moreno-Toledo, R.A. (2024). Validación de un cuestionario diagnóstico sobre la integración de la inteligencia artificial generativa en la práctica docente. Revista Digital de Tecnologías Informáticas y Sistemas, 8(1), 217-225. https://doi.org/10.61530/redtis.vol8.n1.2024

Jiménez-Perona, M.I. (n.d.). Impact evaluation, pedagogical implications and attitudes derived from the use of generative artificial intelligence applications in teaching-learning processes in higher education [Unpublished doctoral dissertation]. Universidad de Málaga.

Jin, Y., Yan, L., Echeverria, V., Gašević, D., & Martinez-Maldonado, R. (2025). Generative AI in higher education: A global perspective of institutional adoption policies and guidelines. Computers and Education: Artificial Intelligence, 8, 100348. https://doi.org/10.1016/j.caeai.2024.100348

Lievens, J. (2023). Artificial Intelligence (AI) in higher education: tool or trickery? Education and New Developments, 2, 645-647.

López-Roldán, P., & Fachelli, S. (2015). Metodología de la investigación social cuantitativa. Universitat Autònoma de Barcelona.

Moral-de-la-Rubia, J. (2016). Análisis factorial y su aplicación al desarrollo de escalas. In Landero, R., & González, M.T. (Eds.), Estadística con SPSS y metodología de la investigación (387-443). Trillas.

Morán-Ortega, S.A., Ruiz-Tirado, S.G., Simental-López, L.M., & Tirado-López, A.B. (2024). Barreras de la Inteligencia Artificial generativa en estudiantes de educación superior. Percepción docente. Revista de Investigación en Tecnologías de la Información, 12(25), 26-37. https://doi.org/10.36825/RITI.12.25.003

Narváez, R., & Medina-Gual, L. (2024). Validación de un cuestionario para explorar el uso de la IA en estudiantes de educación superior. Revista Paraguaya de Educación a Distancia, FACEN-UNA, 5(4), 29-40. https://doi.org/10.56152/reped2024-dossierIA2-art4

Obenza, B.N., Salvahan, A., Rios, A.N., Solo, A., Alburo, R.A., & Gabila, R.J. (2024). University students’ perception and use of ChatGPT: Generative artificial intelligence (AI) in higher education. International Journal of Human Computing Studies, 5(12), 5-18. Available at: https://ssrn.com/abstract=4724968

Perezchica-Vega, J.E., Sepúlveda-Rodríguez, J.A., & Román-Méndez, A.D. (2024). Inteligencia artificial generativa en la educación superior: usos y opiniones de los profesores. European Public & Social Innovation Review, 9, 1-20. https://doi.org/10.31637/epsir-2024-593

Prajapati, J.B., Kumar, A., Singh, S., Prajapati, B., Thakar, Y., Tambe, P.T. et al. (2024). Artificial intelligence-assisted generative pretrained transformers for applications of ChatGPT in higher education among graduates. SN Social Sciences, 4, 19. https://doi.org/10.1007/s43545-023-00818-0

Rodríguez-Armero, R., Chorot, P., & Sandín, B. (2023). Construcción y validación preliminar de la Escala de Respuestas de Asco [Construction and preliminary validation of the Disgust Response Scale]. Revista de Psicopatología y Psicología Clínica, 28(2), 107-120. https://doi.org/10.5944/rppc.34553

Romero-Rodríguez, J., Ramírez-Montoya, M., Buenestado-Fernández, M., & Lara-Lara, F. (2023). Use of ChatGPT at University as a Tool for Complex Thinking: Students’ Perceived Usefulness. Journal of New Approaches in Educational Research, 12(2), 323-339. https://doi.org/10.7821/naer.2023.7.1458

Sanabria-Navarro, J.R., Silveira-Pérez, Y., Pérez-Bravo, D.D., & Cortina-Núñez, M.J. (2023). Incidencias de la inteligencia artificial en la educación contemporánea. Comunicar, 77(XXXI), 97-107. https://doi.org/10.3916/C77-2023-08

Sánchez-Mendiola, M., & Carbajal-Degante, E. (2023). La inteligencia artificial generativa y la educación universitaria. Perfiles Educativos, 45, 70-86. https://doi.org/10.22201/iisue.24486167e.2023.Especial.61692

Sánchez-Prieto, J.C., Izquierdo-Álvarez, V., Moral-Marcos, M.T.D., & Martínez-Abad, F. (2024). Inteligencia artificial generativa para autoaprendizaje en educación superior: Diseño y validación de una máquina de ejemplos. RIED-Revista Iberoamericana de Educación a Distancia, 28(1), 59-81. https://doi.org/10.5944/ried.28.1.41548

Sánchez-Vera, M.M. (2023). La inteligencia artificial como recurso docente: usos y posibilidades para el profesorado. EDUCAR, 60(1), 33-47. https://doi.org/10.5565/rev/educar.1810

Saz-Pérez, F., & Pizà-Mir, B. (2024). Autopercepción del docente español sobre la posible integración de herramientas de inteligencia artificial generativa en el aula mediante un cuestionario TPack. In Díez, M., Martínez, S., Bogdan, R., Dies, M.E., Ramírez, A., Jiménez, R. et al. (Coords.), Sobre la educación científica y el cuidado de la casa común: necesidades y perspectivas (30‑41). Dykinson.

Saz-Pérez, F., Pizà-Mir, B., & Carrió, A.L. (2024). Validación y estructura factorial de un cuestionario TPACK en el contexto de Inteligencia Artificial Generativa (IAG). Hachetetepé. Revista Científica de Educación y Comunicación, 28, 1-14. https://doi.org/10.25267/Hachetetepe.2024.i28.1101

Shaengchart, Y. (2023). A Conceptual Review of TAM and ChatGPT Usage Intentions Among Higher Education Students. Advance Knowledge for Executives, 2(3), 1-7. Available at: https://ssrn.com/abstract=4581231

Tójar, J.C. (2001). Planificar la investigación educativa: una propuesta integrada. Fundec.

Trujillo, F., Pozo, M., & Suntaxi, G. (2025). Artificial intelligence in education: A systematic literature review of machine learning approaches in student career prediction. Journal of Technology and Science Education, 15(1), 162-185. https://doi.org/10.3926/jotse.3124

Turra, Y., Villagra, C.P., Mellado, M.E., & Aravena, O.A. (2022). Diseño y validación de una escala de percepción de los estudiantes sobre la cultura de evaluación como aprendizaje. Relieve, 28(2). https://doi.org/10.30827/relieve.v28i2.25195

UNESCO (2019). The Sustainable Development Goals Report. United Nations Educational, Scientific and Cultural Organization. Available at: https://bit.ly/34nbq60

UNESCO (2022). Recommendation on the Ethics of Artificial Intelligence. United Nations Educational, Scientific and Cultural Organization. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000381137_spa

Valencia, G.E., Barragán, R.D.L., Ledesma, S.C., & Moraima, P. (2024). Impacto de la inteligencia artificial generativa en la creatividad de los estudiantes universitarios. Technology Rain Journal, 3(1), e33. https://doi.org/10.55204/trj.v3i1.e33

Vera, F. (2023). Integración de la Inteligencia Artificial en la Educación superior: Desafíos y oportunidades. Revista Electrónica Transformar, 4(1), 17-34. Available at: https://www.revistatransformar.cl/index.php/transformar/article/view/84

Wang, T., & Cheng, E.C.K. (2021). An investigation of barriers to Hong Kong K-12 schools incorporating Artificial Intelligence in education. Computers and Education: Artificial Intelligence, 2, 100031. https://doi.org/10.1016/j.caeai.2021.100031

Annex

Scale for the Analysis of Perceptions of GAI in Higher Education (EPGAI-ES)

This questionnaire has been carried out with the aim of finding out the first-person perception of Higher Education students about the use of applications based on Generative Artificial Intelligence (GAI).

GAI is a branch of Artificial Intelligence that is based on language models capable of generating text, images, music, voices, etc.

Participation in this questionnaire is voluntary, and you are asked to take part altruistically, with no personal gain involved.

We would appreciate it if you would be as honest as possible.

In this sense, the information collected here, including personal data, is safeguarded by current legislation and by the ethical commitment of the study team: it will be used only for informational purposes by the institutions that manage the centers and for scientific purposes by the authors of this questionnaire.

Thank you for your collaboration.

Individual Variables

1. Age

2. Gender

3. University where you study

4. What are you studying?

5. What year are you in?

6. I have received some kind of training aimed at improving my digital skills.

No training

 

A lot of training

7. My technological skills are adequate for me to deal effectively with digital applications in my studies.

Very low

 

Very high

Perceptions of Knowledge and Attitudes About Generative Artificial Intelligence

8. I am familiar with the concept of GAI.

Nothing

 

A lot

9. I have a high level of knowledge about how GAI works.

Strongly disagree

 

Strongly agree

10. I am competent to use GAI tools in my academic tasks.

Strongly disagree

 

Strongly agree

11. GAI is a technology that can improve the quality of my own learning.

Strongly disagree

 

Strongly agree

12. Knowing how to use GAI will allow me to have more job opportunities.

Strongly disagree

 

Strongly agree

13. Knowing how to use GAI is important for my professional development.

Strongly disagree

 

Strongly agree

14. GAI is a technology that can change my learning experience.

Strongly disagree

 

Strongly agree

15. GAI currently has an influence on university education.

Strongly disagree

 

Strongly agree

16. I am concerned that the information provided by GAI is not reliable.

Strongly disagree

 

Strongly agree

17. I am interested in learning more about GAI.

Strongly disagree

 

Strongly agree

18. The pedagogical implications of the use of GAI should be addressed in the classroom.

Strongly disagree

 

Strongly agree

19. I participate or have participated in training activities on GAI.

Strongly disagree

 

Strongly agree

Benefits and Opportunities of Generative Artificial Intelligence

20. GAI can help me to develop my creativity.

Strongly disagree

 

Strongly agree

21. The use of GAI allows me to perform academic tasks in an innovative way.

Strongly disagree

 

Strongly agree

22. GAI can help me to be agile and efficient in carrying out academic tasks.

Strongly disagree

 

Strongly agree

23. GAI can help personalise my learning effectively.

Strongly disagree

 

Strongly agree

24. GAI can facilitate the rapid and efficient design of educational materials.

Strongly disagree

 

Strongly agree

25. GAI can help me access relevant information in my personal learning construction process.

Strongly disagree

 

Strongly agree

Challenges of Generative Artificial Intelligence

26. GAI generates dependence on technology.

Strongly disagree

 

Strongly agree

27. Excessive use of GAI can affect my ability to solve problems and tasks autonomously.

Strongly disagree

 

Strongly agree

28. GAI can influence the degree of interaction in the training process.

Strongly disagree

 

Strongly agree

29. GAI can lead to superficial learning.

Strongly disagree

 

Strongly agree

30. GAI can interfere with the development of critical thinking and creativity.

Strongly disagree

 

Strongly agree

31. There is resistance to the use of GAI in the educational context.

Strongly disagree

 

Strongly agree

32. The university teaching staff is trained or qualified in GAI to support students’ educational experience in the university context.

Strongly disagree

 

Strongly agree

Use of Generative Artificial Intelligence

33. I tend to use GAI tools to carry out my activities and tasks at the University.

Strongly disagree

 

Strongly agree

34. GAI can provide information and resources tailored to my interests.

Strongly disagree

 

Strongly agree

35. GAI provides immediate feedback to improve my learning.

Strongly disagree

 

Strongly agree

36. GAI can help me to optimize time and allow me to focus on more meaningful aspects of learning.

Strongly disagree

 

Strongly agree

37. One of the uses I make of GAI has to do with the production of content (textual, infographic, image, etc.).

Strongly disagree

 

Strongly agree

38. One of the uses I make of GAI has to do with synthesising information.

Strongly disagree

 

Strongly agree

39. One of the uses I make of GAI has to do with paraphrasing or rephrasing the wording of texts.

Strongly disagree

 

Strongly agree

40. One of the uses I make of GAI has to do with improving the quality of a production (text, image, etc.) that has already been produced.

Strongly disagree

 

Strongly agree

Ethical Issues Related to Generative Artificial Intelligence

41. I am concerned about the security of my personal data in GAI systems.

Strongly disagree

 

Strongly agree

42. I am concerned about the biased information that may be provided by GAI.

Strongly disagree

 

Strongly agree

43. I am concerned that GAI may perpetuate existing stereotypes and prejudices in society.

Strongly disagree

 

Strongly agree

44. GAI-based tools are transparent in their processes.

Strongly disagree

 

Strongly agree

45. The use of GAI in university education should be regulated.

Strongly disagree

 

Strongly agree

46. Accountability and oversight mechanisms need to be put in place to prevent abuses and ensure ethical practices in the use of GAI.

Strongly disagree

 

Strongly agree

47. I am concerned about the impact of GAI on job losses, digital exclusion or homogenisation of thinking.

Strongly disagree

 

Strongly agree

Type of Generative Artificial Intelligence Applications

48. Indicate the GAI applications you use with text-related aspects.

  • I do not usually use any. 

  • Chat GPT. 

  • Copilot. 

  • Quillbot. 

  • Claude. 

  • Gemini. 

  • Another 

49. Indicate the GAI applications you use with image-related aspects.

  • I do not usually use any. 

  • Grapht GPT. 

  • Dall-E 2. 

  • Visual Chat GPT. 

  • Picsart. 

  • YouCam AI Pro. 

  • Another 

50. Indicate the GAI applications that you use with audio and sound related aspects.

  • I do not usually use any. 

  • Sonix. 

  • NVIDIA Jarvis. 

  • Adobe Premiere Pro. 

  • Lumen5. 

  • Auphonic. 

  • Another 

51. Indicate the GAI applications you use with video-related aspects.

  • I do not usually use any. 

  • Pictory. 

  • Synthesia. 

  • HeyGen. 

  • Colossyan. 

  • Fliki. 

  • Another 

52. Indicate the GAI applications you use with aspects related to dataset analysis (text, data, audio, etc.).

  • I do not usually use any. 

  • Julius AI. 

  • Microsoft Power BI. 

  • Polymer. 

  • Akkio. 

  • MonkeyLearn. 

  • Another 

53. Indicate the GAI applications you use with language translation aspects.

  • I do not usually use any. 

  • ChatGPT. 

  • DeepL. 

  • Bing Translator. 

  • Google Translate. 

  • TextCortex. 

  • Another 

54. If you use other GAI applications for other uses, please indicate which ones:

 




Creative Commons License 

This work is licensed under a Creative Commons Attribution 4.0 International License

Journal of Technology and Science Education, 2011-2026

Online ISSN: 2013-6374; Print ISSN: 2014-5349; DL: B-2000-2012

Publisher: OmniaScience