Psychometric Validation of the ChatGPT Literacy Scale: CFA and EFA Analyses in the Indonesian Language Context

Abstract

Artificial Intelligence (AI) has become increasingly influential in education, with ChatGPT emerging as a prominent application of generative AI. This study aimed to adapt and validate the ChatGPT Literacy Scale developed by Seyoung Lee and Gain Park (2024) for the Indonesian context. A total of 370 Indonesian participants aged 14–60 years from diverse backgrounds completed the instrument, which measures five conceptual dimensions of ChatGPT literacy: Technical Proficiency, Critical Evaluation, Communication Proficiency, Creative Application, and Ethical Competence. Item validity analysis showed that all items had item-total correlations of at least 0.58, well above the minimum threshold of 0.30. Confirmatory Factor Analysis (CFA) of the five-dimensional theoretical model demonstrated an acceptable fit (RMSEA = 0.065, CFI = 0.910, NNFI = 0.899). To explore an alternative structure better aligned with the empirical data, Exploratory Factor Analysis (EFA) was conducted and revealed a six-factor model that more accurately captured the multidimensionality of ChatGPT literacy. This six-factor structure was confirmed through a subsequent CFA, which yielded superior fit indices (RMSEA = 0.063, CFI = 0.917, NNFI = 0.908). Model comparison indicated that the six-factor solution provided the most comprehensive representation of the construct, despite a marginal difference in AIC values relative to the five-factor model. Reliability analysis of the six-factor model showed high internal consistency, with Cronbach’s alpha ranging from 0.833 to 0.934 across dimensions and an overall Omega coefficient of 0.971. These results suggest that the scale possesses strong construct validity and high reliability, making it a psychometrically sound instrument for measuring ChatGPT literacy in both academic and practical contexts.
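
For illustration only, the sketch below shows how fit indices of the kind reported above (CFI, NNFI/TLI, RMSEA, AIC) could be obtained for a competing factor structure using the Python package semopy. The factor labels, item names (tp1, ce1, ...), and the data file are hypothetical placeholders, not the actual scale items or the authors' analysis pipeline; the same pattern would be repeated for the five- and six-factor specifications and the resulting indices compared.

```python
# Minimal CFA sketch with semopy; item names and the CSV path are
# illustrative placeholders, not the actual ChatGPT Literacy Scale data.
import pandas as pd
from semopy import Model, calc_stats

# Hypothetical item-level dataset, one column per questionnaire item.
data = pd.read_csv("chatgpt_literacy_items.csv")

# Hypothetical measurement model in lavaan-style syntax:
# each latent factor is defined by its observed indicator items.
model_desc = """
TechnicalProficiency     =~ tp1 + tp2 + tp3
CriticalEvaluation       =~ ce1 + ce2 + ce3
CommunicationProficiency =~ cp1 + cp2 + cp3
CreativeApplication      =~ ca1 + ca2 + ca3
EthicalCompetence        =~ ec1 + ec2 + ec3
"""

model = Model(model_desc)
model.fit(data)

# calc_stats returns fit indices such as CFI, TLI (NNFI), RMSEA, AIC and BIC,
# which can be compared across competing factor structures.
stats = calc_stats(model)
print(stats.T)
```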