AI literacy to the fore

Brisbane, Australia. Photo by Mark Pegrum, 2025. May be reused under CC BY 4.0 licence.

XXIIIrd International CALL Conference
Brisbane, Australia
3-5 October, 2025

As expected, the International CALL Conference had a strong focus on integrating generative AI effectively into education, entailing the need for both educators and students to develop their AI literacy. Given the conference theme of Inclusive CALL, many papers also discussed the ambivalent role played by genAI in respect of diversity, equity and inclusion – and how best to use it to serve the purpose of inclusion.

In his keynote presentation on the first day, Ethical and emotional responses to AI integration in language education: Insights from a qualitative case study and a Q methodology investigation, Lawrence Jun Zhang referred to Son’s (2023) comment that AI in language education is not a singular tool but a broad spectrum of technologies, and suggested that teachers need to take on different roles with respect to AI. He reported on a PRISMA review of teachers’ use of AI. Under AI literacy, researchers identified the following themes: understanding AI; AI application in pedagogical practices; prompting and interaction with LLMs; AI ethics and concerns; challenges and barriers; and professional development and training needs. He reported on a second study looking at teachers’ emotional reactions to AI. The themes to emerge were: positive emotions facilitating adaptation; AI-induced negative emotions like anxiety, stress and cognitive overload; emotional regulation and coping strategies, including adaptive and avoidance coping; ambivalence and mixed emotional responses, highlighting the coexistence of optimism and apprehension; and institutional and pedagogical implications, emphasising the influence of institutional structures.

In my own keynote, Where do humans fit in? CALL, AI and human agency, which opened the second day, I argued that the balance between human agency and AI agency is one of the most important issues in contemporary education, and is closely linked to questions of diversity, equity and inclusion. I began by taking a step back to observe the big picture of our contemporary era, viewing it through three intersecting frames – the postdigital, the posthuman and the sociomaterial – and asking how these might help to inform our understandings of human and AI agency. I went on to consider humanity’s two major encounters to date with AI, their impact on the so-called personalisation of both our networks and our learning, and the implications of such personalisation in terms of personal agency. I argued that personalisation may not in fact be a suitable term or concept to describe or capture these developments, with individualisation or customisation being preferable alternatives in many cases. My main focus was on three possible strategies that we as language educators can promote with, and for, our students: developing AI literacy to build and bolster human agency; promoting co-intelligence to balance human agency and AI agency; and practising design justice to develop human agency for the marginalised. Adopting such strategies allows us to capitalise on the benefits of AI while tempering its drawbacks, as we work to build human agency and promote diversity, equity and inclusion.

In their pre-conference workshop, Beyond access: Designing inclusive learning with AI and VR/AR tools for language learners, Wen Yun and Yuju Lan covered both AI and XR tools and their role in inclusive language learning. Yuju Lan explained that a pedagogical AI agent should be a proxy of a human teacher, who remains responsible for the overall design. She described teacher education classes where teachers develop pedagogical agents for their own contexts, including the possibility of agents embodied as robots. She also showed examples of AI agents embedded in VR environments. Ultimately, it is important that students can transfer the skills they are practising with AI and in VR environments to real-world settings. Wen Yun described a vocabulary learning tool called ARCHe for Singaporean Primary 2 students. In the newer ARCHe 2.0, students write a text to describe a picture, with the AI then generating a pictorial version of their text as a comparison. Students can further modify their description until the AI-generated image better matches their writing. AI literacy is a byproduct of this approach. She went on to describe a goal-driven conversational agent designed to improve the writing of Primary 4 students; this focuses on shared agency between the AI and the students, with the AI supporting students’ goal-oriented activities. There is a study currently underway to determine the efficacy of this approach.

In her paper, Bridging the AI divide: Exploring AI literacy among migrant and refugee adult learners in Australia, Katrina Tour stressed the growing importance of AI for everyday learning and life, hence the need to develop language students’ AI literacy. She went on to introduce the Sociocultural Model of AI Literacy by Tour, Pegrum & McDonald (2025). Reporting on a national survey of learners in a language education programme, she noted that the majority, as of late 2023–early 2024, had not even heard of, let alone used, generative AI: this is a group which is very much in need of learning about AI and developing AI literacy. Among those who used genAI, some used it for learning (e.g., supporting homework using multiple languages in a translanguaging paradigm), everyday tasks (e.g., finding recipes), leisure pursuits (e.g., scripting vlog episodes), and professional activities (e.g., creating marketing materials). However, capabilities were fragmented, with a lack of understanding of how genAI works and of the need to apply a critical lens to genAI output. She concluded that there is a growing AI divide: while some learners are experimenting with basic operational and contextual uses of AI, advanced critical and creative skills need targeted support. AI literacy interventions and AI literacy programmes are thus needed for language learners.

In their paper, Enhancing EFL students’ human-centred AI literacy and English motivation via ChatGPT co-writing, Hsin-Yi Huang, Chiung-Jung Tseng and Ming-Fen Lo described an advanced writing course where students took a translanguaging approach to strategically using ChatGPT for writing support. They mentioned the importance of students developing critical and creative skills to apply to all generative AI use. Teachers have an important role, they suggested, in guiding students to embrace human-centred values and creativity.

In their paper, Beyond standard English: AI-driven language learning for inclusivity in South Korea, So-Yeun Ahn, Han Jieun, Park Junyeong and Kim Suin demonstrated an AI chatbot capable of conversing in multiple World Englishes and responding to students’ comments and questions in a conversational format. They reported on a study with 200 Korean learners, where one group could access multiple World Englishes and the other was exposed only to Inner Circle Englishes. Initially students indicated more satisfaction with Inner Circle Englishes, but the World Englishes group tended to produce more complex utterances to elaborate on their personal experiences. It was found that Inner Circle accents reinforced traditional curricular topics, while World Englishes accents elicited more diverse, identity-linked or cultural topics. World Englishes changed how students talked; AI wasn’t only a language partner but also a cultural persona, with students positioning the AI as representative of a national identity. This does however raise ethical and pedagogical questions about how AI voices should represent national identities.

In their presentation, Feedback in the generative AI era: A critical review of generative AI for L2 written feedback, Peter Crosthwaite and Sun Shuyi (Amelia) described a PRISMA-informed scoping review of 51 empirical studies of genAI feedback on L2 writing. The most common aim of such research was to identify an impact from ChatGPT on improving writing; in second place was reporting student/teacher perceptions of genAI feedback; third was comparing genAI to teacher-generated feedback. Most of the studies looked at post-writing feedback, rather than feedback at the planning or early stages of writing; this is an area for future investigation. Many studies showed significant gains, with teacher and AI feedback being complementary, and a combination of teacher and AI feedback being more effective than either alone. One methodological problem identified was that assessments of writing quality were not necessarily related to the type of genAI feedback; another was small sample sizes; yet another was that many studies did not provide information on the genAI prompts used. Overall, it would seem that there are gains from genAI L2 writing feedback, but the field is still developing.

In their paper, TPACK co-construction of pre- and in-service EFL teachers toward the integration of corpus and AI, Oktavianti Ikmi Nur and Qing Ma described the Indonesian school teaching context where internet access is quite varied, and where pre-service teachers are often tech-savvy but lack pedagogical expertise, while in-service teachers are strong in pedagogy but lack digital skills. They described a study looking at how pre-service and in-service teachers co-construct TPACK with respect to corpus linguistics and AI. It was suggested that AI tools are now becoming normalised in teaching, while there is less familiarity with corpus tools. Collaboration between pre-service and in-service teachers is an important mechanism for bridging technology gaps in teacher repertoires.

In their presentation, A large-scale mixed-methods study of Japanese university students’ use of ChatGPT for L2 learning, Louise Ohashi and Suwako Uehara reported on a July 2024 survey of 2,521 Japanese university students looking at whether and how they were using ChatGPT. It was found that 50.1% were using it for language learning, with a focus on writing, grammar and vocabulary. They reported on the top six themes to emerge: feedback on own work; translation; vocabulary; grammar; conversation practice; paraphrase or summarise. Around 2% of responses indicated academic integrity issues ranging from obvious cheating to suspected misuse. They concluded that students were typically using ChatGPT for basic support, with deeper exploration limited; generative AI literacy is needed.

In their paper, Fostering inclusivity through the incorporation of e-readers in online Mandarin classrooms, Liu Chuan and Jing Huang suggested that e-readers can be powerful tools for inclusion, autonomy and plurilingualism; can promote student agency and choice; and can support equitable, responsible and identity-affirming teaching practices. They reported on critical dialogue between teachers which led to reflective, transformative practice.

In his paper, The effects of a personalised learning plan in a language MOOC on learners’ oral presentation skills, Naptat Jitpaisarnwattana quoted Downes’ (2016) distinction between personalised learning, which involves tailoring learning to individual needs, and personal learning, which emphasises learner autonomy and self-directed engagement. Considerations include technology infrastructure, learner traits, learner culture, and the nature of the subject matter. He reported on a study of personalisation in MOOCs which found that many students did not follow personalised pathways to the end, with those who deviated from the prescribed pathways showing a greater preference for more autonomous approaches to their learning. He then went on to describe a more recent study looking at 178 science major students in an Academic English course in Thailand, where a personalised learning plan was generated for each student, with course analytics used to track whether students followed their plan. Significant improvements were found across a range of language areas and skills. He concluded that personalisation is often associated with adaptive learning, where students are offered adaptive recommendations (a strong form of personalisation), while here the personalised learning plan provided students with a possible pathway tailored to their profiles and self-identified needs, leaving students some room for autonomy (a weak form of personalisation). This weaker form of personalisation may be a valuable feature in an era increasingly dominated by algorithm-driven edtech.

It’s clear that, while we are still in the early days of generative AI adoption in education, important understandings are already beginning to emerge concerning the need for both educators and students to develop AI literacy, and the need to take a nuanced and judicious approach to using genAI if it is to support diversity, equity and inclusion.

Gen AI takes centre stage

Hoan Kiem Lake, Hanoi, Vietnam. Photo by Mark Pegrum, 2024. May be reused under CC BY 4.0 licence.

GloCALL Conference
Hanoi, Vietnam
22-24 August, 2024

Unsurprisingly, the 2024 GloCALL Conference in Hanoi was dominated by discussions and debates about generative AI, as educators and educational institutions seek to come to terms with its uses and challenges. While there was a general acknowledgement that genAI is having and will continue to have a major impact on education, numerous speakers sounded notes of caution about the need to keep its novelty and power in perspective, to use it in line with established pedagogical principles, and to be wary of its pitfalls.

In his opening keynote, The transformative role of technology in language teaching and learning: Seeing through the hype, Glenn Stockwell began by noting that digital technologies change how and what we learn, and even our goals for learning, but they may sometimes sit beneath the surface of our awareness. He spoke about the rapid development of generative AI, and indicated that there are sometimes differences in teacher and student perspectives – teachers may worry that students will cheat, while students worry that they will be accused of cheating. Educational institutions are currently sending mixed messages about genAI; clear guidelines for usage are needed. He mentioned the AI paradox: the idea that teachers are using AI to create tasks that students are using AI to complete. He also mentioned Bryson and Hand’s 2008 notion of false engagement: students may engage in tasks simply for the sake of completing them, if they don’t fully understand their pedagogical purposes.

Although genAI brings enormous changes, we can learn a lot from discussions of the arrival of past educational technologies, and should remember that good pedagogical practices must remain the core of what we do. Among its possible pedagogical uses, genAI can be used as a writing assistant, or as a chatbot to provide non-judgemental feedback. However, he suggested that the relationship between technology and autonomy is tenuous; it is doubtful whether genAI promotes learner autonomy, though it is certainly used by autonomous learners. It is definitely necessary to rethink assessment, and ensure it is preparing students appropriately for today’s world. We should consider assessing both the process and the product.

Current research on AI in language education is still largely focused on perceptions and impressions, but we need research on actual practices. Teachers, meanwhile, have a sense of precarity with technology (Stockwell & Wang, 2024), with concerns over job security, funding cuts, workload, and training. This raises the issue of digital wellbeing in an AI era (Bentley et al., 2024). Legal and ethical issues also loom large. Much more discussion is needed of these issues, locally and globally.

In his plenary, Artificial intelligence in the language education context of Vietnam: From theory to practice, Nguyen Ngoc Vu traced the history of the development of neural networks from the early 1980s onwards, explaining the increasing parameters as GPT was developed. He referred to the biological theory of emergent properties (Saltzer & Kaashoek, 2009), that is, properties not present in the individual components of a system, but which show up when those components are combined. He argued that large language models (LLMs) have the potential to dramatically impact the landscape of education. One issue with the production of certain materials, such as videos, is that AI-made materials may seem too perfect in comparison with human-made materials.

He demonstrated the TARI AI Tools developed by the Training and Applied Research Institute (TARI) at the Ho Chi Minh City University of Foreign Languages and Information Technology (HUFLIT). These include chatbots for general university inquiries; teaching assistant chatbots with domain-specific knowledge; and healthcare chatbots that can draw information from reputable health information sources. He then went on to demonstrate tools designed to improve the teaching of linguistics, such as tools to parse or analyse texts according to particular linguistic frameworks. Students who have tried these tools have reacted positively but have stressed some ethical issues: the need for informed consent, anonymisation, legal/copyright compliance, review by ethics boards, and transparency and training.

In his presentation, The impact and perception of using an AI writing platform to improve narrative essay writing performance, Wang Yi (with his supervisor, Kean Wah Lee, as a co-author) described an AI narrative writing prototype tool called ‘Tale-It’, where students answer set questions step-by-step, describing the opening scene, setup, inciting incident, and so on. Students also have the option to obtain suggestions for improving their expression in terms of vocabulary or grammar. Students are able to compare their own original stories and the AI-supported stories side-by-side. In a study to examine the effect of the AI tool on improving students’ narrative writing, a significant pre-test to post-test improvement was found. In a study of student perceptions involving a survey enriched by interviews, all participants agreed that the tool improved their writing, two-thirds that it improved their confidence, and the majority that it helped facilitate their understanding of narrative structure and increased their creativity. Ultimately it improved not only their writing performance, but also their understanding of genre structure, their creativity, their confidence in their storytelling abilities, and their expression and grammar.

In his presentation, Generative AI-powered critical reading in academic contexts: An exploratory study, Haoming Lin listed some affordances of genAI technology: contextual understanding, coherent responses, reinforcement learning from human feedback, and a multilingual environment. He described three levels of reading comprehension: literal meaning, interpretative/inferential meaning, and evaluative/critical meaning. He reported on a pilot study in China examining which dimensions of critical reading postgraduate students found most and least supported by ChatGPT, and what the best and worst aspects were of using ChatGPT for critical reading support. Students were provided with readings and critical reading reports from a GPT-based Chrome extension app, Full Picture, which analyses papers according to overall reliability, reading time, three takeaways, content analysis, trustworthiness and bias check, and research topics. Students felt the app helped them to evaluate the arguments, evidence and generalisability of texts, but didn’t provide much help in comparing and contrasting the findings with others’ work, or evaluating how well theoretical frameworks were applied. At the literal level, it could provide new (if sometimes irrelevant) perspectives and allow quick comprehension; at the interpretative level it could be relevant to personal reading goals, inspire readers, and answer questions; and at the evaluative level it could align with personal beliefs in critical reading. Ultimately what is needed is a partnership with AI rather than reliance on AI; development of AI literacy; and development of critical reading and writing together.

In my closing keynote of the conference, Not a(nother) revolution! Generative AI, language and literacy, I wrapped up our discussions of genAI by arguing that it will not revolutionise education (any more than any previous technologies have done) but that, used appropriately, it could help to support the evolution of education in areas where change may be needed. I began by looking at the technology itself and how it is developing and is likely to develop in the future; then I looked at the educational and assessment implications, and concluded that the future of study and work will belong to human-AI collaborations; and finally I looked at the societal and environmental implications, and stressed the need to maintain a critical perspective on genAI tools. Ultimately, all of us, educators and students, need to develop the AI literacy to ensure that genAI is being used appropriately and effectively to support the ongoing evolution of education.

In her presentation, Advancing TPACK: Unravelling contextual knowledge (XK) among Indonesian secondary school teachers, Ella Harendita (with her supervisors, Grace Oakley and myself, as co-authors) explored how teachers at various schools develop and employ their different levels of contextual knowledge. This XK influences their approaches to content (e.g., knowledge about students’ daily lives and values), pedagogy (e.g., knowledge about students’ learning preferences and levels of ability), and technology (e.g., knowledge about students’ technology access and interests). She concluded that teacher agency is a driving force in XK development; that teachers capitalise on the relational, collectivist culture of Indonesia to develop XK; that teachers engage in self-directed professional learning, for example on social media; and that classroom contexts are the major determining factors for teachers’ pedagogical decisions.

In his featured presentation, ‘I take it as a defeat if I work alone’: CALL, co-operation and professional development, Chau Meng Huat referred to Anne Burns’ statement that TESOL has only recently undergone a ‘collaborative turn’ in professional development and research. He spoke about key beliefs which can underpin successful collaborations, including positive interdependence (from the area of co-operative learning), abundance not scarcity, being more rather than having more, and kampung (community, village) spirit. He finished by quoting Betsy Rymes’ comment that we need to move from ‘applied’ to ‘collaborative’ linguistics. He suggested that issues of diversity, equity and inclusivity come to the fore in such collaborative approaches.

In his keynote, Integrating CALL to teach ESL and STEM: Interdisciplinary critical pedagogical approach, Kean Wah Lee described a design-based research project based on McKenney and Reeves’ 2019 framework, using four stages: analysis and exploration; design; implementation; and evaluation and reflection. Such a project, he said, has a collaborative, iterative nature focusing on practical application.

He stressed the importance of a project being tailored to its context, and he spent some time describing the issues for STEM learners in Malaysia, many of whom face discipline-specific language challenges. In addition, heterogeneous learners – some with interrupted schooling or trauma – need tailored English support. Work is underway on trying to shift teacher-centred approaches towards inquiry-based learning involving active participation and critical thinking. Educational strategies must focus on enhancing language proficiency alongside STEM content learning. Indeed, the integration of STEM and English language teaching has emerged as a global trend in recent years, to support student success in both areas, but there is still a need for more research in this area. He proposed an interdisciplinary multiple learning approach (bringing in blended learning, CLIL, problem-based learning, inquiry-based learning, and project-based learning) involving CALL/technology, which allows for multimodal teaching and innovative pedagogical practices. But this must also be a critical pedagogical approach, drawing on critical theory, and involving critical pedagogical competence, critical technological competence, and critical cross-cultural communicative competence.

He described a key project outcome, namely the Gene Detective e-Learning Module/Toolkit for STEM-EL; each ‘capsule’ in the digital platform involves a pre-test, a video, interactive activities, a virtual experiment and a case study, and is accompanied by hard copy activity books (these are extremely important in low bandwidth areas). The project is now in an evaluation stage. Students have reacted positively to the materials to date. Data collection is ongoing to improve the toolkit and make it culturally appropriate and relevant for classroom use. It is hoped that it can eventually be adapted for use in Malaysian school biology classrooms.

It was a conference full of informative presentations and rich discussions. It will be interesting to see how our discussions of technology in education – and especially genAI in education – have continued to evolve when we gather again at future GloCALL conferences.

Grappling with AI

Tsim Sha Tsui, Hong Kong. Photo by Mark Pegrum, 2024. May be reused under CC BY 4.0 licence.

2024 Q2 Update
Singapore & Hong Kong, SAR China
April-May, 2024

In April this year, I co-presented a workshop on AI literacy for schools: Principles, practices and problems for the Academy of Principals, Singapore (9 April; with Grace Oakley) and presented a seminar on Generative AI and the evolution of education for Hong Kong Baptist University (30 April). In addressing, firstly, an audience of schoolteachers and Ministry of Education staff in Singapore and, secondly, tertiary educators from across Hong Kong, it became clear that everyone, across countries and education levels, is grappling with similar challenges as we seek to balance the opportunities and risks for teaching and learning presented by generative AI.

In my own presentations, I began by zooming out to look at the big picture of the technology itself and how it has developed and is developing; continued by zooming in to look at the implications for education and assessment; zoomed out again to look at challenges from the pedagogical to the societal; and concluded by emphasising the need for both educators and students to acquire AI literacy.

Discussions during and after these sessions revealed that many educators are keen to explore how gen AI can support their students’ learning and help them develop skills they will need in future workplaces, but that there are pedagogical concerns over how to teach and assess in this era, and ethical concerns over issues ranging from privacy and surveillance through to the environmental impact.

And rightly so. As I argued in a podcast on Digital ethics for Hong Kong Baptist University (3 May), gen AI is a new, more powerful stage of technology development and therefore potentially more valuable and potentially more risky at the same time. The task before us is balancing out the value and the risks. This will keep educators very busy in years to come as we seek to develop our own AI literacy and that of our students, and, I hope, offer some public leadership in this area.

At the interface of AI and language learning

Melbourne Skyline from Southbank, Australia. Photo by Mark Pegrum, 2023. May be reused under CC BY 4.0 licence.

VicTESOL Symposium
Melbourne, Australia
13 October, 2023

I was invited to be a member of a panel on Generative AI in EAL learning: Promises and challenges at the VicTESOL Symposium held at the Victorian Academy of Teaching and Leadership in North Melbourne. Hosted by Melissa Barnes (La Trobe University) and Katrina Tour (Monash University), the other members of this 3-person panel were Shem Macdonald and Alexia Maddox (both from La Trobe University). Perhaps reflecting the degree of interest in this area, the panel ran twice, with different audiences.

We started off each time by considering the opportunities presented by generative AI in terms of language learning inside classrooms (explaining vocabulary or grammar points; acting as a concordancer to provide examples of language-in-use; improving language, register and style; creating self-study revision questions; collaborative story-writing; and engaging in immersive conversation, with AI acting as a Socratic tutor – an approach currently being explored by the likes of the Khan Academy and Duolingo in its Max premium subscription version) as well as in terms of preparation for present and future life needs outside classrooms (including the need to use AI in professional workplaces, as well as when interacting with chatbots and automated services provided by government organisations and corporations).

We then quickly moved on to discussing the challenges raised by generative AI, and the need for teachers and students to take a critical stance towards this rapidly evolving technology. In particular, this entails the development of AI literacy, which intersects with a number of other key digital literacies: prompt literacy, search literacy, attentional literacy and, perhaps above all, information literacy and critical literacy. We should also remember that not all students are ready or able to use this technology: accessibility is a major issue for many, especially in communities of recent migrants and refugees. Neither are all teachers ready: in some cases, some of our students may have more awareness of and facility with the technology than we do, but it’s crucial that we upskill ourselves and help students develop the aforementioned critical perspective that may sometimes be missing.

Questions and comments from the audiences at both panels were revealing: it’s clear that for many educators, the initial wave of consternation that accompanied the release of ChatGPT and the ensuing wave of genAI tools has subsided, and teachers are finding productive ways to build such technologies into their teaching, their students’ learning activities, and even their assessments. Our reflective conversations and exchanges of ideas about how best to incorporate these technologies into education augur well for the future.

In coming years, we’ll no doubt be hearing a lot more presentations and panels about generative AI and its place in language learning and education more broadly. Meanwhile, photos from the panel are available on Twitter/X.
