AILA World Congress
Rio de Janeiro, Brazil
23-28 July 2017


Praia da Barra da Tijuca, Rio de Janeiro, Brazil. Photo by Mark Pegrum, 2017. May be reused under CC BY 4.0 licence.

Having participated in the last two AILA World Congresses, in Beijing in 2011 and in Brisbane in 2014, I was delighted to be able to attend the 18th World Congress, taking place this time in the beautiful setting of Rio de Janeiro, Brazil. This year’s theme was “Innovations and Epistemological Challenges in Applied Linguistics”. As always, the conference brought together a large and diverse group of educators and researchers working in the broad field of applied linguistics, including many with an interest in digital and mobile learning, and digital literacies and identities. Papers ranged from the highly theoretical to the very applied, with some of the most interesting presentations actively seeking to build much-needed bridges between theory and practice.

In her presentation, E-portfolios: A tool for promoting learner autonomy?, Chung-Chien Karen Chang suggested that e-portfolios increase students’ motivation, promote different assessment criteria, encourage students to take charge of their learning, and stimulate their learning interests. Little (1991) looked at learner autonomy as a set of conditional freedoms: learners can determine their own objectives, define the content and process of their learning, select the desired methods and techniques, and monitor and evaluate their progress and achievements. Benson (1996) spoke of three interrelated levels of autonomy for language learners, involving the learning process, the resources, and the language. Benson and Voller (1997) emphasised four elements that help create a learning environment to cultivate learner autonomy, namely when learners can:

  • determine what to learn (within the scope of what teachers want them to learn);
  • acquire skills in self-directed learning;
  • exercise a sense of responsibility;
  • be given independent situations for further study.

Learners who are intrinsically motivated are more self-regulated; extrinsically motivated activity, by contrast, is less autonomous and more controlled. Either way, students gain the psychological impetus to move forward.

The use of portfolios provides an alternative form of assessment. A portfolio can echo a process-oriented approach to writing. Within a multi-drafting process, students can check their own progress and develop a better understanding of their strengths and weaknesses. Portfolios offer multi-dimensional perspectives on student progress over time. The concept of the e-portfolio is not yet fully fixed, but encompasses both collections of tools for working with e-portfolio items and collections of items assembled to demonstrate competence.

In a study with 40 sophomore and junior students, all students’ writing tasks were collected in e-portfolios constituting 75% of their grades. Many students agreed that writing helped improve their mastery of English, their critical thinking ability, their analytical skills, and their understanding of current events. They agreed that their instructor’s suggestions helped them improve their writing. Among the 40 students assessed on the LSRQ survey, the majority showed intrinsic motivation. Students indicated that the e-portfolios gave them a sense of freedom, and allowed them to challenge and ultimately compete against themselves.

Gamification emerged as a strong conference theme. In her paper, Action research on the influence of gamification on learning IELTS writing skills, Michelle Ocriciano indicated that the aim of gamification, which has been appropriated by education from the fields of business and marketing, is to increase participation and motivation. Key ‘soft gamification’ elements include points, leaderboards and immediate feedback; while these do not constitute full gamification, they can nevertheless have benefits. She conducted action research to investigate the question: how can gamification be applied in a Moodle setting to influence IELTS writing skills? She found that introducing gamification elements into Moodle – using tools such as GameSalad, Quizlet, ClassTools, Kahoot! and Quizizz – not only increased motivation but also improved students’ spelling, broadened their vocabulary, and decreased the time they needed for writing, leading to increases in their IELTS writing scores. To some extent, students were developing exam-wiseness. The most unexpected aspect was that her feedback as the teacher increased in effectiveness, because students shared her individual feedback with peers through a class WhatsApp group. In time, students also began creating their own games.
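To give a concrete flavour of the ‘soft gamification’ elements mentioned above, points and a leaderboard can be derived from simple activity records. The sketch below is purely illustrative: the activity names, point values and student names are invented, not drawn from Ocriciano’s study.

```python
# Illustrative sketch of 'soft gamification': awarding points for
# writing-practice activities and ranking students on a leaderboard.
# Activity names and point values are hypothetical.

POINTS = {"draft_submitted": 10, "quiz_completed": 5, "peer_feedback": 8}

def leaderboard(activity_log):
    """activity_log: list of (student, activity) tuples.
    Returns (student, total_points) pairs, highest score first."""
    totals = {}
    for student, activity in activity_log:
        totals[student] = totals.get(student, 0) + POINTS.get(activity, 0)
    return sorted(totals.items(), key=lambda item: -item[1])

log = [("Ana", "draft_submitted"), ("Ben", "quiz_completed"),
       ("Ana", "peer_feedback"), ("Ben", "draft_submitted")]
print(leaderboard(log))  # [('Ana', 18), ('Ben', 15)]
```

The immediate feedback element would then amount to showing students their updated totals and rankings as soon as an activity is logged.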

The symposium Researching digital games in language learning and teaching, chaired by Hayo Reinders and Sachiko Nakamura, naturally also brought gaming and gamification to the fore in a series of presentations.

In their presentation, Merging the formal and the informal: Language learning and game design, Leena Kuure, Salme Kälkäjä and Marjukka Käsmä reported on a game design course taught in a Finnish high school. Students would recruit their friends onto the course, and some even repeated the course for fun. It was found that the freedom given to students did not necessarily mean that they took more responsibility, but rather this varied from student to student. Indeed, the teacher had a different role for each student, taking or giving varying degrees of responsibility. Students chose to use Finnish or English, depending on the target groups for the games they were designing.

The presenters concluded that in a language course like this, language is not so much the object of study (where it is something ‘foreign’ to oneself) but rather it is a tool (where it is part of oneself, and part of an expressive repertoire). Formal vs informal, they said, seems to be an artificial distinction. The teacher’s role shifts, with implications for assessment, and a requirement for the teacher to have knowledge of individual students’ needs. The choice of project should support language choice; this enables authentic learning situations and, through these, ‘language as a tool’ thinking.

In her presentation, The role of digital games in English education in Japan: Insights from teachers and students, Louise Ohashi began by referencing the gaming principles outlined in the work of James Paul Gee. She reported on a study of students’ experiences of and attitudes to using digital games for English study, as well as teachers’ experiences and attitudes. She surveyed 102 Japanese university students, and 113 teachers from high schools and universities. Students, she suggested, are not as interested as teachers in distinguishing ‘real’ games from gamified learning tools.

While 31% of students had played digital games in English in class over the previous 12 months, 50% had done so outside class, suggesting a clear trend towards out-of-class gaming. The games they reported playing covered the spectrum from general commercial games to dedicated language learning or educational games. Far more students than teachers thought games were valuable aids to study inside and outside class, as well as for self-study. Only 30% of students said that they knew of appropriate games for their English level, suggesting an area where teachers might be able to intervene more.

In fact, most Japanese classrooms are quite traditional learning spaces – often with blackboards and wooden desks, and no wifi – which do not lend themselves to gaming in class. While some teachers use games, many avoid them. One teacher surveyed thought students wouldn’t be interested in games; another worked at a school where students were not allowed to use computers or phones; another thought the school and parents would disapprove; others emphasised the importance of a focus on academic coursework rather than gaming; and still others objected to the idea that foreign teachers in Japan are supposed to entertain students. She concluded that most students were interested in playing games but most teachers did not introduce them, by choice or otherwise, possibly representing a missed opportunity.

In her presentation, Technology in support of heritage language learning, Sabine Little reported on an online questionnaire with 112 respondents, examining how families from heritage language backgrounds use technology to support heritage language literacy development for their primary school students. Two thirds of the families spoke two or more heritage languages in the home. She found that where there were children of different ages, use of the heritage language would often decrease for younger children.

Parents were gatekeepers of both technology use and choices of apps; but many parents didn’t have the technological understanding to identify apps or games their children might be interested in. Many thought that there were no apps in their language. Some worried about health issues; others worried about cost. There are both advantages and disadvantages in language learning games; many of these have no cultural content as they’re designed to work with more than one language. Similarly, authentic language apps have both advantages (e.g., they feel less ‘educational’) and disadvantages (e.g., they may be too linguistically difficult). Nevertheless, many parents agreed that their children were interested in games for language learning, and more broadly in learning the heritage language.

All in all, this is an incredibly complex field. How children engage with heritage language resources is linked to their sense of identity as pluricultural individuals. Many parents are struggling with the ‘bad technology’/’good language learning opportunity’ dichotomy. In general, parents felt less confident about supporting heritage language literacy development through technology than through books.

In my own presentation, Designing for situated language and literacy: Learning through mobile augmented reality games and trails, I discussed the places where online gaming meets the offline world. I focused on mobile AR gamified learning trails, drawing on examples of recent, significant, informative projects from Singapore, Indonesia and Hong Kong. The aim of the presentation was to whet the appetite of the audience for the possibilities that emerge when we bring together online gaming, mobility, augmented reality, and language learning.

AR and big data were also important conference themes. In his paper, The internet of things: Implications for learning beyond the classroom, Hayo Reinders suggested that algorithmic approaches like Bayesian Networks, Nonnegative Matrix Factorization, Random Forests, and Association Rule Mining are beginning to help us make sense of vast amounts of data. Although they are not familiar to most of today’s teachers, they will be very familiar to future teachers. We are gradually moving from reactive to proactive systems, which can predict future problems in areas ranging from health to education. Current education is completely reactive; we wait for students to do poorly or fail before we intervene. Soon we will have the opportunity to change to predictive systems. All of this is enabled by the underpinning technologies becoming cheaper, smaller and more accessible.
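To make one of these techniques a little more tangible, here is a minimal sketch of Association Rule Mining over hypothetical student-activity records: how often a behaviour co-occurs with an outcome (support), and how reliably the behaviour predicts the outcome (confidence). The data and behaviour labels are invented for illustration only.

```python
# Minimal Association Rule Mining sketch over invented student records.

def support(records, items):
    """Fraction of records containing all the given items."""
    return sum(items <= r for r in records) / len(records)

def confidence(records, antecedent, consequent):
    """Estimated P(consequent | antecedent) from the records."""
    return support(records, antecedent | consequent) / support(records, antecedent)

# Each set lists behaviours/outcomes observed for one (fictional) student.
records = [
    {"missed_deadlines", "low_forum_activity", "failed"},
    {"missed_deadlines", "failed"},
    {"high_forum_activity", "passed"},
    {"missed_deadlines", "low_forum_activity", "passed"},
]
print(confidence(records, {"missed_deadlines"}, {"failed"}))  # ≈ 0.67
```

A proactive system in Reinders’ sense would look for rules of this kind in live data and intervene before the predicted outcome occurs, rather than after.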

He spoke about three key areas of mobility, ubiquity, and augmentation. Drawing on Klopfer et al. (2002), he listed five characteristics of mobile technologies which could be turned into affordances for learning: portability; social interactivity; context sensitivity; connectivity; and individuality. These open up a spectrum of possibilities, he indicated, where the teacher’s responsibility is to push educational experiences towards the right-hand side of each pair:

  • Disorganised – Distributed
  • Unfocused – Collaborative
  • Inappropriate – Situated
  • Unmanageable – Networked
  • Misguided – Autonomous

Augmentation is about overlaying digital data, ranging from information to comments and opinions, on real-world settings. Users can add their own information to any physical environment. Such technologies allow learning to be removed from the physical constraints of the classroom.

With regard to ubiquity, when everything is connected to everything else, there is potentially an enormous amount of information generated. He described a wristband that records everything you do, 24/7, and forgets it after two minutes, unless you tap it twice to save what has been recorded and have it sent to your phone. Students can use this, for example, to save instances of key words or grammatical structures they encounter in everyday life. Characteristics of ubiquity that have educational implications include the following:

  • Permanency can allow always-on learning;
  • Accessibility can allow experiential learning;
  • Immediacy can allow incidental learning;
  • Interactivity can allow socially situated learning.

He went on to outline some key affordances of new technologies, linked to the internet of things, for learning:

  • Authentication for attendance when students enter the classroom;
  • Early identification and targeted support;
  • Adaptive and personalised learning;
  • Proactive and predictive rather than reactive management of learning;
  • Continuous learning experiences;
  • Informalisation;
  • Empowerment of students through access to their own data.

He wrapped up by talking about the Vital Project that gives students visualisation tools and analytics to monitor online language learning. Research has found that students like having access to this information, and having control over what information they see, and when. They want clear indications of progress, early alerts and recommendations for improvement. Cultural differences have also been uncovered in terms of the desire for comparison data; the Chinese students wanted to know how they were doing compared with the rest of the class and past cohorts, whereas non-Chinese did not.

There are many questions remaining about how we can best make use of this data, but it is already coming in a torrent. As educators, we need to think carefully about what data we are collecting, and what we can do with it. It is only us, not computer scientists, who can make the relevant pedagogical decisions.

In his paper, Theory ensembles in computer-assisted language learning research and practice, Phil Hubbard indicated that the concept of theory was formerly quite rigidly defined, and involved the notion of offering a full explanation for a phenomenon. It has now become a very fluid concept. Theory in CALL, he suggested, means the set of perspectives, models, frameworks, orientations, approaches, and specific theories that:

  • offer generalisations and insights to account for or provide greater understanding of phenomena related to the use of digital technology in the pursuit of language learning objectives;
  • ground and sustain relevant research agendas;
  • inform effective CALL design and teaching practice.

He presented a typology of theory use in CALL:

  • Atheoretical CALL: research and practice with no explicit theory stated (though there may be an implicit theory);
  • Theory borrowing: using a theory from SLA, etc, without change;
  • Theory instantiation: applying a general theory that has a place for technology and/or SLA (e.g., activity theory);
  • Theory adaptation: changing one or more elements of a theory from SLA, etc, in anticipation of or in response to the impact of the technology;
  • Theory ensemble: combining multiple theoretical entities in a single study to capture a wider range of perspectives;
  • Theory synthesis: creating a new theory by integrating parts of existing ones;
  • Theory construction: creating a new theory specifically for some sub-domain of CALL;
  • Theory refinement: cycles of theory adjustment based on accumulated research findings.

He went on to provide some examples of research approaches based on theory ensembles. We’re just getting started in this area, and it needs further study and refinement. Theory ensembles seem to occur especially in CALL studies involving gaming, multimodality, and data-driven learning. Theory ensembles may be ‘layered’, with a broad theory providing an overarching approach or orientation, and complementary narrower theoretical entities providing focus. Similarly, members of a theory ensemble have different functions and therefore different weights in the overall picture. Some can be more central than others. A distinction might be made, he suggested, between one-time ensembles assembled for a given problem and context, and more stable ones that could lead to full theory syntheses. Finally, each ensemble member should have a clear function, and together they should lead to a richer and more informative analysis; researchers and designers should clearly justify the membership of ensembles, and reviewers should see that they do so.

Intercultural issues surfaced in many papers, perhaps most notably in the symposium Felt presence, imagined presence, hyper-presence in online intercultural encounters: Case studies and implications, chaired by Rick Kern and Christine Develotte. It was suggested by Rick Kern that people often imagine online communication is immediate, but in fact it is heavily technologically mediated, which has major implications for the nature of communication.

In their paper, Multimodality and social presence in an intercultural exchange setting, Meei-Ling Liaw and Paige Ware indicated that there is a lot of research on multimodality, communication differences, social presence and intercultural communication, but it is inconclusive and sometimes even contradictory. They drew on social presence theory, which postulates that a critical factor in the viability of a communication medium is the degree of social presence it affords.

They reported on a project involving 12 pre-service and 3 in-service teachers in Taiwan, along with 15 undergraduate Education majors in the USA. Participants were asked to use VoiceThread, which allows text, audio and video communication, and combinations of these. Communication was in English, and was asynchronous because of the time difference. It was found that the US students used video exclusively, but the Taiwanese used a mixture of modalities (text, audio and video). The US students found video easy to use, but some Taiwanese students worried about their oral skills and felt they could organise their thoughts better in text; however, other Taiwanese students wanted to practise their oral English. All partnerships involved a similar volume of words produced, perhaps indicating that the groups were mirroring each other. In terms of the types of questions posed, the Taiwanese asked far more questions about opinions; the American students were more cautious about asking such questions, and also knew little about Taiwan and so asked more factual questions. Overall, irrespective of the modality employed, the two groups of intercultural telecollaborative partners felt a strong sense of membership and thought that they had achieved a high quality of learning because of the online partnership.

As regards the pedagogical implications, students need to be exposed to the range of features available in order to maximise the affordances of all the multimodal choices. In addition to helping students consider how they convey a sense of social presence through the words and topics they choose, instructors need to attend to how social presence is intentionally or unintentionally communicated in the choice of modality. The issue of modality choice is also intimately connected to the power dynamic that can emerge when telecollaborative partnerships take place as monolingual exchanges.

In their paper, Conceptualizing participatory literacy: New approaches to sustaining co-presence in social and situated learning communities, Mirjam Hauck, Sylvie Warnecke and Muge Satar argued that teacher preparation needs to address technological and pedagogical issues, as well as sociopolitical and ecological embeddedness. Both participatory literacy and social presence are essential, and require multimodal competence. The challenge for educators in social networking environments is threefold: becoming multimodally aware; establishing their own social presence; and participating successfully in the collaborative creation and sharing of knowledge, so that they are well-equipped to model such abilities and participatory skills for their students.

Digital literacy/multiliteracy in general, and participatory literacy in particular, is reflected in language learners’ ability to comfortably alternate in their roles as semiotic responders and semiotic initiators, and the degree to which they can make informed use of a variety of semiotic resources. The takeaway from this is that being multimodally able and as a result a skilled semiotic initiator and responder, and being able to establish social presence and participate online, is a precondition for computer-supported collaborative learning (CSCL) of languages and cultures.

They reported on a study with 36 pre-service English teachers learning to establish social presence through web 2.0 tools. Amongst other things, students were asked to reflect on their social presence in the form of a Glogster poster referring to Gilly Salmon’s animal metaphors for online participation; students showed awareness that social presence is transient and emergent.

They concluded that educators need to be able to illustrate and model for their students the interdependence between being multimodally competent as reflected in informed semiotic activity, and the ability to establish social presence and display participatory literacy skills. Tasks like those in the training programme presented here, triggering ongoing reflection on the relevance of “symbolic competence” (Kramsch, 2006), social presence and participatory literacy, need to become part of CSCL-based teacher education.

In his presentation, Seeing and hearing apart: The dilemmas and possibilities of intersubjectivity in shared language classrooms, David Malinowski spoke about the use of high-definition video conferencing for synchronous class sessions in languages with small enrolments, working across US institutions.

It was found that technology presents an initial disruption which is overcome early in the semester, and does not prevent social cohesion. There is the ability to co-ordinate perspective-taking, dialogue, and actions with activity type and participation format. Synchronised performance, play and ritual may deserve special attention in addition to sequentially oriented events. History is made in the moment: durable learner identities inflect moment to moment, and there are variable engagements through and with technology. There are ongoing questions about parity of the educational experience in ‘sending’ and ‘receiving’ classrooms. Finally, there is a need to develop further tools to mediate the life-worlds of distance language learners across varying timescales.


Cristo Redentor, Rio de Janeiro, Brazil. Photo by Mark Pegrum, 2017. May be reused under CC BY 4.0 licence.

There were many presentations that ranged well beyond CALL, and to some extent beyond educational technologies, but which nevertheless had considerable contextual relevance for those working in CALL and MALL, and e-learning and mobile learning more broadly.

The symposium Innovations and challenges in digital language practices and critical language/media awareness for the digital age, chaired by Jannis Androutsopoulos, consisted of a series of papers on the nature of digital communication, covering themes such as the link between language use and language ideology; multimodality; and the use of algorithms. One key question, it was suggested in the introduction, is how linguistic research might speak to language education.

In their presentation, Critical media awareness in a digital age, Caroline Tagg and Philipp Seargeant stated that people’s critical awareness develops fluidly and dynamically over time in response to experiences online. They introduced the concept of context design, which suggests that context is collaboratively co-constructed in interaction through linguistic choices. The concept draws on the well-known notion of context collapse, but suggests that offline contexts cannot simply move online and collapse; rather, contexts are always actively constructed, designed and redesigned. Context design incorporates the following elements:

  • Participants
  • Online media ideologies
  • Site affordances
  • Text type
  • Identification processes
  • Norms of communication
  • Goals

They reported on a study entitled Creating Facebook (2014-2016). Their interviews revealed complex understandings of Facebook as a communicative space and the importance of people’s ideas about social relationships. These understandings shaped behaviour in often unexpected ways, in processes that can be conceptualised as context design. They concluded that the role of people’s evolving language/media awareness in shaping online experiences needs to be taken into account by researchers wishing to effectively build a critical awareness for the digital age.

In her paper, Why are you texting me? Emergent communicative practices in spontaneous digital interactions, Maria Grazia Sindoni suggested that multimodality arose as a reaction against language-driven approaches that sideline resources other than language. However, language itself has since been sidelined as a resource in mainstream multimodality research; it still needs to be studied, albeit on a par with other semiotic resources.

In a study of reasons for mode-switching in online video conversations, she indicated that the technical possibility of doing something does not equate with the semiotic choice of doing so. In the case of communication between couples, she noted a pattern where intimate communications often involve a switch from speech to text. She also presented a case where written language was used to reinforce spoken language; written conventions can thus be creatively resemiotised.

There are several layers of meaning-making present in such examples: creative communicative functions in language use; the interplay of semiotic resources other than language that are co-deployed by users to adapt to web-mediated environments (e.g., the impossibility of perfectly reciprocating gaze, em-/disembodied interaction, staged proxemics, etc); different technical affordances (e.g., laptop vs smartphone); and different communicative purposes and degrees of socio-semiotic and intercultural awareness. She concluded with a critical agenda for research on web-mediated interaction, involving:

  • recognising the different levels (above) and their interplay;
  • encouraging critical awareness of video-specific patterns in syllabus design and teacher training;
  • promoting understanding of what can hinder or facilitate interaction (also in an intercultural light);
  • distinguishing technical adaptivity from semiotic awareness.

In their paper, Digital punctuation: Practices, reflexivity and enregistrement in the case of <.>, Jannis Androutsopoulos and Florian Busch referred to David Crystal’s view that in online communication the period has almost become an emoticon, one which is used to show irony or even aggression. They went on to say that the use of punctuation in contemporary online communication goes far beyond the syntactic meanings of traditional punctuation; punctuation and emoticons have become semiotic resources and work as contextualisation cues that index how a communication is to be understood. There is currently widespread media discussion of the use of punctuation, including specifically about the disappearance of the period. They distanced themselves from Crystal’s view of “linguistic free love” and the breaking of rules in the use of punctuation on the internet, suggesting that there are clear patterns emerging.

Reporting on a study of the use of punctuation in WhatsApp conversations by German students, they found relatively low use of the period. This suggests that periods are largely being omitted, and when they do occur, they generally do so within messages, where they fulfil a syntactic function. They are very rare at the end of messages, where they may fulfil a semiotic function. For example, periods may be used for register switching, indicating a change to a more formal register; or to indicate unwillingness to participate in further conversation. Use of periods by one user may even be commented on by other users in a case of metapragmatic reflexivity. Interviewees commented that the use of periods at the end of messages is strange and annoying in the context of informal digital writing, especially as the WhatsApp bubbles already indicate the end of messages. One interviewee commented that the use of punctuation in general, and final periods in particular, can express annoyance and make a message appear harsher, signalling the bad mood of the writer. The presenters concluded that digital punctuation offers evidence of ongoing elaboration of new registers of writing in the early digital age.
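The message-internal versus message-final distinction at the heart of this analysis is easy to operationalise. The sketch below shows one plausible way to count the two positions; the sample messages are invented, and this is not the coding scheme the researchers themselves used.

```python
# Illustrative classification of periods as message-internal
# (syntactic) vs message-final (potentially semiotic).

def period_positions(messages):
    """Count periods occurring message-finally vs message-internally."""
    final = sum(1 for m in messages if m.rstrip().endswith("."))
    internal = sum(m.rstrip().rstrip(".").count(".") for m in messages)
    return {"final": final, "internal": internal}

msgs = ["ok see you at 8", "I'll come later. bring the notes", "Fine."]
print(period_positions(msgs))  # {'final': 1, 'internal': 1}
```

On the study’s account, a corpus of informal chat should show internal periods dominating and final periods – like the one in “Fine.” – standing out as marked.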

In his presentation, The text is reading you: Language teaching in the age of the algorithm, Rodney Jones suggested that we should begin talking to students about digital texts by looking at simple examples like progress bars; as he explained, these do not represent the actual progress of software installation but are underpinned by an algorithm that is designed to be psychologically satisfying, thus revealing the disparity between the performative and the performance.
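Jones’ progress-bar point can be illustrated with a toy easing function. The curve below is purely hypothetical – real installers use their own heuristics – but it shows the principle: the displayed percentage is a designed performance, not a measurement.

```python
# Toy 'psychologically satisfying' progress bar: the displayed value
# races ahead early and slows near completion, regardless of how the
# underlying work is actually progressing.

def displayed_progress(actual):
    """Map actual progress (0.0-1.0) to a displayed value using
    square-root easing: 25% real work shows as 50% done."""
    return actual ** 0.5

print(displayed_progress(0.25))  # 0.5
print(displayed_progress(1.0))   # 1.0
```

The gap between `actual` and `displayed_progress(actual)` is precisely the disparity between the performance and the performative that Jones highlighted.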

An interesting way to view algorithms is through the lens of performance. He reported on a study where his students identified and analysed the algorithms they encounter in their daily lives. He highlighted a number of key themes in our beliefs about algorithms:

  • Algorithmic Agency: ‘We sometimes believe the algorithm is like a person’; we may negotiate with the algorithm, changing our behaviour to alter the output of the algorithm
  • Algorithmic Authority (a term coined by Clay Shirky, who defines it as our tendency to believe algorithms more than people): ‘We sometimes believe that the algorithm is smarter than us’
  • Algorithm as Adversary: ‘We believe the algorithm is something we can cheat or hack’; this is seen in student strategies for altering TurnItIn scores, or in cases where people play off one dating app against another
  • Algorithm as Conversational Resource: ‘We think we can use algorithms to talk to others’; this can be seen for example when people tailor Spotify feeds to impress others and create common conversational interests
  • Algorithm as Audience: ‘We believe that algorithms are watching us’; this is the sense that we are performing for our algorithms, such as when students consider TurnItIn as their primary audience
  • Algorithm as Oracle: ‘We sometimes believe algorithms are magic’; this is seeing algorithms as fortune tellers or as able to reveal hidden truths, involving a kind of magical thinking

The real pleasure we find in algorithms is the sense that they really know us, but there is a lack of critical perspective and an overall capitulation to the logic of the algorithm, which is all about the monetisation of our data. There is no way we can really understand algorithms, but we can think critically about the role they play in our lives. He concluded with a quote from Ben Ratliff, a music critic at The New York Times: “Now the listener’s range of access is vast, and you, the listener, hold the power. But only if you listen better than you are being listened to”.

In her presentation, From hip-hop pedagogies to digital media pedagogies: Thinking about the cultural politics of communication, Ana Deumert discussed the privileging of face-to-face conversation in contemporary culture; a long conversation at a dinner party would be seen as a success, but a long conversation on social media would be seen as harmful, unhealthy, a sign of addiction, or at the very least a waste of time. Similarly, it is popularly believed that spending a whole day reading a book is good; but reading online for a whole day is seen as bad.

She asked what we can learn from critical hip-hop studies, which challenge discourses of school versus non-school learning. She also referred to Freire, who considered that schooling should establish a connection between learning in school and learning in everyday life outside school. New media, she noted, have offered opportunities to minorities, the disabled, and speakers of minority languages. If language is seen as free and creative, then it is possible to break out of current discourse structures. Like hip-hop pedagogies, new media pedagogies allow us to bring new perspectives into the classroom, and to address the tension between institutional and vernacular communicative norms through minoritised linguistic forms and resources. She went on to speak of Kenneth Goldsmith’s course Wasting Time on the Internet at the University of Pennsylvania (which led to Goldsmith’s book on the topic), where he sought to help people think differently about what is happening culturally when we ‘waste’ time online. However, despite Goldsmith’s comments to the contrary, she argued that online practices always have a political dimension. She concluded by suggesting that we need to rethink our ideologies of language and communication; to consider the semiotics and aesthetics of the digital; and to look at the interplay of power, practice and activism online.

Given the current global sociopolitical climate, it was perhaps unsurprising that the conference also featured a very timely strand on superdiversity. The symposium Innovations and challenges in language and superdiversity, chaired by Miguel Pérez-Milans, highlighted the important intersections between language, mobility, technology, and the ‘diversification of diversity’ that characterises increasing areas of contemporary life.

In his presentation, Engaging superdiversity – An empirical examination of its implications for language and identity, Massimiliano Spotti stressed the importance of superdiversity, but indicated that it is not a flawless concept. Since its original use in the UK context, the term has been taken up in many disciplines and used in different ways. Some have argued that it is theoretically empty (but maybe it is conceptually open?); that it is a banal revisitation of complexity theory (but their objects of enquiry differ profoundly); that it is naïve about inequality (but stratification and ethnocentric categories are heavily challenged in much of the superdiversity literature); that it lacks a historical perspective (he agreed with this); that it is neoliberal (the subject it produces is a subject that fits the neoliberal emphasis on lifelong learning); and that it is Eurocentric, racist and essentialist.

He went on to report on research he has been conducting in an asylum centre. Such an asylum-seeking centre, he said, is effectively ‘the waiting room of globalisation’. Its guests are mobile people, and often people with a mobile. They may be long-term, short-term, transitory, high-skilled, low-skilled, highly educated, low-educated, and may be on complex trajectories. They are subject to high integration pressure from the institution. They have high insertional power in the marginal economies of society. Their sociolinguistic, ethnic, religious and educational backgrounds are not presupposable.

In his paper, ‘Sociolinguistic superdiversity’: Paradigm in search of explanation, or explanation in search of paradigm?, Stephen May went back to Vertovec’s 2007 work, focusing on the changing nature of migration in the UK; ethnicity was too limiting a focus to capture the differences of migrants, with many other variables needing to be taken into account. Vertovec was probably unaware, May suggested, of the degree of uptake the term ‘superdiversity’ would see across disciplines.

May spoke of his own use of the term ‘multilingual turn’, and referred to Blommaert’s emphasis on three key aspects of superdiversity, namely mobility, complexity and unpredictability. The new emphasis on superdiversity is broadly to be welcomed, he suggested, but there are limitations. He outlined four of these:

  • the unreflexive ethnocentrism of western sociolinguistics and its recent rediscovery of multilingualism as a central focus; this is linked to a ‘presentist’ view of multilingualism, with a lack of historical focus
  • the almost exclusive focus on multilingualism in urban contexts, constituting a kind of ‘metronormativity’ compared to ‘ossified’ rural/indigenous ‘languages’, with the former seen as contemporary and progressive, thus reinforcing the urban/rural divide
  • a privileging of individual linguistic agency over ongoing linguistic ‘hierarchies of prestige’ (Liddicoat, 2013)
  • an ongoing emphasising of parole over langue; this is still a dichotomy, albeit an inverted one, and pays insufficient attention to access to standard language practices; it is not clear how we might harness different repertoires within institutional educational practices

In response to such concerns, Blommaert (2015) has spoken about paradigmatic superdiversity, which allows us not only to focus on contemporary phenomena, but also to revisit older data and see it in a new light. There are both epistemological and methodological implications, May went on to say. There is a danger, however, of a new orthodoxy which goes from ignoring multilingualism to fetishising or co-opting it. We also need to attend to our own positionality and the power dynamics involved in who is defining the field. We need to avoid superdiversity becoming a new (northern) hegemony.

In her paper, Superdiversity as reality and ideology, Ryuko Kubota echoed the comments of the previous speakers on human mobility, social complexity, and unpredictability, all of which are linked to linguistic variability. She suggested that superdiversity can be seen both as an embodiment of reality as well as an ideology.

Superdiversity, she said, signifies a multi/plural turn in applied linguistics. Criticisms include the fact that superdiversity is nothing extraordinary; many communities maintain homogeneity; linguistic boundaries may not be dismantled if analysis relies on existing linguistic units and concepts; and it may be a western-based construct with an elitist undertone. As such, superdiversity is an ideological construct. In neoliberal capitalism there is now a pushback against diversity, as seen in nationalism, protectionism and xenophobia. But there is also a complicity of superdiversity with neoliberal multiculturalism, which values diversity, flexibility and fluidity. Neoliberal workers’ experiences may be superdiverse or not so superdiverse; over and against linguistic diversity, there is a demand for English as an international language, courses in English, and monolingual approaches.

One emerging question is: do neoliberal corporate transnational workers engage in multilingual practices or rely solely on English as an international language? In a study of language choice in the workplace with Japanese and Korean transnational workers in manufacturing companies in non-English dominant countries, it was found that nearly all workers exhibited multilingual and multicultural consciousness. There was a valorisation of both English and a language mix in superdiverse contexts, as well as an understanding of the need to deal with different cultural practices. That said, most workers emphasised that overall, English is the most important language for business. Superdiversity may be a site where existing linguistic, cultural and other hierarchies are redefined and reinforced. Superdiversity in corporate settings exhibits contradictory ideas and trends.

In terms of neoliberal ideology, superdiversity, and the educational institution, she mentioned expectations such as the need to produce original research at a sustained pace; to conform to the conventional way of expressing ideas in academic discourse; and to submit to conventional assessment linked to neoliberal accountability. Consequences include a proliferation of trendy terms and publications; and little room for linguistic complexity, flexibility, and unpredictability. She went on to talk about who benefits from discussing superdiversity. Applied linguistics scholars are embedded in unequal relations of power. As theoretical concepts become fetishised, the theory serves mainly the interests of those who employ it, as noted by Anyon (1994). It is necessary for us to critically reflect, she said, on whether the popularity of superdiversity represents yet another example of concept fetishism.

In conclusion, she suggested that superdiversity should not merely be celebrated without taking into consideration historical continuity, socioeconomic inequalities created by global capitalism, and the enduring ideology of linguistic normativism. Research on superdiversity also requires close attention to the sociopolitical trend of increasing xenophobia, racism, and assimilationism. Ethically committed scholars, she said, must recognise the ideological nature of trendy concepts such as superdiversity, and explore ways in which sociolinguistic inquiries can actually help narrow racial, linguistic, economic and cultural gaps.

Rio de Janeiro viewed from Pão de Açúcar. Photo by Mark Pegrum, 2017. May be reused under CC BY 4.0 licence.

AILA 2017 wrapped up after a long and intensive week, with conversations to be continued online and offline until, three years from now, AILA 2020 takes place in Groningen in the Netherlands.

International connections

Hotel Ciputra, Jakarta, Indonesia, 8-9 November 2008

This year’s GloCALL Conference focused on Globalization and Localization in CALL, bringing together presenters and participants from a wide variety of countries to discuss their shared interest in the broad – and expanding – field of computer-assisted language learning. We spent two intensive days in the Hotel Ciputra, many floors above the busy, traffic-filled streets of the Indonesian capital, sharing international, national and local perspectives on technology-enhanced communication and collaboration, much of it facilitated by web 2.0 tools. Key themes included the fostering of collaboration and growth of community through CALL, and the vast range of CALL manifestations, each of which may be appropriate to different students in different contexts. There was a notable focus on the use of audio and/or video in conjunction with blogs, e-portfolios, digital storytelling, podcasting and m-learning.

Blogging was the focus of Penny Coutas’s session, Blogging for learning, teaching and researching languages, in which she demonstrated the principles behind blogging in an interactive paper-based exercise, before going on to outline the uses of blogs for learners, teachers and researchers. She stressed that the value of blogs lies as much in the interactions and community building that go on around them as it does in the actual blog postings themselves.

Podcasting was the focus of Wai Meng Chan’s plenary, Harnessing mobile technologies for foreign language learning: The example of podcasting. After reviewing the literature on podcasting, he described a research project conducted at NUS, which showed very positive overall student reactions to podcasting. He noted that podcasting can lead to a great variety of different kinds of language practice.

My own talk, entitled Web 2.0: Connecting the local and the global, discussed the ways in which a variety of web 2.0 tools, including blogs, wikis, RSS, podcasting, vodcasting and virtual worlds, can be used to connect the local and the global as part of the language learning process. These tools can help students not only to learn language, but also to begin to develop the local and global linguistic affiliations which are so important for today’s citizens.

There is continued interest in the area of e-portfolios, complemented by rapidly growing interest in digital storytelling, as reflected in a number of talks and workshops. Debra Hoven, in a paper entitled Digital storytelling and eportfolios for language teaching and learning, spoke of digital stories, whether collaborative or individual, as a valuable mode of communication. She noted that digital stories can be used for reflection, sharing, presentation, showcasing knowledge or skills, and can even function as part of or in conjunction with e-portfolios. Typical goals may include improvement of L1 and L2 literacy as well as multiliteracy skills, (re-)connecting with family, culture and traditions, and intergenerational communication. They can be a means of expression, an avenue of creativity, a way to make the mainstream curriculum more meaningful, and can help L2 learners to find their own voices. They are, ultimately, about language for real purposes and real audiences, involving practice in the following areas:

  • writing/scripting (grammar, vocabulary, syntax, genre, register, audience, interest)
  • communicating a message
  • organising ideas

The notion of community was also stressed by Peter Gobel in his paper, Digital storytelling: Capturing experience and creating community. He described a pilot project conducted with Japanese learners of English from Kyoto University, who were asked to create digital stories about key experiences on overseas language learning trips from which they had recently returned.

A number of language areas were involved:

  • topic choice – focus
  • narrative awareness – voice and audience
  • organisational skill – expression of ideas
  • mixed media (created and found objects)

In addition, students required scaffolding in multimedia and digital composition skills. Overall benefits of the exercise included:

  • debriefing after the trip
  • creating a database (to be consulted by future students travelling overseas)
  • reflection on learning experiences
  • comparison and sharing of experiences
  • creating a social network of shared experiences

There is also continued and even growing interest in open source software such as Moodle (which was covered in a number of presentations) and Drupal, as well as other freeware which can be used in language teaching. John Brine, in a paper entitled English language support for a computer science course using FLAX and Moodle, outlined developments around the New Zealand Digital Library Project run by the University of Waikato, with particular focus on the Greenstone Digital Library and the FLAX (Flexible Language Acquisition) Project, which allows language exercises to be created based on freely available material drawn from web sources such as Wikipedia and the Humanity Development Library. There is now a prototype version of a FLAX module for Moodle, which allows students to collaborate on language exercises.

Phil Hubbard’s plenary focused on the need for Integrating learner training into CALL classrooms and materials. He argued that CALL can give students more control over – and thus more responsibility for – their own learning, but that they are generally not prepared to take on this responsibility and so need training in this area. Reiterating the learner training principles he outlined at WorldCALL 2008, he concluded that it is not just the technology that matters; nor is it just a case of how teachers use the technology; rather, it is important to train learners to use it effectively. In his paper, entitled An invitation to CALL: A guided tour of computer-assisted language learning, he introduced the online site which underpins his own teacher training course, An invitation to CALL.

In her plenary, Individuals, community, communication and language pedagogy: Emerging technologies that are shaping and are being shaped by our field, Debra Hoven suggested that rather than using multiple, slightly different terms to describe different aspects of language learning with technology, we should work with one main term (such as CALL) to maintain cohesion in the field. She went on to argue against chronological classifications of CALL which, she said, do not really capture what people are doing with the technology. She proposed her own six-part model to capture the main roles of CALL:

  1. Instructional/tutorial CALL (language classroom applications, sites such as Randall’s ESL Lab)
  2. Discovery/exploratory CALL (simulations, roleplays, webquests)
  3. Communications CALL (CMC involving language for real communication purposes)
  4. Social networked CALL (blogging, microblogging, photosharing, SNS and social bookmarking)
  5. Collaborative CALL (notably wikis)
  6. Narrative/reflective CALL (digital storytelling and e-portfolios)

It became apparent in a number of talks that, while educators around the world share similar interests and concerns with the use of technology, there are also important geographical differences. In his opening plenary, entitled CALL implementation in Indonesia – Yesterday, today and tomorrow, Indra Charismiadji explained that obstacles to use of recent educational technologies in Indonesia include technological issues such as lack of hardware, software and internet connectivity; policy issues such as governmental and institutional support for behaviourist pedagogical approaches; teachers’ resistance to change; and a general lack of computer literacy. Computer-based teaching (which fits with a transmission pedagogy where the teacher remains in control) may represent a first step towards broader adoption of more recent e-learning approaches and tools.

All in all, it was fascinating to compare CALL perspectives and experiences, noting some differences but also the considerable similarities in educators’ interests around the world.

Technology bridging the world

Fukuoka International Congress Center, Fukuoka, Japan, 6-8 August 2008

The theme of WorldCALL 2008, the five-yearly conference now being held for the third time, was “CALL bridges the world”.  With participants from over 50 countries, and presentations on every aspect of language teaching through technology, it became a self-fulfilling prophecy.

Key themes

Key themes of the conference included the need for a sophisticated understanding of our technologies and their affordances; the importance of teacher involvement and task design in maximising collaboration and online community; the potential for intercultural interaction; the role of cultural and sociocultural issues; the need for reflection on the part of both teachers and students on all of the above; and, in particular, the need for much more extensive teacher training.

There was a wide swathe of technologies, tools and approaches covered, including:

  • email;
  • VLEs, in particular, Moodle;
  • web 2.0 tools, especially blogs and m-learning/mobile phones, but also microblogging, wikis, social networking, and VoIP/Skype;
  • borderline web 2.0/web 3.0 tools like virtual worlds and avatars;
  • ICALL, speech recognition and TTS software;
  • blended learning;
  • e-portfolios.

With up to 8 concurrent sessions running at any given moment, it was impossible to keep up with everything, but here’s a brief selection of themes and ideas …

Communication & collaboration

In her paper “Mediation, materiality and affordances”, Regine Hampel considered the contrasting views that the new media have the advantage of quantitatively increasing communication but the disadvantage of creating reduced-cue communication environments.  She concluded that there are many advantages to using computer-mediated communication with language learners, but that we need to focus on areas such as:

  • multimodal communication: we need to bear in mind that while new media offer new ways of interacting and negotiating meaning, dealing with multiple modes as well as a new language at the same time may lead to overload for students;
  • collaboration: task design is essential to scaffolding collaboration, with different tools supporting collaborative learning in very different ways; there is also a need to make collaboration integral to course outcomes;
  • cultural and institutional issues: this includes the value placed on collaboration;
  • student/teacher roles: online environments can be democratic but students need to be autonomous learners to exploit this potential;
  • the development of community and social presence at a distance;
  • teacher training.

Intercultural interaction

Karin Vogt and Keiko Miyake, discussing “Telecollaborative learning with interaction journals”, showed the great potential for intercultural learning which is present in cross-cultural educational collaborations.  Their work showed that the greatest value could be drawn from such interactions by asking the students to keep detailed reflective journals, where intercultural themes and insights could emerge, and/or could be picked up and developed by the teacher.  They added that their own results, based on a content analysis of such journals from a German-Japanese intercultural email exchange programme, confirmed the results of previous studies that the teacher has a very demanding role in initiating, planning and monitoring intercultural learning.

Marie-Noëlle Lamy also stressed the intercultural angle in her paper “We Argentines are not as other people”, in which she explained her experience with designing an online course for Argentine teachers.  After explaining the teaching methodology and obstacles faced, she went on to argue that we are in need of a model of culture to use in researching courses such as this one – but not an essentialist model based on national boundaries.  She is currently addressing this important lack (something which Stephen Bax and I are also dealing with in our work on third spaces in online discussion) by developing a model of the formation of an online culture.

Teacher (and learner) training

In their paper “CALL strategy training for learners and teachers”, Howard Pomann and Phil Hubbard offered the following list of five principles to guide teachers in the area of CALL:

  • Experience CALL yourself (so teachers can understand what it feels like to be a student using this technology);
  • Give learners teacher training (so they know what teachers know about the goals and value of CALL);
  • Use a cyclical approach;
  • Use collaborative debriefings (to share reflections and insights);
  • Teach general exploitation strategies (so users can make the most of the technologies).

In conclusion, they found that learner strategy training was essential to maximise the benefits of CALL and could be achieved in part through the keeping of reflective journals (for example as blogs), which would form a basis for collaborative debriefings.  As in many other papers, it was stressed that teacher training should be very much a part of this process.

In presenting the work carried out so far by the US-based TESOL Technology Standards Taskforce, Phil Hubbard and Greg Kessler demonstrated the value of developing a set of broad, inclusive standards for teachers and students, concluding that:

  • bad teaching won’t disappear with the addition of technology;
  • good teaching can often be enhanced by the addition of technology;
  • the ultimate interpretation of the TESOL New Technology standards needs to be pedagogical, not technical.

In line with the views of many other presenters, Phil added that we need to stop churning out language teachers who learn about technology on the job; newer teachers need to acquire these skills on their pre-service and in-service education programmes.

Important warnings and caveats about technology use emerged in a session entitled “Moving learning materials from paper to online and beyond”, in which Thomas Robb, Toshiko Koyama and Judy Naguchi shared their experience of two projects in whose establishment Tom had acted as mentor.  While both projects were ultimately successful, Tom explained that mentoring at a distance is difficult, with face-to-face contact required from time to time, as a mentor can’t necessarily anticipate the knowledge gaps which may make some instructions unfathomable.  At the moment, it seems there is no easy way to move pre-existing paper-based materials online in anything other than a manual and time-consuming manner.  This may improve with time but until then we may still need to look to enthusiastic early adopters for guidance; technological innovation, he concluded, is not for the faint of heart and it may well be a slow process towards normalisation …

Normalisation, nevertheless, must be our goal, argued Stephen Bax in his plenary “Bridges, chopsticks and shoelaces”, in which he expanded on his well-known theory of normalisation.  Pointing out that there are different kinds of normalisation, ranging from the social and institutional to the individual, Stephen argued that:

A technology has arguably reached its fullest possible effectiveness only when it has arrived at the stage of ‘genesis amnesia’ (Bourdieu) or what I call ‘normalisation’.

Normalised technologies, he suggested, offer their users social and cultural capital, so that if students do not learn about technologies, they will be disadvantaged.  In other words, if teachers decide not to use technology because they personally don’t like it, they may be doing their students a great disservice in the long run.

At the same time, he stressed, it is important to remember that pedagogy and learners’ needs come first – technology must be the servant and not the master. Referring to the work of Kumaravadivelu and Tudor, he suggested that we must always respect context, with technology becoming part of a wider ecological approach to teaching.

There were interesting connections between the ecological approach proposed by Stephen and Gary Motteram’s thought-provoking paper, “Towards a cultural history of CALL”, in which he advocated the use of third generation activity theory to describe the overall interactions in CALL systems.  There was also a link with my own paper, “Four visions of CALL”, which argued for the expansion of our vision of technology in education to encompass not just technological and pedagogical issues, but also broader social and sociopolitical issues which have a bearing on this area.

Specific web 2.0 technologies

In “Learner training through online community”, Rachel Lange demonstrated a very successful discussion-board based venture at a college in the UAE, where, despite certain restrictions – such as the need to separate the genders in online forums – the students themselves have used the tools provided to build their own communities, where more advanced students mentor and support those with a lower level of English proficiency.

In their paper “Engaging collaborative writing through social networking”, Vance Stevens and Nelba Quintana outlined their Writingmatrix project, designed to help students form online writing partnerships.  Operating within a larger context of paradigm shift – including pedagogy (didactic to constructivist), transfer (bringing social technologies from outside the classroom into the classroom), and trepidation (it’s OK not to know everything about technology and to work it out in collaboration with your students) – they effectively illustrated the value of a range of aggregation tools to facilitate collaboration between educators and students; these included Technorati, Crowd status, Twemes, FriendFeed, Dipity and Swurl.

Claire Kennedy and Mike Levy’s paper “Mobile learning for Italian” focused on the very successful use of mobile phone ‘push’ technology at Griffith University in Queensland.  In the context of a discussion of the horizontal and vertical integration of CALL, Mike commented on the irony that many teachers and schools break the horizontal continuity of technology use by insisting that mobile phones are switched off as soon as students arrive at school.  Potentially these are very valuable tools which, according to Mellow (2005), can be used in at least three ways:

  • push (where information is sent to students);
  • pull (where students request messages);
  • interactive (push & pull, including responses).
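Mellow’s three modes could be sketched, purely as an illustration (the class and method names here are hypothetical, not drawn from the Griffith project or from Mellow’s own work), along these lines:

```python
from dataclasses import dataclass, field

@dataclass
class MobileLearningChannel:
    """Illustrative sketch of Mellow's (2005) push/pull/interactive modes."""
    outbox: list = field(default_factory=list)  # messages sent to the student
    bank: dict = field(default_factory=dict)    # keyword -> language material

    def push(self, message: str) -> None:
        # Push mode: the system sends material to the student unprompted
        self.outbox.append(message)

    def pull(self, keyword: str) -> str:
        # Pull mode: the student requests material by keyword
        return self.bank.get(keyword, "No material found for that keyword.")

    def interact(self, keyword: str, student_response: str) -> str:
        # Interactive mode: push a prompt, then handle the student's response
        prompt = self.pull(keyword)
        self.push(prompt)
        return f"Prompt '{prompt}' answered with '{student_response}'."

channel = MobileLearningChannel(bank={"grazie": "grazie = thank you"})
channel.push("Word of the day: 'ciao' (hello/goodbye)")  # push mode
material = channel.pull("grazie")                        # pull mode
```

The Griffith programme described below relied chiefly on the first of these, pushing lexical and cultural material out to students.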

Despite some doubts in the literature about the invasion of students’ social spaces by push technologies, Mike and Claire showed that their programme of sending lexical and other language-related as well as cultural material to Italian students has been a resounding success, with extremely positive feedback overall.

Other successful demonstrations of technology being used in language classrooms ranged from Alex Ludewig’s presentation on “Enriching the students’ learning experience while ‘enriching’ the budget”, in which she showed the impressive multimedia work done by students of German in Simulation Builder, to Salomi Papadima-Sophocleous’s work with “CALL e-portfolios”, where she showed the value of e-portfolios in preparing future EFL teachers as reflective, autonomous learners.

Beyond web 2.0 – to web 3.0?

As Trude Heift explained in her plenary, “Errors and intelligence in CALL”, CALL ranges from web 2.0 to speech technologies, virtual worlds, corpus studies, and ICALL.  While most of the current educational focus is on web 2.0, there are interesting developments in other areas.  It seems to me that, to the extent that web 3.0 involves the development of the intelligent web and/or the geospatial web, some of these developments may point the way to the emergence of web 3.0 applications in education.

Trude’s own paper focused on ICALL and natural language processing research, whose aim is to enable people to communicate with machines in natural language.  We have come a long way from the early Eliza programme to Intelliwise‘s web 3.0 conversational agent, which is capable of holding much more natural conversations.  While ICALL is still a young discipline and there are major challenges to be overcome in the processing of natural language – particularly the error-prone language of learners – it holds out the promise of automated systems which can create learner-centred, individualised learning environments thanks to modelling techniques which address learner variability and offer unique responses and interactions.  This is certainly an area to watch in years to come.
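The distance travelled since Eliza is easy to appreciate when one recalls how such early programmes worked: surface pattern-matching rules with canned reflections, and no real analysis of the learner’s language. A minimal sketch in that early style (the rules and responses here are invented for illustration, not taken from any actual system):

```python
import re

# Eliza-style surface rules: (pattern, response template).
# Purely illustrative - real systems had larger, ranked rule sets.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
]

def respond(utterance: str) -> str:
    """Return a canned reflection for the first matching pattern."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip("."))
    return "Please tell me more."  # fallback when nothing matches

print(respond("I feel nervous about speaking English."))
# prints "Why do you feel nervous about speaking English?"
```

A learner’s error-prone input easily defeats such surface patterns, which is precisely why the robust processing of learner language remains the central challenge for ICALL.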

On a simpler level, text-to-speech and voice processing software is already being used in numerous classrooms around the world.  Ian Wilson, for example, presented an effective model of “Using Praat and Moodle for teaching segmental and suprasegmental pronunciation”.

Another topic raised in some papers was virtual worlds, which some would argue are incipient web 3.0 spaces.  Due to time limitations and timetable clashes, I didn’t catch these papers, but it’s certainly an area of growing interest – and in the final panel discussion, Ana Gimeno-Sanz, the President of EuroCALL, suggested that this might become a dominant theme at CALL conferences in the next year or so.

The final plenary panel summed up the key themes of the conference as follows:

  • the importance of pedagogy over technology (Osamu Takeuchi);
  • the need to consider differing contexts (OT);
  • the ongoing need for conferences like this one to consider best practice, even if the process of normalisation is proceeding apace (Thomas Robb);
  • the need to reach out to non-users of technology (TR);
  • the need for CALL representation in more general organisations (TR);
  • the professionalisation of CALL (Bob Fischer);
  • the need to consider psycholinguistic as well as sociolinguistic dimensions of CALL (BF);
  • the shift in focus from the technology (the means) to its application (the end) (Ana Gimeno-Sanz);
  • the need to extend our focus to under-served regions of the world (AG-S).

The last point was picked up on by numerous participants and a long discussion ensued on how to overcome the digital divide in its many aspects.  A desire to share the benefits of the technology was strongly expressed – both by those with technology to share and those who would like to share in that technology. That, I suspect, will be a major theme of our discussions in years to come: how to spread pedagogically appropriate, contextually sensitive uses of technology to ever wider groups of teachers and learners.

