Asking the big questions around online learning

ICDE World Conference on Online Learning
Toronto, Canada
16-19 October 2017

New City Hall, Toronto, Canada. Photo by Mark Pegrum, 2017. May be reused under CC BY 4.0 licence.

The ICDE World Conference on Online Learning, focusing on the theme of “Teaching in a Digital Age – Re-thinking Teaching and Learning”, took place over four days in October, 2017. As at other recent large technology conferences, it was interesting to see increasing recognition of the broader sociopolitical and sociocultural questions in which online learning is embedded, as reflected in many of the presentations. Papers were presented for the most part in groups of three or four under overarching strands. Short presentation times somewhat restricted the content that speakers were able to cover, but each set of papers was followed by audience discussion where key points could be elaborated on.

In his plenary presentation, edu@2035: Big shifts are coming, Richard Katz referred to Marshall McLuhan’s comment that “we march backwards into the future”, meaning that it is very difficult for us to predict the future without using the past as a framework. He went on to speak of Thomas Friedman’s framework for the future involving six core strategies – analyze, optimize, prophesize, customize, digitize and automize – in which, Katz suggested, all companies as well as all educational institutions need to be engaged. He suggested we may need to consider wild scenarios: could admission to colleges in the future be based not on performance tests but on genotyping? The gap between technology advancement and socialisation of technologies is widening, he stated.

As we look to the future, we have some choices in post-secondary education: avoid the topic; paralysis by analysis; choose mindful incrementalism; or invent a new future. To do the last of these, we need to take at least part of our attention off the rear view mirror. We need to construct scenarios, develop models, identify risks, and extract themes, and to present these ideas in short video formats that today’s audiences will engage with. In short, we need to iterate, communicate and engage.

He mentioned William Gibson’s comment that “the future is already here, it’s just not very evenly distributed”, and a comment from Barry Chudakov (Sertain Research) that “algorithms are the new arbiters of human decision-making”. Evidence that the future is now can be found in various areas, from chatbots to the explosion of investment in cognitive systems and artificial intelligence (AI). Drawing on Pew Internet research, he suggested algorithms will continue to proliferate, with good things lying ahead; however, humanity and human judgement are lost when data and predictive modelling become paramount. Biases exist in algorithmically organised systems, and algorithmic categorisations deepen divides. Unemployment will rise. And the need is growing for algorithmic literacy, transparency and oversight.

He asked whether, by 2035, we can use new technologies and algorithms to personalise instruction in ways that both lower costs and foster engagement, persistence, retention and successful learning, possibly on a global scale. He concluded with a number of key points:

  • The robots are coming, aka change is inevitable;
  • Our mission must endure (even as our delivery and business models change);
  • While the past may be a prologue, there will be new winners and losers;
  • A future alma mater may be Apple, Google, Microsoft, Alibaba …;
    • scale is critical;
    • lifetime employability is critical;
    • students will determine winners and losers;
  • The future is already here, the question is whether we can face it;
    • ‘extreme planning’ must be practised;
  • Never discount post-secondary education.

In his plenary presentation, Reboot. Remake. Re-imagine, John Baker, the CEO of D2L, asked why so many movie makers decide to re-imagine old movies. It’s because the original idea was great, but something has changed in the meantime, and a new direction is needed. Today’s political, scientific and environmental problems will ultimately be solved through education and its ripple effects, he suggested. In the current climate of rapid change, it is essential to focus not on remaking or rebooting, but rather on re-imagining the possible shape of education.

The technology must be about more than convenience; it must improve learning and increase engagement and satisfaction. Well-designed learning software can allow teachers to reach every student; what if, he asked, there were no back of the classroom? It should be possible to reach remote learners or disabled learners or refugees or students using a range of devices from the brand new to hand-me-down technologies (hence the importance of responsive design).

We will soon see AI, machine learning, automation and adaptive learning becoming important; it is not just technology that is changing, but pedagogy. He cited an Oxford University study suggesting that 47% of all current jobs will cease to exist within two decades as a result of the advent of AI. The reality is that our skills development is not currently keeping pace with what will be needed in an AI-enabled future. Continuing educational opportunities for the workforce will be key here.

He suggested that the most important pedagogical innovation of the current era is competency-based education. In a discussion of its advantages, especially when accelerated by adaptive learning, he indicated that the greatest benefit is not so much the achievement of the competencies, but the leftover time and what can be done with it – could students learn more about other areas? Could they enrich their education through more research even at undergraduate level?

In response to an audience question, he also suggested that ‘learning management system’ (LMS) is an outdated term and ‘learning platform’ or ‘learning hub’ might be preferable. How do we capture and share the learning that is taking place across multiple platforms and spaces? It is vital that these systems should be porous, and interoperability between systems (e.g., through Learning Tools Interoperability [LTI] and Caliper) is essential.

In his plenary presentation, The future of learning management systems: Development, innovation and change, Phil Hill suggested that while there are many exciting educational technology developments, there is also a great deal of unhelpful hype about them. The steady, slower paced progress being made at institutions – for example in the introduction of online courses – is in many ways disconnected from the media and other hype. What is important is what can be done with asynchronous, individualised online education that cannot be done so easily in a plenary face-to-face classroom. Some of today’s most creative courses are bypassing LMSs in order to incorporate a wider range of platforms and tools.

Most institutions now have an LMS; these are seen as necessary but not exciting or dynamic. The core management system design is based on an old model. Some companies are trying to add innovative features, but it’s not clear how useful or effective some of these may be. (It may be that over time all ed tech platforms start adding in extra features which eventually make them look like badly-designed LMSs.) When LMSs first appeared, there were few competitors, but now there are many flexible platforms available, creating demand for LMSs to replicate the same features. There is considerable frustration with LMSs, which are seen as much harder to use than the platforms and tools on the wider web.

He mentioned that in higher education Moodle is currently the LMS with the largest user base, while Canvas is the fastest growing LMS. At school level, Google Classroom, Schoology and Edmodo have gained some traction, but they are less used in higher education. Many other platforms have attempted to enter the mainstream but have since disappeared. Overall, this is a fairly static market without many new successful entrants. The trend is towards having these systems hosted in the cloud; this may be the only choice with some LMSs, such as Canvas. While there is currently a lot of movement towards open education, in North America the installed base of LMSs is moving away from the main open source providers, Moodle and Sakai; similar trends are seen elsewhere. There is a certain perception that these look less professional than the proprietary systems. Open source is arguably not as important as it used to be; many educational institutions have moved away from their original concern not to be beholden to commercial providers, and are now focusing more on whether staff and students are happy with the system. Worldwide we’re seeing most institutions working with the same small number of LMSs: Canvas, D2L, Blackboard, Moodle and to some extent Sakai. We should consider the implications of this.

The question is how to resolve the tension between faculty desires to use the proliferating educational technologies which offer lots of flexible teaching and learning options, and institutional insistence that faculty make use of the LMS. Many people are saying that LMSs should go away, but in fact that’s not what we’re seeing happening. Opposition to LMSs is largely based on the fact that they function as walled gardens, which is how they were originally designed. In many cases, they have added poor versions of external software like blogs or social networks, and there has been an overall bloating of the systems.

What we’re seeing now is a breaking down of the walled garden model. The purpose of an LMS is coming to be seen as providing the basics, with gateways offered so that there are pathways to the use of external tools. It should be easy to access and integrate these external tools when faculty wish to use them. Interoperability of tools through systems like LTI, xAPI and Caliper is an important direction of development, though there is a need for these standards to evolve. The key point, however, is the acceptance by the industry that this is the direction of evolution. He concluded that there are three major trends in LMSs nowadays: cloud hosting; less cluttered, more intuitive designs; and an ability to integrate third-party software. Much of this change has been inspired by the Educause work on NGDLEs (next-generation digital learning environments). There is a gradual move among LMS providers towards responsive designs so that LMSs can be used more effectively on mobile devices.
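As a concrete illustration of the interoperability standards mentioned above, an xAPI ‘statement’ records a learning event as an actor–verb–object triple in JSON, which is what lets activity data flow between an LMS and external tools. A minimal sketch follows; the learner, verb URI and activity id are illustrative only.

```python
import json

def make_statement(actor_email, actor_name, verb_id, verb_label, activity_id):
    """Build a minimal xAPI statement: who (actor) did what (verb) to what (object)."""
    return {
        "actor": {"mbox": f"mailto:{actor_email}", "name": actor_name},
        "verb": {"id": verb_id, "display": {"en-US": verb_label}},
        "object": {"id": activity_id, "objectType": "Activity"},
    }

# Hypothetical example: a learner completes a course module.
stmt = make_statement(
    "learner@example.com", "A. Learner",
    "http://adlnet.gov/expapi/verbs/completed", "completed",
    "https://example.com/courses/intro-stats/module-3",
)
print(json.dumps(stmt, indent=2))
```

In a real deployment, such statements would be POSTed to a Learning Record Store rather than printed, but the triple structure is the heart of the standard.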

The strand Engaging online learners focused on improving learning outcomes through improving learner engagement. In their presentation, Engaging online students by fostering a community of practice, Robert Kershaw and Patricia Lychak explained their belief that if facilitators are engaged with developing their own competencies, then they will use these to engage students. Initially, a small number of workshops and informal support were provided for online facilitators in their institution; then a training specialist was brought in; and it was found through applied research that online facilitators wanted more development in student engagement, supporting student success, and technology use. A community of practice model with several stages has been developed:

  • onboarding (web materials, a handbook, and a discussion forum);
  • community building (a discussion forum, webinars, and in-person events);
  • coaching (checking in with new facilitators, one-on-one support, and inquiries);
  • student success initiatives (early check-ins with students, mid-term progress reports on students, and final grade entry);
  • training (shaped in part by feedback from the earlier stages, and in turn shaping the next onboarding phase).

Lessons learned include:

  • introduce variety (delivery method, timing, detail level);
  • encourage sharing (best practices, student success stories, sample report comments);
  • promote efficiency (pre-populate templates, convert documents to PDF fillable forms, highlight LMS time-saving tools).

What the trainers try to do is to model for online facilitators what they can do for and with their students.

In his presentation, Chasing the dream: The engaged learner, Dan Piedra indicated that the tools we invest in can lock us into design mode templates. He quoted Sean Michael Morris: “today most students of online courses are more users than learners … the majority of online learning basically asks humans to behave like machines”. Drawing on Coates (2007), he suggested that engagement involves:

  • active learning;
  • collaborative learning;
  • participation in challenging academic activities;
  • formative communication with academic staff, and involvement in enriching educational experiences;
  • feeling legitimated/supported by learning communities;
  • work-integrated learning.

He showed a model being used at McMaster University involving the company Riipen, which places a student with a partner company that assesses students’ skills, after which the professor assigns a grade.

In her talk, A constructivist pedagogy: Shifting from active to engaged learning, Cheryl Oyer referred to Garrison, Anderson and Archer’s Community of Inquiry model involving cognitive, teaching and social presence. She mentioned a series of learner engagement strategies for nursing students: simulations, gamification, excursions, badges and portfolios.

The strand Online language learning focused on the many possibilities for promoting language learning through digital technologies. In his presentation, The language laboratory with a global reach, Michael Dabrowski talked about a Spanish OER Initiative at Athabasca University. The textbook was digitised, with the Moodle LMS being used as the publishing platform. Open technologies were used, including Google Maps (as a venue for students to conduct self-directed sociocultural investigations), Google Translate (as a dictionary, and a pronunciation and listening tool, which now also incorporates Word Lens for mobile translation), and Google Voice (the foundation for an objective open pronunciation tutor). With Google Translate, there are some risks, including laziness with translation and uncritical acceptance of translations, but in fact it was found that students were noticing errors in Google’s translations. Google Voice is not a perfect pronunciation tutor; sometimes it is too critical, and sometimes too forgiving. Voice recognition by a computer is nevertheless a preferable form of feedback compared to learners’ own self-comparisons with language speakers heard in an audio laboratory; effectively it is possible to have a free open mobile language learning laboratory nowadays.

In her presentation, Open languages – Open online learning communities for better opportunities, Joerdis Weilandt described an open German learning course she has run on the free Finnish Eliademy platform. In setting up this course, it was important to transition from closed to open resources so they could be modified as needed. Interactive elements were added to the materials presented to students using the H5P software.

In their paper, Language learning MOOCs: Classifying approaches to learning, Mairéad Nic Giolla Mhichíl and Elaine Berne explained that there has been a significant increase in the availability of LMOOCs (language learning MOOCs). They were able to identify 105 LMOOCs in 2017, and used Grainne Conole’s 12-dimension MOOC classification to present an analysis of these (to be published in a forthcoming EuroCALL publication). They went on to speak about a MOOC they have created on the FutureLearn platform to promote the learning of Irish.

In his presentation, Online learning: The case of migrants learning French in Quebec, Simon Collin suggested that linguistic integration is important in supporting social and professional integration. This has traditionally been done face-to-face but increasingly it is being done online. Advantages of online courses for migrants fall into two major categories: they can anticipate their linguistic integration before arriving; and after migration, they can take online courses to facilitate a work-family-language learning balance. He described a questionnaire about perceptions of online learning answered by 1,361 adult migrants in Quebec. The common pattern was to take an online course before arrival, and then a face-to-face course after arrival. Respondents thought online courses were more helpful for developing reading and listening, but not as helpful for developing speaking skills.

The strand Leveraging learning analytics for students, faculty and institutions brought together papers focusing on the highly topical area of learning analytics. In their paper, Implementing learning analytics in higher education: A review of the literature and user perceptions, Nobuko Fujita and Ashlyne O’Neil indicated that there are benefits of learning analytics for educators in terms of improving courses and teaching, and for students in terms of improving their own learning. They reported on a study of perceptions of learning analytics by educators, administrators and students. Overall, there was a concern with the impact on students; the main concerns reported included profiling students, duty to respond, data security and consent.

In her presentation, An examination of active learning designs and the analytics of student engagement, Leni Casimiro indicated that active learning has four main components: experiential, interactive, reflective, and deep (higher-order). She reported on a study making use of learning analytics to determine to what extent students were in fact engaged in active learning. Descriptive analytics revealed that among the three courses examined, there was considerable variation in levels of activity; this was due to differences in student outcomes (tasks should help students focus rather than distracting them), teacher participation (teacher presence is essential), interactivity (teacher participation is important, as is the quality of questions), and the nature of students (asynchronous communication may be preferred by international students). Because of the weight given to teacher participation in active learning, it deserves special attention.

In his presentation, Formative analytics: Improving online course learning continuously, Shunping Wei explained that formative analytics are focused on supporting the learner to reflect on what has been learned, what can be improved, which goals can be achieved, and how to move forward. Formative analytics reports should be provided not only to management, but also to teachers. It is possible to track whether students access all parts of an online course and whether they do so often, which would likely be signs of a good learner. It is also possible to create a radar map for a certain person or group, comparing their performance with the average.
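The radar map Wei describes can be sketched in a few lines: score each learner along several activity dimensions and set each value against the cohort average, flagging dimensions where the learner falls below it. The dimensions, names and figures below are hypothetical, purely for illustration.

```python
from statistics import mean

# Hypothetical per-learner activity counts along four dimensions.
cohort = {
    "ana":   {"logins": 40, "pages_viewed": 320, "posts": 12, "quizzes": 8},
    "ben":   {"logins": 15, "pages_viewed": 110, "posts": 3,  "quizzes": 5},
    "carla": {"logins": 28, "pages_viewed": 250, "posts": 9,  "quizzes": 7},
}

def radar_profile(learner, cohort):
    """Return (learner_value, cohort_average) for each activity dimension."""
    return {
        dim: (cohort[learner][dim], mean(c[dim] for c in cohort.values()))
        for dim in cohort[learner]
    }

# Dimensions where this learner sits below the cohort average.
profile = radar_profile("ben", cohort)
below = [dim for dim, (value, avg) in profile.items() if value < avg]
```

Plotting each dimension's (value, average) pair on polar axes gives exactly the kind of comparative radar chart described in the talk.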

The strand Mobile learning: Learning anytime, anywhere brought together a number of different perspectives on m-learning. In their presentation, Design principles for an innovative learner-determined mobile learning solution for adult literacy, Aga Palalas and Norine Wark spoke about their project focusing on a literacy uplift solution in the context of surprisingly low adult literacy rates in Canada. They have created a cross-platform mobile app for formal and informal learning incorporating gamification elements within a constructivist framework, but with more traditional behaviourist components as well. Based on data obtained in their study to date, key design themes and principles have been determined as follows:

  • mobility: design for the mobile learner;
  • learner-determined: respond to the learner;
  • context: integrate environmental influences.

Future plans include presentation of the pedagogical and technological principles and guidelines, and replication of the study in different contexts.

In her presentation, English to go: A critical analysis of apps for language learning, Heather Lotherington suggested that there is an element of technological determinism in mobile-assisted language learning (MALL). In MALL, there can be an app-only/content-oriented approach which gives you a course-in-a-box; or design-oriented learning which uses the affordances of mobile technologies in customised learning. Examining the most popular commercial language learning apps, she found that most were underpinned by ‘zombie pedagogies’ involving grammar-translation, audiolingualism, teaching by testing, drill and kill, decontextualised vocabulary, and so on. Ultimately, there were multiple flaws in theory, pedagogy, and practice. This led to failures from the point of view of mobility (with a need to record language in a quiet room rather than in everyday settings), gamification, and language teaching (there was, for example, generally a 4-skills model of language learning, which is outdated in an era of multimodal communication). Companies are also gathering users’ data for their own purposes. It is essential, she concluded, that language teachers are involved in designing contemporary approaches to mobile language learning; and teachers should also be familiar with content-based apps so they can incorporate them strategically in design-based language teaching and avoid technological determinism. Later, in question time, she went on to suggest that what we are currently confronted with is a difficult interface between slow scholarship and fast marketing.

In her presentation, New delivery tool for mobile learning: WeChat for informal learning, Rongrong Fan explained that WeChat has taken over from QQ as the most popular messaging platform in China. WeChat incorporates instant messaging, official accounts, and ‘moments’ (this works on the same principle as sharing materials on Facebook). Some institutions are using official accounts which push learning material to students, which could be as little as a word a day; an advantage is that this can support bite-sized learning, but a disadvantage is that too many subscriptions can lead to ‘attention theft’. WeChat can be used for live broadcasting with low fees; this allows more direct interaction but the long-term learning effects and value are questionable. It is also possible to set up virtual learning communities in the form of WeChat groups; this can be motivating and help to overcome geographical barriers, but learners may not be making real progress if they are learning only from each other. She concluded that WeChat can be integrated into formal learning as a complementary platform; that use of WeChat could be incorporated in teacher training to give teachers more options for delivering their content; and that a strong learner support team is needed.

The strand Virtual reality and simulation in fact covered both virtual and augmented reality. In his presentation, Flipping a university for a global presence with mirrored reality, Michael Mathews spoke about the Global Learning Center at Oral Roberts University. Augmented and virtual reality, he said, are additive to the experience that students receive, and can help us reach the highest level of Bloom’s Taxonomy, Creating. The concept of mirrored reality brings together augmented and virtual reality. These technologies can offer ways of reaching a diverse range of students scattered around the world.

In my own presentation, Taking learning to the streets: The potential and reality of mobile gamified language learning, which also formed part of this strand, I outlined the value of an augmented reality approach for helping students to engage with authentic language learning experiences in everyday life.

The strand Augmented reality: Aspects of use in education highlighted a range of contemporary uses of AR. In their talk, Distributed augmented reality training to develop skills at a distance, Mohamed Ally and Norine Wark described AR as an innovative solution to rapidly evolving learning needs. They spoke of their research on an industrial training package about valve repair and maintenance created by Scope AR and delivered onsite via iPads and AR glasses, for which they gathered data relating to the first three levels of the Kirkpatrick Model. The response to the AR training was overwhelmingly positive, with past hands-on training being seen as second-best, and computer-based training being least valued. It was felt that AR could replace lengthy training programmes. Scope AR has now developed a Remote AR collaboration tool which can be used to deliver support at a distance. The presenters concluded by saying that AR could have many applications in distance education where the expert is in one location but can communicate at a distance to tutor or train someone in a different location.

In his presentation, Augmented reality and technical lab training using HoloLens, Angelo Cosco explained that skilled trades training can be created to be accessed via Microsoft’s HoloLens, allowing students to learn at their own pace, but also offering development opportunities for employees. Advantages include the fact that unlike with VR, there are no issues with nausea; users can wear the HoloLens and move around easily; and recordings can be made and sent immediately through wifi networks.

In their paper, Maximizing learner engagement and skill development through modelling, immersion and gameplay, Naza Djafarova and Leonora Zefi demonstrated a training game (though not an AR game per se) for community nurses in the Therapeutic Communication and Mental Health Assessment Demo video. The game is set up on a choose-your-own-adventure model, giving students a chance to practise what they have learned in a simulated and ‘safe’ environment, which is especially valuable given the lack of practicum placement positions available. Usability testing was conducted to identify benefits and determine possible future improvements. Students felt that the game helped them to build confidence and evoked empathy, and added that they were very engaged. They thought that the purpose of the resource should be explained up front, and requested more instructions on how to play the game, as well as an alternative to scoring. The research team’s current focus is on how to facilitate game design in multidisciplinary teams, and on examining linkages between learning objectives and game mechanics.

CN Tower, Toronto. Photo by Mark Pegrum, 2017. May be reused under CC BY 4.0 licence.

While many of the talks described above already began addressing the bigger philosophical issues around digital learning, there were also strands dedicated to these larger questions. The strand Ethical issues in online learning brought together presentations addressing a wide range of ethical issues connected with digital learning. In his presentation, Privacy-preserving learning analytics, Vassilios Verykios noted that we all create a unique social genome through the many activities we engage in online. There is now an unprecedented power to analyse big data for the benefit of society: it can improve daily lives, verify research results, and reduce the costs of research projects, but strict privacy controls are needed. There are some regulatory frameworks already in place to protect data, including the US HIPAA and FERPA and the EU Data Protection Directive and General Data Protection Regulation (GDPR). The last of these deals with consent, data governance, auditing, and transparency regarding data breaches. There are data ownership issues, given that companies collect data for their own benefit; from a research perspective, it is important to remember that data is gathered by different bodies with their own ways of managing and storing it.

When it comes to educational data, technology now allows us to monitor students’ activities. Learning analytics is used to improve the educational system as a whole, but also to personalise the teaching of students. ‘Data protection’ involves protecting data so it cannot be accessed by intruders; and ‘data confidentiality’ means data can be accessed only by legally authorised individuals. Data de-identification is a way of stripping out individually identifying characteristics from the data collected; one approach to anonymised data is known as k-anonymity. A fundamental challenge comes from the fact that when we anonymise data we do lose a lot of information, and it may somewhat change the statistics; so it is necessary to find a balance between accessing useful data and protecting privacy.
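The k-anonymity property Verykios mentions is straightforward to check: a dataset is k-anonymous when every combination of quasi-identifier values (attributes like age band or postcode that could be cross-referenced to re-identify someone) is shared by at least k records. A minimal sketch, using hypothetical de-identified student records:

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the smallest equivalence-class size over the quasi-identifier
    columns; the dataset is k-anonymous for any k up to this value."""
    groups = Counter(
        tuple(record[q] for q in quasi_identifiers) for record in records
    )
    return min(groups.values())

# Hypothetical records: age band and postcode prefix are quasi-identifiers;
# the grade is the sensitive attribute being protected.
records = [
    {"age_band": "20-24", "postcode": "M5", "grade": 71},
    {"age_band": "20-24", "postcode": "M5", "grade": 64},
    {"age_band": "25-29", "postcode": "M4", "grade": 80},
    {"age_band": "25-29", "postcode": "M4", "grade": 58},
]
k = k_anonymity(records, ["age_band", "postcode"])
```

Generalising values (e.g. widening age bands) raises k but discards detail, which is exactly the utility–privacy trade-off the presentation describes.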

In his presentation, The ethics of not contributing to digital addiction in a distance education environment, Brad Huddleston indicated that addiction takes place in the same part of the brain, regardless of what you are addicted to. Addiction involves going harder to generate ever larger quantities of dopamine, in order to overcome the chemical barrier the brain erects to deflect excessive amounts of dopamine. When it comes to digital addiction, the symptoms are: anger when devices are taken away; anxiety disorders; and boredom (the last of these results from a lack of dopamine getting through the brain’s dopamine barrier). Studies have suggested, amongst other things, that computers do not necessarily improve education; that reading online is less effective than reading offline; and that one of the most popular educational games in the world, Minecraft, is also one of the most addictive.

There is a place, he stated, for the analogue to be re-integrated into education, though not to the exclusion of the digital. We should work within the limitations of the brain for each age group; that means less technology use at lower ages. We should teach students what mono-tasking or uni-tasking is about. We also need to understand, he said, that digital educational content is just as addictive as non-educational content.

In her presentation, ‘Troubling’ notions of inclusion and exclusion in open distance and e-learning: An exploration, Jeanette Botha mentioned that the divide between developed and developing nations is increasing, largely because of a lack of internet access in the latter. In the global north, there has traditionally been a concern with equity, participation and profit. In the global south, there has been more of an emphasis on social justice, equity and redress; social justice incorporates the notion of social inclusion. Inclusivity, she went on to say, now carries a moral, and by extension an ethical, imperative.

Since the Universal Declaration of Human Rights in 1948, there has been a focus on the inclusiveness of education. Open and distance learning is seen as a key instrument of, and avenue for, social justice and inclusion. However, we haven’t made the kind of progress that might have been expected. One reason is that context matters. Contextual barriers to inclusivity include:

  • technology (infrastructure and affordability);
  • quality (including accreditation, service and quality of the learning experience);
  • cultural and linguistic relevance and responsiveness;
  • perceived racial barriers;
  • ‘graduateness’ and employability of open and distance learning graduates;
  • confusion, conflation and fragmentation in the global open and distance learning environment.

In his presentation, Intercultural ethics: Which values to teach, learn and implement in online higher education and how?, Obiora Ike mentioned global challenges such as the rise of populism, economic and environmental problems, addictions, and issues of inclusivity. Culture matters, he argued, and from culture come values and ethics. Behaving in an ethical way engenders trust and promotes an ethical environment. Globethics.net, based in Geneva, has developed an online database of materials about ethics as well as a values framework. We must integrate ethics with all forms of education, he argued. This is a project being pursued for example through the Globethics Consortium, which focuses on ethics in higher education.

The strand Online learning and Indigenous peoples brought together papers on a variety of projects focused on Indigenous education through online tools. In the talk, A digital bundle – Indigenous knowledge on the internet: Emerging pedagogy, design, and expanding access, Jennifer Wemigwans suggested that respectful representations of knowledge online can be effective in helping others to access that knowledge. While it does not replace face-to-face transmission, cultural knowledge shared by elders online becomes accessible to those who might not otherwise have access to such knowledge, but who might wish to apply it in a range of contexts from the personal to the community sphere.

In her talk, Supporting new Indigenous postgraduate student transitions through online preparation tools, Lynne Petersen spoke about supporting Indigenous students through online tools in the Medical and Health Sciences Faculty at the University of Auckland in New Zealand. The work is framed theoretically by Indigenous research methodologies, transition pedagogies, and the role of technology and design in supporting empowerment (however, there are questions for Indigenous communities where face-to-face traditions are prevalent). There may be a disconnect between perceptions of academic or professional competency in the university system, and cultural knowledge and competency within an Indigenous community. Among the online tools created, a reflective questionnaire helps students think through the areas in which they are well-prepared, and the areas where they may need support. Future explorations will address why the tools seem to work well for Maori students but not necessarily for Samoan or Tongan students; as they stand, the tools may not be appropriate for all communities.

In the paper, Language integration through e-portfolio (LITE): A plurilingual e-learning approach combining Western and Indigenous perspectives, Aline Germain-Rutherford, Kris Johnston and Geoff Lawrence described a fusion of Western and Indigenous pedagogical perspectives. In a WordPress-based social space, each learner can trace their plurilingual journey covering the languages they speak, their daily linguistic encounters, and their cultural encounters. In another part of the website, students are directed to language exercises. After completing these, students can engage in a reflection covering questions relating to the four areas of mind, spirit, heart and body. Students can also respond to questions relating to the Common European Framework to build ‘radar charts’ reflecting their plurilingual, pluricultural identities. The fundamental aim of such an approach is to validate students’ plurilingual, pluricultural knowledge base.

Old City Hall, Toronto

Old City Hall, Toronto. Photo by Mark Pegrum, 2017. May be reused under CC BY 4.0 licence.

Bringing together a wide range of academic and industry perspectives, this conference provided an important forum for discovering digital learning practices from around the globe, while simultaneously thinking through some of the big questions posed by new technologies.

DIGITAL LESSONS, LITERACIES & IDENTITIES

AILA World Congress
Rio de Janeiro, Brazil
23-28 July 2017

Praia da Barra da Tijuca, Rio de Janeiro, Brazil

Praia da Barra da Tijuca, Rio de Janeiro, Brazil. Photo by Mark Pegrum, 2017. May be reused under CC BY 4.0 licence.

Having participated in the last two AILA World Congresses, in Beijing in 2011 and in Brisbane in 2014, I was delighted to be able to attend the 18th World Congress, taking place this time in the beautiful setting of Rio de Janeiro, Brazil. This year’s theme was “Innovations and Epistemological Challenges in Applied Linguistics”. As always, the conference brought together a large and diverse group of educators and researchers working in the broad field of applied linguistics, including many with an interest in digital and mobile learning, and digital literacies and identities. Papers ranged from the highly theoretical to the very applied, with some of the most interesting presentations actively seeking to build much-needed bridges between theory and practice.

In her presentation, E-portfolios: A tool for promoting learner autonomy?, Chung-Chien Karen Chang suggested that e-portfolios increase students’ motivation, promote different assessment criteria, encourage students to take charge of their learning, and stimulate their learning interests. Little (1991) looked at learner autonomy as a set of conditional freedoms: learners can determine their own objectives, define the content and process of their learning, select the desired methods and techniques, and monitor and evaluate their progress and achievements. Benson (1996) spoke of three interrelated levels of autonomy for language learners, involving the learning process, the resources, and the language. Benson and Voller (1997) emphasised four elements that help create a learning environment to cultivate learner autonomy, namely when learners can:

  • determine what to learn (within the scope of what teachers want them to learn);
  • acquire skills in self-directed learning;
  • exercise a sense of responsibility;
  • be given independent situations for further study.

Learners who are intrinsically motivated are more self-regulated; extrinsically motivated activities, in contrast, are less autonomous and more controlled. Either way, students will be psychologically motivated to move forward.

The use of portfolios provides an alternative form of assessment. A portfolio can echo a process-oriented approach to writing. Within a multi-drafting process, students can check their own progress and develop a better understanding of their strengths and weaknesses. Portfolios offer multi-dimensional perspectives on student progress over time. The concept of e-portfolios is not yet fully fixed but includes the notion of collections of tools to perform operations with e-portfolio items, and collections of items for the purpose of demonstrating competence.

In a study with 40 sophomore and junior students, all students’ writing tasks were collected in e-portfolios constituting 75% of their grades. Many students agreed that writing helped improve their mastery of English, their critical thinking ability, their analytical skills, and their understanding of current events. They agreed that their instructor’s suggestions helped them improve their writing. Among the 40 students assessed on the LSRQ survey, the majority showed intrinsic motivation. Students indicated that the e-portfolios gave them a sense of freedom, and allowed them to challenge and ultimately compete against themselves.

Gamification emerged as a strong conference theme. In her paper, Action research on the influence of gamification on learning IELTS writing skills, Michelle Ocriciano indicated that the aim of gamification, which has been appropriated by education from the fields of business and marketing, is to increase participation and motivation. Key ‘soft gamification’ elements include points, leaderboards and immediate feedback; while these do not constitute full gamification, they can nevertheless have benefits. She conducted action research to investigate the question: how can gamification be applied in a Moodle setting to influence IELTS writing skills? She found that introducing gamification elements into Moodle – using tools such as GameSalad, Quizlet, ClassTools, Kahoot! and Quizizz – not only increased motivation but also improved students’ spelling, broadened their vocabulary, and decreased the time they needed for writing, leading to increases in their IELTS writing scores. To some extent, students were learning about exam-wiseness. The most unexpected aspect was that her feedback as the teacher increased in effectiveness, because students shared her individual feedback with peers through a class WhatsApp group. In time, students also began creating their own games.
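The ‘soft gamification’ trio of points, leaderboards and immediate feedback can be sketched in a few lines of code. The class and data below are purely illustrative, not part of Ocriciano’s Moodle setup:

```python
from collections import defaultdict

class Leaderboard:
    """Toy 'soft gamification' tracker: points plus immediate feedback."""

    def __init__(self):
        self.points = defaultdict(int)

    def award(self, student, task, score):
        # Immediate feedback: report points earned as soon as a task is done.
        earned = score * 10
        self.points[student] += earned
        return f"{student} earned {earned} points for {task!r}"

    def top(self, n=3):
        # Leaderboard: rank students by accumulated points, highest first.
        return sorted(self.points.items(), key=lambda kv: -kv[1])[:n]

board = Leaderboard()
board.award("Ana", "IELTS Task 1 draft", 8)
board.award("Ben", "vocabulary quiz", 6)
board.award("Ana", "spelling game", 5)
print(board.top())  # → [('Ana', 130), ('Ben', 60)]
```

Even this minimal mechanic shows why such elements are called ‘soft’ gamification: they layer competition and instant recognition onto existing tasks without redesigning the tasks themselves.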

The symposium Researching digital games in language learning and teaching, chaired by Hayo Reinders and Sachiko Nakamura, naturally also brought gaming and gamification to the fore in a series of presentations.

In their presentation, Merging the formal and the informal: Language learning and game design, Leena Kuure, Salme Kälkäjä and Marjukka Käsmä reported on a game design course taught in a Finnish high school. Students would recruit their friends onto the course, and some even repeated the course for fun. It was found that the freedom given to students did not necessarily mean that they took more responsibility, but rather this varied from student to student. Indeed, the teacher had a different role for each student, taking or giving varying degrees of responsibility. Students chose to use Finnish or English, depending on the target groups for the games they were designing.

The presenters concluded that in a language course like this, language is not so much the object of study (where it is something ‘foreign’ to oneself) but rather it is a tool (where it is part of oneself, and part of an expressive repertoire). Formal vs informal, they said, seems to be an artificial distinction. The teacher’s role shifts, with implications for assessment, and a requirement for the teacher to have knowledge of individual students’ needs. The choice of project should support language choice; this enables authentic learning situations and, through these, ‘language as a tool’ thinking.

In her presentation, The role of digital games in English education in Japan: Insights from teachers and students, Louise Ohashi began by referencing the gaming principles outlined in the work of James Paul Gee. She reported on a study of students’ experiences of and attitudes to using digital games for English study, as well as teachers’ experiences and attitudes. She surveyed 102 Japanese university students, and 113 teachers from high schools and universities. Students, she suggested, are not as interested as teachers in distinguishing ‘real’ games from gamified learning tools.

While 31% of students had played digital games in English in class over the previous 12 months, 50% had done so outside class, suggesting a clear trend towards out-of-class gaming. The games they reported playing covered the spectrum from general commercial games to dedicated language learning or educational games. Far more students than teachers thought games were valuable aids to study inside and outside class, as well as for self-study. Only 30% of students said that they knew of appropriate games for their English level, suggesting an area where teachers might be able to intervene more.

In fact, most Japanese classrooms are quite traditional learning spaces – often with blackboards and wooden desks, and no wifi – which do not lend themselves to gaming in class. While some teachers use games, many avoid them. One teacher surveyed thought students wouldn’t be interested in games; another worked at a school where students were not allowed to use computers or phones; another thought the school and parents would disapprove; others emphasised the importance of a focus on academic coursework rather than gaming; and still others objected to the idea that foreign teachers in Japan are supposed to entertain students. She concluded that most students were interested in playing games but most teachers did not introduce them, by choice or otherwise, possibly representing a missed opportunity.

In her presentation, Technology in support of heritage language learning, Sabine Little reported on an online questionnaire with 112 respondents, examining how families from heritage language backgrounds use technology to support heritage language literacy development for their primary school students. Two thirds of the families spoke two or more heritage languages in the home. She found that where there were children of different ages, use of the heritage language would often decrease for younger children.

Parents were gatekeepers of both technology use and choices of apps; but many parents didn’t have the technological understanding to identify apps or games their children might be interested in. Many thought that there were no apps in their language. Some worried about health issues; others worried about cost. There are both advantages and disadvantages in language learning games; many of these have no cultural content as they’re designed to work with more than one language. Similarly, authentic language apps have both advantages (e.g., they feel less ‘educational’) and disadvantages (e.g., they may be too linguistically difficult). Nevertheless, many parents agreed that their children were interested in games for language learning, and more broadly in learning the heritage language.

All in all, this is an incredibly complex field. How children engage with heritage language resources is linked to their sense of identity as pluricultural individuals. Many parents are struggling with the ‘bad technology’/’good language learning opportunity’ dichotomy. In general, parents felt less confident about supporting heritage language literacy development through technology than through books.

In my own presentation, Designing for situated language and literacy: Learning through mobile augmented reality games and trails, I discussed the places where online gaming meets the offline world. I focused on mobile AR gamified learning trails, drawing on examples of recent, significant, informative projects from Singapore, Indonesia and Hong Kong. The aim of the presentation was to whet the appetite of the audience for the possibilities that emerge when we bring together online gaming, mobility, augmented reality, and language learning.

AR and big data were also important conference themes. In his paper, The internet of things: Implications for learning beyond the classroom, Hayo Reinders suggested that algorithmic approaches like Bayesian Networks, Nonnegative Matrix Factorization, Random Forests, and Association Rule Mining are beginning to help us make sense of vast amounts of data. Although they are not familiar to most of today’s teachers, they will be very familiar to future teachers. We are gradually moving from reactive to proactive systems, which can predict future problems in areas ranging from health to education. Current education is completely reactive; we wait for students to do poorly or fail before we intervene. Soon we will have the opportunity to change to predictive systems. All of this is enabled by the underpinning technologies becoming cheaper, smaller and more accessible.
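Of the techniques Reinders listed, association rule mining is perhaps the simplest to illustrate. The sketch below is a minimal toy example with invented learner-activity data (not any system he described); it computes the support and confidence of a rule such as ‘students who watched the video tend to pass the test’:

```python
# Toy association rule mining over learner-activity data.
# Each 'transaction' is the set of activities one student completed.
transactions = [
    {"watched_video", "did_quiz", "passed_test"},
    {"watched_video", "passed_test"},
    {"did_quiz"},
    {"watched_video", "did_quiz", "passed_test"},
]

def support(itemset):
    """Fraction of transactions containing every item in the set."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Estimate P(consequent | antecedent) from the transaction data."""
    return support(antecedent | consequent) / support(antecedent)

print(support({"watched_video", "passed_test"}))      # → 0.75
print(confidence({"watched_video"}, {"passed_test"}))  # → 1.0
```

Rules whose support and confidence exceed chosen thresholds can then feed the kind of proactive, predictive intervention Reinders described, flagging patterns before a student actually fails.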

He spoke about three key areas of mobility, ubiquity, and augmentation. Drawing on Klopfer et al. (2002), he listed five characteristics of mobile technologies which could be turned into affordances for learning: portability; social interactivity; context sensitivity; connectivity; and individuality. These open up a spectrum of possibilities, he indicated, where the teacher’s responsibility is to push educational experiences towards the right-hand side of each pair:

  • Disorganised – Distributed
  • Unfocused – Collaborative
  • Inappropriate – Situated
  • Unmanageable – Networked
  • Misguided – Autonomous

Augmentation is about overlaying digital data, ranging from information to comments and opinions, on real-world settings. Users can add their own information to any physical environment. Such technologies allow learning to be removed from the physical constraints of the classroom.

With regard to ubiquity, when everything is connected to everything else, there is potentially an enormous amount of information generated. He described a wristband that records everything you do, 24/7, and forgets it after two minutes, unless you tap it twice to save what has been recorded and have it sent to your phone. Students can use this, for example, to save instances of key words or grammatical structures they encounter in everyday life. Characteristics of ubiquity that have educational implications include the following:

  • Permanency can allow always-on learning;
  • Accessibility can allow experiential learning;
  • Immediacy can allow incidental learning;
  • Interactivity can allow socially situated learning.

He went on to outline some key affordances of new technologies, linked to the internet of things, for learning:

  • Authentication for attendance when students enter the classroom;
  • Early identification and targeted support;
  • Adaptive and personalised learning;
  • Proactive and predictive rather than reactive management of learning;
  • Continuous learning experiences;
  • Informalisation;
  • Empowerment of students through access to their own data.

He wrapped up by talking about the Vital Project, which gives students visualisation tools and analytics to monitor online language learning. Research has found that students like having access to this information, and having control over what information they see, and when. They want clear indications of progress, early alerts, and recommendations for improvement. Cultural differences have also been uncovered in terms of the desire for comparison data; the Chinese students wanted to know how they were doing compared with the rest of the class and past cohorts, whereas non-Chinese students did not.

There are many questions remaining about how we can best make use of this data, but it is already coming in a torrent. As educators, we need to think carefully about what data we are collecting, and what we can do with it. Only we, not computer scientists, can make the relevant pedagogical decisions.

In his paper, Theory ensembles in computer-assisted language learning research and practice, Phil Hubbard indicated that the concept of theory was formerly quite rigidly defined, and involved the notion of offering a full explanation for a phenomenon. It has now become a very fluid concept. Theory in CALL, he suggested, means the set of perspectives, models, frameworks, orientations, approaches, and specific theories that:

  • offer generalisations and insights to account for or provide greater understanding of phenomena related to the use of digital technology in the pursuit of language learning objectives;
  • ground and sustain relevant research agendas;
  • inform effective CALL design and teaching practice.

He presented a typology of theory use in CALL:

  • Atheoretical CALL: research and practice with no explicit theory stated (though there may be an implicit theory);
  • Theory borrowing: using a theory from SLA, etc, without change;
  • Theory instantiation: taking a general theory with a place for technology and/or SLA and instantiating it for CALL (e.g., activity theory);
  • Theory adaptation: changing one or more elements of a theory from SLA, etc, in anticipation of or in response to the impact of the technology;
  • Theory ensemble: combining multiple theoretical entities in a single study to capture a wider range of perspectives;
  • Theory synthesis: creating a new theory by integrating parts of existing ones;
  • Theory construction: creating a new theory specifically for some sub-domain of CALL;
  • Theory refinement: cycles of theory adjustment based on accumulated research findings.

He went on to provide some examples of research approaches based on theory ensembles. We’re just getting started in this area, and it needs further study and refinement. Theory ensembles seem to occur especially in CALL studies involving gaming, multimodality, and data-driven learning. Theory ensembles may be ‘layered’, with a broad theory providing an overarching approach or orientation, and complementary narrower theoretical entities providing focus. Similarly, members of a theory ensemble have different functions and therefore different weights in the overall picture. Some can be more central than others. A distinction might be made, he suggested, between one-time ensembles assembled for a given problem and context, and more stable ones that could lead to full theory syntheses. Finally, each ensemble member should have a clear function, and together they should lead to a richer and more informative analysis; researchers and designers should clearly justify the membership of ensembles, and reviewers should see that they do so.

Intercultural issues surfaced in many papers, perhaps most notably in the symposium Felt presence, imagined presence, hyper-presence in online intercultural encounters: Case studies and implications, chaired by Rick Kern and Christine Develotte. It was suggested by Rick Kern that people often imagine online communication is immediate, but in fact it is heavily technologically mediated, which has major implications for the nature of communication.

In their paper, Multimodality and social presence in an intercultural exchange setting, Meei-Ling Liaw and Paige Ware indicated that there is a lot of research on multimodality, communication differences, social presence and intercultural communication, but it is inconclusive and sometimes even contradictory. They drew on social presence theory, which postulates that a critical factor in the viability of a communication medium is the degree of social presence it affords.

They reported on a project involving 12 pre-service and 3 in-service teachers in Taiwan, along with 15 undergraduate Education majors in the USA. Participants were asked to use VoiceThread, which allows text, audio and video communication, and combinations of these. Communication was in English, and was asynchronous because of the time difference. It was found that the US students used video exclusively, but the Taiwanese used a mixture of modalities (text, audio and video). The US students found video easy to use, but some Taiwanese students worried about their oral skills and felt they could organise their thoughts better in text; however, other Taiwanese students wanted to practise their oral English. All partnerships involved a similar volume of words produced, perhaps indicating that the groups were mirroring each other. In terms of the types of questions posed, the Taiwanese asked far more questions about opinions; the American students were more cautious about asking such questions, and also knew little about Taiwan and so asked more factual questions. Overall, irrespective of the modality employed, the two groups of intercultural telecollaborative partners felt a strong sense of membership and thought that they had achieved a high quality of learning because of the online partnership.

As regards the pedagogical implications, students need to be exposed to the range of features available in order to maximise the affordances of all the multimodal choices. In addition to helping students consider how they convey a sense of social presence through the words and topics they choose, instructors need to attend to how social presence is intentionally or unintentionally communicated in the choice of modality. The issue of modality choice is also intimately connected to the power dynamic that can emerge when telecollaborative partnerships take place as monolingual exchanges.

In their paper, Conceptualizing participatory literacy: New approaches to sustaining co-presence in social and situated learning communities, Mirjam Hauck, Sylvie Warnecke and Muge Satar argued that teacher preparation needs to address technological and pedagogical issues, as well as sociopolitical and ecological embeddedness. Both participatory literacy and social presence are essential, and require multimodal competence. The challenge for educators in social networking environments is threefold: becoming multimodally aware; establishing their own social presence; and participating successfully in the collaborative creation and sharing of knowledge, so that they are well-equipped to model such abilities and participatory skills for their students.

Digital literacy/multiliteracy in general, and participatory literacy in particular, is reflected in language learners’ ability to comfortably alternate in their roles as semiotic responders and semiotic initiators, and the degree to which they can make informed use of a variety of semiotic resources. The takeaway from this is that being multimodally able and as a result a skilled semiotic initiator and responder, and being able to establish social presence and participate online, is a precondition for computer-supported collaborative learning (CSCL) of languages and cultures.

They reported on a study with 36 pre-service English teachers learning to establish social presence through web 2.0 tools. Amongst other things, students were asked to reflect on their social presence in the form of a Glogster poster referring to Gilly Salmon’s animal metaphors for online participation; students showed awareness that social presence is transient and emergent.

They concluded that educators need to be able to illustrate and model for their students the interdependence between being multimodally competent as reflected in informed semiotic activity, and the ability to establish social presence and display participatory literacy skills. Tasks like those in the training programme presented here, triggering ongoing reflection on the relevance of “symbolic competence” (Kramsch, 2006), social presence and participatory literacy, need to become part of CSCL-based teacher education.

In his presentation, Seeing and hearing apart: The dilemmas and possibilities of intersubjectivity in shared language classrooms, David Malinowski spoke about the use of high-definition video conferencing for synchronous class sessions in languages with small enrolments, working across US institutions.

It was found that technology presents an initial disruption, which is overcome early in the semester and does not prevent social cohesion. Participants are able to co-ordinate perspective-taking, dialogue and actions with the activity type and participation format. Synchronised performance, play and ritual may deserve special attention in addition to sequentially oriented events. History is made in the moment: durable learner identities inflect moment-to-moment interaction, and there are variable engagements through and with technology. There are ongoing questions about parity of the educational experience in ‘sending’ and ‘receiving’ classrooms. Finally, there is a need to develop further tools to mediate the life-worlds of distance language learners across varying timescales.

Christo Redentor, Rio de Janeiro, Brazil

Christo Redentor, Rio de Janeiro, Brazil. Photo by Mark Pegrum, 2017. May be reused under CC BY 4.0 licence.

There were many presentations that ranged well beyond CALL, and to some extent beyond educational technologies, but which nevertheless had considerable contextual relevance for those working in CALL and MALL, and e-learning and mobile learning more broadly.

The symposium Innovations and challenges in digital language practices and critical language/media awareness for the digital age, chaired by Jannis Androutsopoulos, consisted of a series of papers on the nature of digital communication, covering themes such as the link between language use and language ideology; multimodality; and the use of algorithms. One key question, it was suggested in the introduction, is how linguistic research might speak to language education.

In their presentation, Critical media awareness in a digital age, Caroline Tagg and Philipp Seargeant stated that people’s critical awareness develops fluidly and dynamically over time in response to experiences online. They introduced the concept of context design, which suggests that context is collaboratively co-constructed in interaction through linguistic choices. The concept draws on the well-known notion of context collapse, but suggests that offline contexts cannot simply move online and collapse; rather, contexts are always actively constructed, designed and redesigned. Context design incorporates the following elements:

  • Participants
  • Online media ideologies
  • Site affordances
  • Text type
  • Identification processes
  • Norms of communication
  • Goals

They reported on a study entitled Creating Facebook (2014-2016). Their interviews revealed complex understandings of Facebook as a communicative space and the importance of people’s ideas about social relationships. These understandings shaped behaviour in often unexpected ways, in processes that can be conceptualised as context design. They concluded that the role of people’s evolving language/media awareness in shaping online experiences needs to be taken into account by researchers wishing to effectively build a critical awareness for the digital age.

In her paper, Why are you texting me? Emergent communicative practices in spontaneous digital interactions, Maria Grazia Sindoni suggested that multimodality is a reaction against language-driven approaches that sideline resources other than language. However, language as a resource has itself been sidelined in mainstream multimodality research. Language still needs to be studied, but on a par with other semiotic resources.

In a study of reasons for mode-switching in online video conversations, she indicated that the technical possibility of doing something does not equate with the semiotic choice of doing so. In the case of communication between couples, she noted a pattern where intimate communications often involve a switch from speech to text. She also presented a case where written language was used to reinforce spoken language; written conventions can thus be creatively resemiotised.

There are several layers of meaning-making present in such examples: creative communicative functions in language use; the interplay of semiotic resources other than language that are co-deployed by users to adapt to web-mediated environments (e.g., the impossibility of perfectly reciprocating gaze, em-/disembodied interaction, staged proxemics, etc); different technical affordances (e.g., laptop vs smartphone); and different communicative purposes and degrees of socio-semiotic and intercultural awareness. She concluded with a critical agenda for research on web-mediated interaction, involving:

  • recognising the different levels (above) and their interplay;
  • encouraging critical awareness of video-specific patterns in syllabus design and teacher training;
  • promoting understanding of what can hinder or facilitate interaction (also in an intercultural light);
  • technical adaptivity vs semiotic awareness.

In their paper, Digital punctuation: Practices, reflexivity and enregistrement in the case of <.>, Jannis Androutsopoulos and Florian Busch referred to David Crystal’s view that in online communication the period has almost become an emoticon, one which is used to show irony or even aggression. They went on to say that the use of punctuation in contemporary online communication goes far beyond the syntactic meanings of traditional punctuation; punctuation and emoticons have become semiotic resources and work as contextualisation cues that index how a communication is to be understood. There is currently widespread media discussion of the use of punctuation, including specifically about the disappearance of the period. They distanced themselves from Crystal’s view of “linguistic free love” and the breaking of rules in the use of punctuation on the internet, suggesting that there are clear patterns emerging.

Reporting on a study of the use of punctuation in WhatsApp conversations by German students, they found relatively low use of the period. This suggests that periods are largely being omitted, and when they do occur, they generally do so within messages where they fulfil a syntactic function. They are very rare at the end of messages, where they may fulfil a semiotic function. For example, periods may be used for register switching, indicating a change to a more formal register; or to indicate unwillingness to participate in further conversation. Use of periods by one user may even be commented on by other users in a case of metapragmatic reflexivity. Interviewees commented that the use of periods at the end of messages is strange and annoying in the context of informal digital writing, especially as the WhatsApp bubbles already indicate the end of messages. One interviewee commented that the use of punctuation in general, and final periods in particular, can express annoyance and make a message appear harsher, signalling the bad mood of the writer. The presenters concluded that digital punctuation offers evidence of ongoing elaboration of new registers of writing in the early digital age.

In his presentation, The text is reading you: Language teaching in the age of the algorithm, Rodney Jones suggested that we should begin talking to students about digital texts by looking at simple examples like progress bars; as he explained, these do not represent the actual progress of software installation but are underpinned by an algorithm that is designed to be psychologically satisfying, thus revealing the disparity between the performative and the performance.
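Jones's point about progress bars can be made concrete: a bar can be driven by a pacing function chosen to feel satisfying rather than by measured progress. A minimal sketch, where the easing curve is purely illustrative and not taken from any real installer:

```python
# A progress bar driven by a pacing function rather than real progress:
# displayed progress runs ahead of actual progress early on (reassuring),
# then slows as it converges at completion, regardless of the work done.

def displayed_progress(actual: float) -> float:
    """Map actual progress (0..1) to displayed progress (0..1)
    using an ease-out curve."""
    return 1 - (1 - actual) ** 2

for step in range(0, 11):
    actual = step / 10
    shown = displayed_progress(actual)
    bar = "#" * int(shown * 20)
    print(f"actual {actual:4.1f} | shown {shown:4.2f} | {bar}")
```

At the halfway point of the actual work, this bar already shows 75% complete: the performance, not the performative.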

An interesting way to view algorithms is through the lens of performance. He reported on a study where his students identified and analysed the algorithms they encounter in their daily lives. He highlighted a number of key themes in our beliefs about algorithms:

  • Algorithmic Agency: ‘We sometimes believe the algorithm is like a person’; we may negotiate with the algorithm, changing our behaviour to alter the output of the algorithm
  • Algorithmic Authority (a term by Clay Shirky, who defines it as our tendency to believe algorithms more than people): ‘We sometimes believe that the algorithm is smarter than us’
  • Algorithm as Adversary: ‘We believe the algorithm is something we can cheat or hack’; this is seen in student strategies for altering TurnItIn scores, or in cases where people play off one dating app against another
  • Algorithm as Conversational Resource: ‘We think we can use algorithms to talk to others’; this can be seen for example when people tailor Spotify feeds to impress others and create common conversational interests
  • Algorithm as Audience: ‘We believe that algorithms are watching us’; this is the sense that we are performing for our algorithms, such as when students consider TurnItIn as their primary audience
  • Algorithm as Oracle: ‘We sometimes believe algorithms are magic’; this is seeing algorithms as fortune tellers or as able to reveal hidden truths, involving a kind of magical thinking

The real pleasure we find in algorithms is the sense that they really know us, but there is a lack of critical perspective and an overall capitulation to the logic of the algorithm, which is all about the monetisation of our data. There is no way we can really understand algorithms, but we can think critically about the role they play in our lives. He concluded with a quote from Ben Ratliff, a music critic at The New York Times: “Now the listener’s range of access is vast, and you, the listener, hold the power. But only if you listen better than you are being listened to”.

In her presentation, From hip-hop pedagogies to digital media pedagogies: Thinking about the cultural politics of communication, Ana Deumert discussed the privileging of face-to-face conversation in contemporary culture; a long conversation at a dinner party would be seen as a success, but a long conversation on social media would be seen as harmful, unhealthy, a sign of addiction, or at the very least a waste of time. Similarly, it is popularly believed that spending a whole day reading a book is good; but reading online for a whole day is seen as bad.

She asked what we can learn from critical hip-hop studies, which challenge discourses of school versus non-school learning. She also referred to Freire, who considered that schooling should establish a connection between learning in school and learning in everyday life outside school. New media, she noted, have offered opportunities to minorities, the disabled, and speakers of minority languages. If language is seen as free and creative, then it is possible to break out of current discourse structures. Like hip-hop pedagogies, new media pedagogies allow us to bring new perspectives into the classroom, and to address the tension between institutional and vernacular communicative norms through minoritised linguistic forms and resources. She went on to speak of Kenneth Goldsmith’s course Wasting Time on the Internet at the University of Pennsylvania (which led to Goldsmith’s book on the topic), where he sought to help people think differently about what is happening culturally when we ‘waste’ time online. However, despite Goldsmith’s comments to the contrary, she argued that online practices always have a political dimension. She concluded by suggesting that we need to rethink our ideologies of language and communication; to consider the semiotics and aesthetics of the digital; and to look at the interplay of power, practice and activism online.

Given the current global sociopolitical climate, it was perhaps unsurprising that the conference also featured a very timely strand on superdiversity. The symposium Innovations and challenges in language and superdiversity, chaired by Miguel Pérez-Milans, highlighted the important intersections between language, mobility, technology, and the ‘diversification of diversity’ that characterises increasing areas of contemporary life.

In his presentation, Engaging superdiversity – An empirical examination of its implications for language and identity, Massimiliano Spotti stressed the importance of superdiversity, but indicated that it is not a flawless concept. Since its original use in the UK context, the term has been taken up in many disciplines and used in different ways. Some have argued that it is theoretically empty (but maybe it is conceptually open?); that it is a banal revisitation of complexity theory (but their objects of enquiry differ profoundly); that it is naïve about inequality (but stratification and ethnocentric categories are heavily challenged in much of the superdiversity literature); that it lacks a historical perspective (he agreed with this); that it is neoliberal (the subject it produces is a subject that fits the neoliberal emphasis on lifelong learning); and that it is Eurocentric, racist and essentialist.

He went on to report on research he has been conducting in an asylum centre. Such an asylum-seeking centre, he said, is effectively ‘the waiting room of globalisation’. Its guests are mobile people, and often people with a mobile. They may be long-term, short-term, transitory, high-skilled, low-skilled, highly educated, low-educated, and may be on complex trajectories. They are subject to high integration pressure from the institution. They have high insertional power in the marginal economies of society. Their sociolinguistic, ethnic, religious and educational backgrounds are not presupposable.

In his paper, ‘Sociolinguistic superdiversity’: Paradigm in search of explanation, or explanation in search of paradigm?, Stephen May went back to Vertovec’s 2007 work, focusing on the changing nature of migration in the UK; ethnicity was too limiting a focus to capture the differences of migrants, with many other variables needing to be taken into account. Vertovec was probably unaware, May suggested, of the degree of uptake the term ‘superdiversity’ would see across disciplines.

May spoke of his own use of the term ‘multilingual turn’, and referred to Blommaert’s emphasis on three key aspects of superdiversity, namely mobility, complexity and unpredictability. The new emphasis on superdiversity is broadly to be welcomed, he suggested, but there are limitations. He outlined four of these:

  • the unreflexive ethnocentrism of western sociolinguistics and its recent rediscovery of multilingualism as a central focus; this is linked to a ‘presentist’ view of multilingualism, with a lack of historical focus
  • the almost exclusive focus on multilingualism in urban contexts, constituting a kind of ‘metronormativity’ compared to ‘ossified’ rural/indigenous ‘languages’, with the former seen as contemporary and progressive, thus reinforcing the urban/rural divide
  • a privileging of individual linguistic agency over ongoing linguistic ‘hierarchies of prestige’ (Liddicoat, 2013)
  • an ongoing emphasising of parole over langue; this is still a dichotomy, albeit an inverted one, and pays insufficient attention to access to standard language practices; it is not clear how we might harness different repertoires within institutional educational practices

In response to such concerns, Blommaert (2015) has spoken about paradigmatic superdiversity, which allows us not only to focus on contemporary phenomena, but to revisit older data to see it in a new light. There are both epistemological and methodological implications, he went on to say. There is a danger, however, in a new orthodoxy which goes from ignoring multilingualism to fetishising or co-opting it. We also need to attend to our own positionality and the power dynamics involved in who is defining the field. We need to avoid superdiversity becoming a new (northern) hegemony.

In her paper, Superdiversity as reality and ideology, Ryuko Kubota echoed the comments of the previous speakers on human mobility, social complexity, and unpredictability, all of which are linked to linguistic variability. She suggested that superdiversity can be seen both as an embodiment of reality as well as an ideology.

Superdiversity, she said, signifies a multi/plural turn in applied linguistics. Criticisms include the fact that superdiversity is nothing extraordinary; many communities maintain homogeneity; linguistic boundaries may not be dismantled if analysis relies on existing linguistic units and concepts; and it may be a western-based construct with an elitist undertone. As such, superdiversity is an ideological construct. In neoliberal capitalism there is now a pushback against diversity, as seen in nationalism, protectionism and xenophobia. But there is also a complicity of superdiversity with neoliberal multiculturalism, which values diversity, flexibility and fluidity. Neoliberal workers’ experiences may be superdiverse or not so superdiverse; over and against linguistic diversity, there is a demand for English as an international language, courses in English, and monolingual approaches.

One emerging question is: do neoliberal corporate transnational workers engage in multilingual practices or rely solely on English as an international language? In a study of language choice in the workplace with Japanese and Korean transnational workers in manufacturing companies in non-English dominant countries, it was found that nearly all workers exhibited multilingual and multicultural consciousness. There was a valorisation of both English and a language mix in superdiverse contexts, as well as an understanding of the need to deal with different cultural practices. That said, most workers emphasised that overall, English is the most important language for business. Superdiversity may be a site where existing linguistic, cultural and other hierarchies are redefined and reinforced. Superdiversity in corporate settings exhibits contradictory ideas and trends.

In terms of neoliberal ideology, superdiversity, and the educational institution, she mentioned expectations such as the need to produce original research at a sustained pace; to conform to the conventional way of expressing ideas in academic discourse; and to submit to conventional assessment linked to neoliberal accountability. Consequences include a proliferation of trendy terms and publications; and little room for linguistic complexity, flexibility, and unpredictability. She went on to talk about who benefits from discussing superdiversity. Applied linguistics scholars are embedded in unequal relations of power. As theoretical concepts become fetishised, the theory serves mainly the interests of those who employ it, as noted by Anyon (1994). It is necessary for us to critically reflect, she said, on whether the popularity of superdiversity represents yet another example of concept fetishism.

In conclusion, she suggested that superdiversity should not merely be celebrated without taking into consideration historical continuity, socioeconomic inequalities created by global capitalism, and the enduring ideology of linguistic normativism. Research on superdiversity also requires close attention to the sociopolitical trend of increasing xenophobia, racism, and assimilationism. Ethically committed scholars, she said, must recognise the ideological nature of trendy concepts such as superdiversity, and explore ways in which sociolinguistic inquiries can actually help narrow racial, linguistic, economic and cultural gaps.

Rio de Janeiro viewed from Pão de Açúcar

Rio de Janeiro viewed from Pão de Açúcar. Photo by Mark Pegrum, 2017. May be reused under CC BY 4.0 licence.

AILA 2017 wrapped up after a long and intensive week, with conversations to be continued online and offline until, three years from now, AILA 2020 takes place in Groningen in the Netherlands.

International connections

GloCALL
Hotel Ciputra, Jakarta, Indonesia, 8-9 November 2008

This year’s GloCALL Conference focused on Globalization and Localization in CALL, bringing together presenters and participants from a wide variety of countries to discuss their shared interest in the broad – and expanding – field of computer-assisted language learning. We spent two intensive days in the Hotel Ciputra, many floors above the busy, traffic-filled streets of the Indonesian capital, sharing international, national and local perspectives on technology-enhanced communication and collaboration, much of it facilitated by web 2.0 tools. Key themes included the fostering of collaboration and growth of community through CALL, and the vast range of CALL manifestations, each of which may be appropriate to different students in different contexts. There was a notable focus on the use of audio and/or video in conjunction with blogs, e-portfolios, digital storytelling, podcasting and m-learning.

Blogging was the focus of Penny Coutas’s session, Blogging for learning, teaching and researching languages, in which she demonstrated the principles behind blogging in an interactive paper-based exercise, before going on to outline the uses of blogs for learners, teachers and researchers. She stressed that the value of blogs lies as much in the interactions and community building that go on around them as it does in the actual blog postings themselves.

Podcasting was the focus of Wai Meng Chan’s plenary, Harnessing mobile technologies for foreign language learning: The example of podcasting. After reviewing the literature on podcasting, he described a research project conducted at NUS, which showed very positive overall student reactions to podcasting. He noted that podcasting can lead to a great variety of different kinds of language practice.

My own talk, entitled Web 2.0: Connecting the local and the global, discussed the ways in which a variety of web 2.0 tools, including blogs, wikis, RSS, podcasting, vodcasting and virtual worlds, can be used to connect the local and the global as part of the language learning process. These tools can help students not only to learn language, but also to begin to develop the local and global linguistic affiliations which are so important for today’s citizens.

There is continued interest in the area of e-portfolios, complemented by rapidly growing interest in digital storytelling, as reflected in a number of talks and workshops. Debra Hoven, in a paper entitled Digital storytelling and eportfolios for language teaching and learning, spoke of digital stories, whether collaborative or individual, as a valuable mode of communication. She noted that digital stories can be used for reflection, sharing, presentation, showcasing knowledge or skills, and can even function as part of or in conjunction with e-portfolios. Typical goals may include improvement of L1 and L2 literacy as well as multiliteracy skills, (re-)connecting with family, culture and traditions, and intergenerational communication. They can be a means of expression, an avenue of creativity, a way to make the mainstream curriculum more meaningful, and can help L2 learners to find their own voices. They are, ultimately, about language for real purposes and real audiences, involving practice in the following areas:

  • writing/scripting (grammar, vocabulary, syntax, genre, register, audience, interest)
  • communicating a message
  • organising ideas

The notion of community was also stressed by Peter Gobel in his paper, Digital storytelling: Capturing experience and creating community. He described a pilot project conducted with Japanese learners of English from Kyoto University, who were asked to create digital stories about key experiences on overseas language learning trips from which they had recently returned.

A number of language areas were involved:

  • topic choice – focus
  • narrative awareness – voice and audience
  • organisational skill – expression of ideas
  • mixed media (created and found objects)

In addition, students required scaffolding in multimedia and digital composition skills. Overall benefits of the exercise included:

  • debriefing after the trip
  • creating a database (to be consulted by future students travelling overseas)
  • reflection on learning experiences
  • comparison and sharing of experiences
  • creating a social network of shared experiences

There is also continued and even growing interest in open source software such as Moodle (which was covered in a number of presentations) and Drupal, as well as other freeware which can be used in language teaching. John Brine, in a paper entitled English language support for a computer science course using FLAX and Moodle, outlined developments around the New Zealand Digital Library Project run by the University of Waikato, with particular focus on the Greenstone Digital Library and the FLAX (Flexible Language Acquisition) Project, which allows language exercises to be created based on freely available material drawn from web sources such as Wikipedia and the Humanity Development Library. There is now a prototype version of a FLAX module for Moodle, which allows students to collaborate on language exercises.

Phil Hubbard’s plenary focused on the need for Integrating learner training into CALL classrooms and materials. He argued that CALL can give students more control over – and thus more responsibility for – their own learning, but that they are generally not prepared to take on this responsibility and so need training in this area. Reiterating the learner training principles he outlined at WorldCALL 2008, he concluded that it is not just the technology that matters; nor is it just a case of how teachers use the technology; rather, it is important to train learners to use it effectively. In his paper, entitled An invitation to CALL: A guided tour of computer-assisted language learning, he introduced the online site which underpins his own teacher training course, An invitation to CALL.

In her plenary, Individuals, community, communication and language pedagogy: Emerging technologies that are shaping and are being shaped by our field, Debra Hoven suggested that rather than using multiple, slightly different terms to describe different aspects of language learning with technology, we should work with one main term (such as CALL) to maintain cohesion in the field. She went on to argue against chronological classifications of CALL which, she said, do not really capture what people are doing with the technology. She proposed her own six-part model to capture the main roles of CALL:

  1. Instructional/tutorial CALL (language classroom applications, sites such as Randall’s ESL Lab)
  2. Discovery/exploratory CALL (simulations, roleplays, webquests)
  3. Communications CALL (CMC involving language for real communication purposes)
  4. Social networked CALL (blogging, microblogging, photosharing, SNS and social bookmarking)
  5. Collaborative CALL (notably wikis)
  6. Narrative/reflective CALL (digital storytelling and e-portfolios)

It became apparent in a number of talks that, while educators around the world share similar interests and concerns with the use of technology, there are also important geographical differences. In his opening plenary, entitled CALL implementation in Indonesia – Yesterday, today and tomorrow, Indra Charismiadji explained that obstacles to use of recent educational technologies in Indonesia include technological issues such as lack of hardware, software and internet connectivity; policy issues such as governmental and institutional support for behaviourist pedagogical approaches; teachers’ resistance to change; and a general lack of computer literacy. Computer-based teaching (which fits with a transmission pedagogy where the teacher remains in control) may represent a first step towards broader adoption of more recent e-learning approaches and tools.

All in all, it was fascinating to compare CALL perspectives and experiences, noting some differences but also the considerable similarities in educators’ interests around the world.

Technology bridging the world

WorldCALL
Fukuoka International Congress Center, Fukuoka, Japan, 6-8 August 2008

The theme of WorldCALL 2008, the five-yearly conference now being held for the third time, was “CALL bridges the world”.  With participants from over 50 countries, and presentations on every aspect of language teaching through technology, it became a self-fulfilling prophecy.

Key themes

Key themes of the conference included the need for a sophisticated understanding of our technologies and their affordances; the importance of teacher involvement and task design in maximising collaboration and online community; the potential for intercultural interaction; the role of cultural and sociocultural issues; the need for reflection on the part of both teachers and students on all of the above; and, in particular, the need for much more extensive teacher training.

There was a wide swathe of technologies, tools and approaches covered, including:

  • email;
  • VLEs, in particular, Moodle;
  • web 2.0 tools, especially blogs and m-learning/mobile phones, but also microblogging, wikis, social networking, and VoIP/Skype;
  • borderline web 2.0/web 3.0 tools like virtual worlds and avatars;
  • ICALL, speech recognition and TTS software;
  • blended learning;
  • e-portfolios.

With up to 8 concurrent sessions running at any given moment, it was impossible to keep up with everything, but here’s a brief selection of themes and ideas …

Communication & collaboration

In her paper “Mediation, materiality and affordances”, Regine Hampel considered the contrasting views that the new media have the advantage of quantitatively increasing communication but the disadvantage of creating reduced-cue communication environments.  She concluded that there are many advantages to using computer-mediated communication with language learners, but that we need to focus on areas such as:

  • multimodal communication: we need to bear in mind that while new media offer new ways of interacting and negotiating meaning, dealing with multiple modes as well as a new language at the same time may lead to overload for students;
  • collaboration: task design is essential to scaffolding collaboration, with different tools supporting collaborative learning in very different ways; there is also a need to make collaboration integral to course outcomes;
  • cultural and institutional issues: this includes the value placed on collaboration;
  • student/teacher roles: online environments can be democratic but students need to be autonomous learners to exploit this potential;
  • the development of community and social presence at a distance;
  • teacher training.

Intercultural interaction

Karin Vogt and Keiko Miyake, discussing “Telecollaborative learning with interaction journals”, showed the great potential for intercultural learning which is present in cross-cultural educational collaborations.  Their work showed that the greatest value could be drawn from such interactions by asking the students to keep detailed reflective journals, where intercultural themes and insights could emerge, and/or could be picked up and developed by the teacher.  They added that their own results, based on a content analysis of such journals from a German-Japanese intercultural email exchange programme, confirmed the results of previous studies that the teacher has a very demanding role in initiating, planning and monitoring intercultural learning.

Marie-Noëlle Lamy also stressed the intercultural angle in her paper “We Argentines are not as other people”, in which she explained her experience with designing an online course for Argentine teachers.  After explaining the teaching methodology and obstacles faced, she went on to argue that we are in need of a model of culture to use in researching courses such as this one – but not an essentialist model based on national boundaries.  She is currently addressing this important gap (something which Stephen Bax and I are also dealing with in our work on third spaces in online discussion) by developing a model of the formation of an online culture.

Teacher (and learner) training

In their paper “CALL strategy training for learners and teachers”, Howard Pomann and Phil Hubbard offered the following list of five principles to guide teachers in the area of CALL:

  • Experience CALL yourself (so teachers can understand what it feels like to be a student using this technology);
  • Give learners teacher training (so they know what teachers know about the goals and value of CALL);
  • Use a cyclical approach;
  • Use collaborative debriefings (to share reflections and insights);
  • Teach general exploitation strategies (so users can make the most of the technologies).

In conclusion, they found that learner strategy training was essential to maximise the benefits of CALL and could be achieved in part through the keeping of reflective journals (for example as blogs), which would form a basis for collaborative debriefings.  As in many other papers, it was stressed that teacher training should be very much a part of this process.

In presenting the work carried out so far by the US-based TESOL Technology Standards Taskforce, Phil Hubbard and Greg Kessler demonstrated the value of developing a set of broad, inclusive standards for teachers and students, concluding that:

  • bad teaching won’t disappear with the addition of technology;
  • good teaching can often be enhanced by the addition of technology;
  • the ultimate interpretation of the TESOL New Technology standards needs to be pedagogical, not technical.

In line with the views of many other presenters, Phil added that we need to stop churning out language teachers who learn about technology on the job; newer teachers need to acquire these skills on their pre-service and in-service education programmes.

Important warnings and caveats about technology use emerged in a session entitled “Moving learning materials from paper to online and beyond”, in which Thomas Robb, Toshiko Koyama and Judy Naguchi shared their experience of two projects in whose establishment Tom had acted as mentor.  While both projects were ultimately successful, Tom explained that mentoring at a distance is difficult, with face-to-face contact required from time to time, as a mentor can’t necessarily anticipate the knowledge gaps which may make some instructions unfathomable.  At the moment, it seems there is no easy way to move pre-existing paper-based materials online in anything other than a manual and time-consuming manner.  This may improve with time but until then we may still need to look to enthusiastic early adopters for guidance; technological innovation, he concluded, is not for the faint of heart and it may well be a slow process towards normalisation …

Normalisation, nevertheless, must be our goal, argued Stephen Bax in his plenary “Bridges, chopsticks and shoelaces”, in which he expanded on his well-known theory of normalisation.  Pointing out that there are different kinds of normalisation, ranging from the social and institutional to the individual, Stephen argued that:

A technology has arguably reached its fullest possible effectiveness only when it has arrived at the stage of ‘genesis amnesia’ (Bourdieu) or what I call ‘normalisation’.

Normalised technologies, he suggested, offer their users social and cultural capital, so that if students do not learn about technologies, they will be disadvantaged.  In other words, if teachers decide not to use technology because they personally don’t like it, they may be doing their students a great disservice in the long run.

At the same time, he stressed, it is important to remember that pedagogy and learners’ needs come first – technology must be the servant and not the master. Referring to the work of Kumaravadivelu and Tudor, he suggested that we must always respect context, with technology becoming part of a wider ecological approach to teaching.

There were interesting connections between the ecological approach proposed by Stephen and Gary Motteram’s thought-provoking paper, “Towards a cultural history of CALL”, in which he advocated the use of third generation activity theory to describe the overall interactions in CALL systems.  There was also a link with my own paper, “Four visions of CALL”, which argued for the expansion of our vision of technology in education to encompass not just technological and pedagogical issues, but also broader social and sociopolitical issues which have a bearing on this area.

Specific web 2.0 technologies

In “Learner training through online community”, Rachel Lange demonstrated a very successful discussion-board based venture at a college in the UAE, where, despite certain restrictions – such as the need to separate the genders in online forums – the students themselves have used the tools provided to build their own communities, where more advanced students mentor and support those with a lower level of English proficiency.

In Engaging collaborative writing through social networking, Vance Stevens and Nelba Quintana outlined their Writingmatrix project, designed to help students form online writing partnerships.  Operating within a larger context of paradigm shift – including pedagogy (didactic to constructivist), transfer (bringing social technologies from outside the classroom into the classroom), and trepidation (it’s OK not to know everything about technology and work it out in collaboration with your students) – they effectively illustrated the value of a range of aggregation tools to facilitate collaboration between educators and students; these included Technorati, del.icio.us, Crowd status, Twemes, FriendFeed, Dipity and Swurl.

Claire Kennedy and Mike Levy’s paper “Mobile learning for Italian” focused on the very successful use of mobile phone ‘push’ technology at Griffith University in Queensland.  In the context of a discussion of the horizontal and vertical integration of CALL, Mike commented on the irony that many teachers and schools break the horizontal continuity of technology use by insisting that mobile phones are switched off as soon as students arrive at school.  Potentially these are very valuable tools which, according to Mellow (2005), can be used in at least three ways:

  • push (where information is sent to students);
  • pull (where students request messages);
  • interactive (push & pull, including responses).
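Mellow's three modes could be sketched as a simple message-flow model; the class, method names and lesson content here are hypothetical, for illustration only:

```python
# A toy model of Mellow's (2005) three modes of mobile learning delivery.
# All names and content are illustrative, not from the Griffith programme.

class MobileLearningService:
    def __init__(self, lessons):
        self.lessons = lessons          # lesson_id -> content
        self.outbox = []                # messages "sent" to students

    def push(self, student, lesson_id):
        """Push mode: the service sends material unprompted."""
        self.outbox.append((student, self.lessons[lesson_id]))

    def pull(self, student, lesson_id):
        """Pull mode: the student requests material."""
        return self.lessons[lesson_id]

    def interactive(self, student, lesson_id, answer):
        """Interactive mode: push a prompt, then evaluate the response."""
        prompt = self.lessons[lesson_id]
        self.outbox.append((student, prompt))
        return "correct" if answer.lower() in prompt.lower() else "try again"

svc = MobileLearningService({"it1": "Translate 'hello': ciao"})
svc.push("anna", "it1")                      # push: unsolicited vocabulary item
print(svc.pull("anna", "it1"))               # pull: student-initiated request
print(svc.interactive("anna", "it1", "ciao"))  # interactive: prompt + response
```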

Despite some doubts in the literature about the invasion of students’ social spaces by push technologies, Mike and Claire showed that their programme of sending lexical and other language-related as well as cultural material to Italian students has been a resounding success, with extremely positive feedback overall.

Other successful demonstrations of technology being used in language classrooms ranged from Alex Ludewig’s presentation on “Enriching the students’ learning experience while ‘enriching’ the budget”, in which she showed the impressive multimedia work done by students of German in Simulation Builder, to Salomi Papadima-Sophocleous’s work with “CALL e-portfolios”, where she showed the value of e-portfolios in preparing future EFL teachers as reflective, autonomous learners.

Beyond web 2.0 – to web 3.0?

As Trude Heift explained in her plenary, “Errors and intelligence in CALL”, CALL ranges from web 2.0 to speech technologies, virtual worlds, corpus studies, and ICALL.  While most of the current educational focus is on web 2.0, there are interesting developments in other areas.  It seems to me that, to the extent that web 3.0 involves the development of the intelligent web and/or the geospatial web, some of these developments may point the way to the emergence of web 3.0 applications in education.

Trude’s own paper focused on ICALL and natural language processing research, whose aim is to enable people to communicate with machines in natural language.  We have come a long way from the early Eliza program to Intelliwise’s web 3.0 conversational agent, which is capable of holding much more natural conversations.  While ICALL is still a young discipline and there are major challenges to be overcome in the processing of natural language – particularly the error-prone language of learners – it holds out the promise of automated systems which can create learner-centred, individualised learning environments thanks to modelling techniques which address learner variability and offer unique responses and interactions.  This is certainly an area to watch in years to come.
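Eliza itself relied on shallow pattern matching rather than any genuine language understanding – a minimal sketch of the idea (illustrative rules only, not the original DOCTOR script) shows how far ICALL research has had to move beyond it:

```python
import re

# A few Eliza-style rules: regex pattern -> reflection template.
# Matching surface patterns like this involves no model of the learner
# or of language - exactly the limitation ICALL/NLP research addresses.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def eliza_reply(utterance: str) -> str:
    """Return a canned reflection for the first matching rule."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."
```

Anything the rules don’t anticipate falls through to a stock phrase – which is why learner errors, unpredictable by definition, are such a hard problem for this family of techniques.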

On a simpler level, text-to-speech and voice-processing software is already being used in numerous classrooms around the world.  Ian Wilson, for example, presented an effective model of “Using Praat and Moodle for teaching segmental and suprasegmental pronunciation”.

Another topic raised in some papers was virtual worlds, which some would argue are incipient web 3.0 spaces.  Due to time limitations and timetable clashes, I didn’t catch these papers, but it’s certainly an area of growing interest – and in the final panel discussion, Ana Gimeno-Sanz, the President of EuroCALL, suggested that this might become a dominant theme at CALL conferences in the next year or so.

The final plenary panel summed up the key themes of the conference as follows:

  • the importance of pedagogy over technology (Osamu Takeuchi);
  • the need to consider differing contexts (OT);
  • the ongoing need for conferences like this one to consider best practice, even if the process of normalisation is proceeding apace (Thomas Robb);
  • the need to reach out to non-users of technology (TR);
  • the need for CALL representation in more general organisations (TR);
  • the professionalisation of CALL (Bob Fischer);
  • the need to consider psycholinguistic as well as sociolinguistic dimensions of CALL (BF);
  • the shift in focus from the technology (the means) to its application (the end) (Ana Gimeno-Sanz);
  • the need to extend our focus to under-served regions of the world (AG-S).

The last point was picked up on by numerous participants and a long discussion ensued on how to overcome the digital divide in its many aspects.  A desire to share the benefits of the technology was strongly expressed – both by those with technology to share and those who would like to share in that technology. That, I suspect, will be a major theme of our discussions in years to come: how to spread pedagogically appropriate, contextually sensitive uses of technology to ever wider groups of teachers and learners.

Tag: WorldCALL08
