Smart language learning

Liberty Square (自由廣場), Taipei, Taiwan. Photo by Mark Pegrum, 2019. May be reused under CC BY 4.0 licence.

PPTELL Conference
Taipei, Taiwan
3-5 July 2019

The second Pan-Pacific Technology-Enhanced Language Learning Conference took place over three days in midsummer in Taipei, with a focus on language learning within smart learning environments.

In his keynote, In a SMART world, why do we need language learning?, Robert Godwin-Jones spoke of visions of a world with universal machine translation; innovations in this area range from phone translators and Google Pixel Buds to devices like Pocketalk and Illi. But it’s time for a reality check, he suggested: it’s not transparent communication because you have to awkwardly foreground the device; there are practical issues with power and internet connections; and although the devices can handle basic transactional language, the user remains on the outside of the language and the culture.

We are now seeing advances in AI thanks to deep learning and big data, including in areas such as voice recognition and voice synthesis, and we are seeing a proliferation of smart assistants and smart home devices; along with commercial efforts, there are efforts to create open source assistants. Siri and Google can operate in dozens of languages. Amazon’s Alexa now has nearly 100,000 ‘skills’ and users are being invited to add new languages. Smart assistants are already being used for language learning, for example for training pronunciation or conversational practice. We are gradually moving away from robotic voices thanks to devices such as Smartalk and Google Duplex; assistants such as the latter work within a limited domain, making the conversation easier to handle, but strategic competence is needed to avoid breakdowns in communication. Likely near-term developments include further improvements in natural language understanding, first in English and then in other languages, and voice technology being built into ever more devices (with human-sounding voices raising questions of trust and authenticity). However, there are challenges because of the issues of:

  • cacophony (variations of standard usage, specialised vocabulary, L2 learners, the need for a vast and continuously updated database);
  • colour (idioms, non-verbal communication);
  • creativity (conventions may change depending on context, tone, individual idiosyncrasies);
  • culture (knowing grammar and vocabulary only gets you so far, as you need to be able to adapt to cultural scripts, and to develop pragmatic competencies);
  • codeswitching (frequent mixing of languages, especially online, in a world of linguistic superdiversity).

There is emerging evidence that young people are learning languages informally online, especially English, as they employ it for recreational and social purposes (see Cole & Vanderplank, 2016). We may be moving towards a different conception of language relating to usage-based linguistics, which is about patterns rather than rules. It may call into question the accepted dogma of SLA (the noticing hypothesis, intentionality, etc.) and the idea that learning comes from explicit instruction. However, there are caveats: most studies focus on English and on intermediate or advanced learners, who may not be reflecting much on their language learning.

The scenario we should promote is one where we blend formal and informal learning. For monolinguals and beginners, structure is helpful; for advanced learners, fine-tuning may be important. Teachers may model learner behaviour, and incorporating virtual exchange is easier when there is a framework. There are also issues with finding appropriate resources for a given individual learner. Some possible frameworks for thinking about this situation include:

  • structured unpredictability (teacher supplies structure; online resources supply unpredictability and digital literacy; students move from L2 learners to L2 users; a formal framework adds scope for reflection and intercultural awareness – Little & Thorne, 2017);
  • inverted pedagogy (teachers should be guides to what students are already learning outside class – Sockett, 2014);
  • bridging activities (students act as ethnographers selecting content outside the classroom as they build interest, motivation and literacy – Thorne & Reinhardt, 2008);
  • global citizenship (students learn through direct contact and building critical language awareness through telecollaboration);
  • serendipitous learning (we should have a learner/teacher mindset everywhere; there is a major role for place-based learning and mobile companions using AR/VR/mixed reality – Vazquez, 2017).

Smart technology can help through big data and personalised learning, including language corpora. In the future, smart will get smarter, he suggested. More options will mean more complexity; the rise of smart tech + informal SLA = something new. There will be more variety of student starting points, identities, and resources; we could consider the perspective supplied by complexity theory here. We need to rethink some standard approaches in CALL research:

  • causality, going beyond studies of single variables;
  • individualisation, because one size doesn’t fit all;
  • description, not prediction;
  • assessment, which should be global and process-based in scope;
  • longitudinal approaches, picking up learning traces (see the keynote by Kinshuk, below).

A possible way forward for CALL research, he concluded, is indicated by Lee, Warschauer & Lee, 2019.

In his keynote, Smart learning approaches to improving language learning competencies, Kinshuk pointed out that education has become more inclusive, taking into account the needs of all students, and focusing on individual strengths and characteristics. There are various learning scenarios, both in class and outside class, which must be relevant to students’ living and work environments. There is a focus on authentic learning with physical and digital resources. The overall result is a better learning experience.

Learning should be omnipresent and highly contextual, he suggested. We need seamless learning integrated into every aspect of life; it should be immersive and always on; it should happen so naturally and in such small chunks that no conscious effort is needed to engage with it in everyday life. Technologies provide us with the means to realise this vision.

Smart learning analytics is helpful because it allows us to discover, analyse and make sense of student, instruction and environmental data from multiple sources to identify learning traces in order to facilitate instructional support in authentic learning environments. We require a past record and real-time observation in order to discover a learner’s capabilities, preferences and competencies; the learner’s location; the learner’s technology use; technologies surrounding the learner; and changes in the learner’s situational aspects. We analyse the learner’s actions and interactions with peers, instructors, physical objects and digital information; trends in the learner’s preferences; and changes in the learner’s skill and knowledge levels. Making sense is about finding learning traces, which he defined as follows: a learning trace comprises a network of observed study activities that lead to a measurable chunk of learning. Learning traces are ‘sensed’ and supply data for learning analytics, where data is typically big, un/semi-structured, seemingly unrelated, and not quite truthful (with possible gaps in data collection), and fits multiple models and theories.
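Kinshuk’s definition of a learning trace – a network of observed study activities leading to a measurable chunk of learning – can be illustrated with a toy sketch. The event fields, scores and mastery threshold below are my own illustrative assumptions, not part of his system:

```python
# Toy sketch of spotting 'learning traces' in an activity log.
# All field names and the mastery threshold are illustrative assumptions.

def find_learning_traces(events, mastery_gain=0.2):
    """Group observed study activities by skill and keep those whose
    first and last assessed scores show a measurable gain."""
    by_skill = {}
    for e in events:
        by_skill.setdefault(e["skill"], []).append(e)
    traces = []
    for skill, acts in by_skill.items():
        scores = [a["score"] for a in acts if a.get("score") is not None]
        if len(scores) >= 2 and scores[-1] - scores[0] >= mastery_gain:
            traces.append({"skill": skill,
                           "activities": [a["activity"] for a in acts],
                           "gain": round(scores[-1] - scores[0], 2)})
    return traces

events = [
    {"skill": "listening", "activity": "podcast", "score": 0.4},
    {"skill": "listening", "activity": "quiz", "score": 0.7},
    {"skill": "reading", "activity": "article", "score": 0.6},
]
print(find_learning_traces(events))
```

Real systems of the kind described would of course draw on far messier, multi-source data; the point here is only the structure of the idea: activities grouped into a network, with a measurable learning outcome attached.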

In the smart language learning context, he mentioned a smart analytics tool called 21cListen, which allows learners to listen to different audio content and respond (e.g., identifying the main topic, linking essential pieces of information, locating important details, answering specific questions about the content, and paraphrasing their understanding), and analyses their level of listening comprehension depending on the nature and timing of their responses. Analytics does not replace the teacher, but gives the teacher more tools; and as teachers give feedback, the system learns from them and improves. Work is still underway on this project, with the eventual aim of producing a theory of listening skills. He went on to outline other tools taking a similar analytics approach to reading, speaking and writing.

In his keynote, Learning another first language with a robot ‘mother’ and IoT-based toys, Nian-Shing Chen spoke of the advantages of mixed-race babies growing up speaking two languages, a situation which could be mimicked with the use of a robot ‘mother’ speaking a language other than the baby’s mother tongue. This, he suggested, would help to solve L2 and FL learning difficulties indirectly but effectively. It would deal with issues of age (the need for extensive language exposure before the age of three), exposure (with children in language-rich households receiving up to 30 million words of input by age three), and real ‘human’ input (since when babies watch videos or listen to audio, they do not acquire language as they do from their mothers).

His design involves toys for cultivating the baby’s cognitive development, a robot for cultivating the baby’s language development, and the use of IoT sensors for the robot to be fully aware of the context, including the interaction situation and the surrounding environment. The 3Rs (critical factors for effective language learning design) are, he said, repetition, relevance and relationship. The idea is for the robot to interact with the baby through various toys. He is currently carrying out work on various types of robots: a facilitation robot, a 3D book playing robot, a storytelling robot, a Chinese classifiers learning robot, and a STEM and English learning robot.

NTNU Linkou Campus (台師大·林口校區), Taipei, Taiwan. Photo by Mark Pegrum, 2019. May be reused under CC BY 4.0 licence.

In his presentation, Autonomous use of technology for learning English by Taiwanese students at different proficiency levels, Li-Tang Yu suggested that technology offers many opportunities for self-directed learning, which is important as students need to spend more time learning English outside of their regular classes. In his study, he found there was no significant difference between high and low proficiency English learners in terms of the amount of autonomous technology-enhanced learning they undertook. Most students in both groups mentioned engaging in receptive skills activities, but the high proficiency students engaged in more productive skills activities. Teachers should familiarise students with technology-enhanced materials for language learning, and recommend that they undertake more productive activities.

In her talk, Online revised explicit form-focused English pronunciation instruction in the exam-oriented context in China, Tian Jingxuan contrasted the traditional method of intuitive-imitative pronunciation instruction with newer and more effective form-focused instruction; in revised explicit form-focused instruction, there is a focus on both form and meaning practice. In her study, she contrasted traditional instruction (control group) with revised explicit form-focused instruction (experimental group, which also undertook after-class practice) in preparing students for the IELTS exam in China. Participants in the experimental group performed better in both the immediate and the delayed post-tests; she concluded that revised explicit form-focused instruction is more effective in preparing students for their exams, at least in the case of the low-achieving students she studied.

In the paper, Investigating learners’ preferences for devices in mobile-assisted vocabulary learning, Tai-Yun Han and Chih-Cheng Lin reported on a study of the device preferences of 11th grade EFL students in Taiwan, based on past studies conducted by Glenn Stockwell in Japan. The most popular tool for completing vocabulary exercises was a mobile phone, followed by a desktop PC, laptop PC and tablet PC; students’ scores were similar, as was the amount of time required to complete the tasks. In general, students have high ownership of mobile phones and low availability of other devices (unlike the college students in Stockwell’s studies), and are accustomed to mobile lives.

In his paper, Perceptions, affordances, effectiveness and challenges of using a mobile messenger app for language learning, Daniel Chan spoke about the use of WhatsApp to support the teaching of French as a foreign language in Singapore. It has many features that are useful for language teaching, e.g., the recording of voice messages, the annotating of pictures, and the sharing of files. Some possibilities include:

  • teachers sharing announcements with students;
  • students sharing information with teachers;
  • sharing photos of work done in class;
  • sharing audio files;
  • correcting students’ texts by marking them up on WhatsApp.

In a survey, he found that many students were already using WhatsApp groups to support their studies, but without teachers present in those groups. Students’ perceptions of the use of WhatsApp for language learning (in a group including a teacher) were generally very positive; for example, they liked being able to clear up doubts immediately, engaging in collaborative and multimodal learning, and preserving traces of their learning. However, some found such a group too public, and much depends on the dynamics of groups; there is also a danger of message overload if students are offline for a while. Both teachers and students may feel under pressure to respond quickly at all times. In summary, despite some challenges, there is real potential in the use of WhatsApp for language learning, but its broader use will require a change of mindset on the part of teachers and students.

In their presentation, Does watching 360 degree virtual reality videos enhance Mandarin writing of Vietnamese students?, Thi Thu Tam Van and Yu-Ju Lan described a study in which students viewed photos (control group) or viewed 360 degree videos with Google Cardboard headsets (experimental group) before engaging in writing activities. Significant differences were found in all areas assessed (content, organisation, etc.) and in overall performance; the authentic context provided by the 360 degree videos thus enhanced the level of students’ Mandarin writing. All students in the experimental group preferred using Google Cardboard to traditional methods in writing lessons.

In their paper, Discovering the effects of 3D immersive experience in enhancing oral communication of students in a college of medicine, Yi-Ju Ariel Wu and Yu-Ju Lan mentioned that 3D virtual worlds allow learners to immerse themselves fully and perform contextualised social interactions, while reducing their anxiety. The virtual world used was the Omni Immersion Vision Program from NTNU, Taiwan, and students engaged in a role-play about obesity (experimental group), while another group of students performed the role-play in a face-to-face classroom (control group). The experimental group created more scenes than the control group; used a wider range of objects; had richer communication, with the emergence of spontaneous talk; and their interaction was generally more fluid and imaginative. The experimental group said that using the virtual world reduced their fear of oral communication; made them more imaginative; and made oral communication more interesting.

On the final day of the conference, I had the honour of chairing a session comprising six short papers covering topics such as online feedback, differences in MALL between countries, the use of WeChat for intercultural learning, and location-based games. I wrapped up this session with my own presentation, Personalisation, collaboration and authenticity in mobile language learning, where I outlined some of the key principles to consider when designing mobile language and literacy learning experiences for students.

Overall, the conference provided a good snapshot of current thinking about promoting language learning through smart technologies, an area whose potential is just beginning to unfold.

Asking the big questions around online learning

ICDE World Conference on Online Learning
Toronto, Canada
16-19 October 2017

New City Hall, Toronto, Canada. Photo by Mark Pegrum, 2017. May be reused under CC BY 4.0 licence.

The ICDE World Conference on Online Learning, focusing on the theme of “Teaching in a Digital Age – Re-thinking Teaching and Learning”, took place over four days in October 2017. As at other recent large technology conferences, it was interesting to see increasing recognition of the broader sociopolitical and sociocultural questions in which online learning is embedded, as reflected in many of the presentations. Papers were presented for the most part in groups of three or four under overarching strands. Short presentation times somewhat restricted the content that speakers were able to cover, but each set of papers was followed by audience discussion where key points could be elaborated on.

In his plenary presentation, edu@2035: Big shifts are coming, Richard Katz referred to Marshall McLuhan’s comment that “we march backwards into the future”, meaning that it is very difficult for us to predict the future without using the past as a framework. He went on to speak of Thomas Friedman’s framework for the future involving six core strategies – analyze, optimize, prophesize, customize, digitize and automize – in which, Katz suggested, all companies as well as all educational institutions need to be engaged. He suggested we may need to consider wild scenarios: could admission to colleges in the future be based not on performance tests but on genotyping? The gap between technology advancement and socialisation of technologies is widening, he stated.

As we look to the future, we have some choices in post-secondary education: avoid the topic; succumb to paralysis by analysis; choose mindful incrementalism; or invent a new future. To do the last of these, we need to take at least part of our attention off the rear view mirror. We need to construct scenarios, develop models, identify risks, and extract themes, and to present these ideas in short video formats that will be engaged with by today’s audiences. In short, we need to iterate, communicate and engage.

He mentioned William Gibson’s comment that “the future is already here, it’s just not very evenly distributed”, and a comment from Barry Chudakov (Sertain Research) that “algorithms are the new arbiters of human decision-making”. Evidence that the future is now can be found in various areas, from chatbots to the explosion of investment in cognitive systems and artificial intelligence (AI). Drawing on Pew Internet research, he suggested algorithms will continue to proliferate, with good things lying ahead. However, humanity and human judgement are lost when data and predictive modelling become paramount. Biases exist in algorithmically organised systems, and algorithmic categorisations deepen divides. Unemployment will rise. And the need is growing for algorithmic literacy, transparency and oversight.

He asked whether, by 2035, we can use new technologies and algorithms to personalise instruction in ways that both lower costs and foster engagement, persistence, retention and successful learning, possibly on a global scale. He concluded with a number of key points:

  • The robots are coming, aka change is inevitable;
  • Our mission must endure (even as our delivery and business models change);
  • While the past may be a prologue, there will be new winners and losers;
  • A future alma mater may be Apple, Google, Microsoft, Alibaba …;
    • scale is critical;
    • lifetime employability is critical;
    • students will determine winners and losers;
  • The future is already here, the question is whether we can face it;
    • ‘extreme planning’ must be practised;
  • Never discount post-secondary education.

In his plenary presentation, Reboot. Remake. Re-imagine, John Baker, the CEO of D2L, asked why so many movie makers decide to re-imagine old movies. It’s because the original idea was great, but something has changed in the meantime, and a new direction is needed. Today’s political, scientific and environmental problems will ultimately be solved through education and its ripple effects, he suggested. In the current climate of rapid change, it is essential to focus not on remaking or rebooting, but rather on re-imagining the possible shape of education.

The technology must be about more than convenience; it must improve learning and increase engagement and satisfaction. Well-designed learning software can allow teachers to reach every student; what if, he asked, there were no back of the classroom? It should be possible to reach remote learners, disabled learners, refugees, or students using a range of devices from the brand new to hand-me-down technologies (hence the importance of responsive design).

We will soon see AI, machine learning, automation and adaptive learning becoming important; it is not just technology that is changing, but pedagogy. He cited an Oxford University study suggesting that 47% of all current jobs will cease to exist within two decades as a result of the advent of AI. The reality is that our skills development is not currently keeping pace with what will be needed in an AI-enabled future. Continuing educational opportunities for the workforce will be key here.

He suggested that the most important pedagogical innovation of the current era is competency-based education. In a discussion of its advantages, especially when accelerated by adaptive learning, he indicated that the greatest benefit is not so much the achievement of the competencies, but the leftover time and what can be done with it – could students learn more about other areas? Could they enrich their education through more research even at undergraduate level?

In response to an audience question, he also suggested that ‘learning management system’ (LMS) is an outdated term and ‘learning platform’ or ‘learning hub’ might be preferable. How do we capture and share the learning that is taking place across multiple platforms and spaces? It is vital that these systems should be porous, and interoperability between systems (e.g., through Learning Tools Interoperability [LTI] and Caliper) is essential.

In his plenary presentation, The future of learning management systems: Development, innovation and change, Phil Hill suggested that while there are many exciting educational technology developments, there is also a great deal of unhelpful hype about them. The steady, slower paced progress being made at institutions – for example in the introduction of online courses – is in many ways disconnected from the media and other hype. What is important is what can be done with asynchronous, individualised online education that cannot be done so easily in a plenary face-to-face classroom. Some of today’s most creative courses are bypassing LMSs in order to incorporate a wider range of platforms and tools.

Most institutions now have an LMS; these are seen as necessary but not exciting or dynamic. The core management system design is based on an old model. Some companies are trying to add innovative features, but it’s not clear how useful or effective some of these may be. (It may be that over time all ed tech platforms start adding in extra features which eventually make them look like badly-designed LMSs.) When LMSs first appeared, there were few competitors, but now there are many flexible platforms available, creating demand for LMSs to replicate the same features. There is considerable frustration with LMSs, which are seen as much harder to use than the platforms and tools on the wider web.

He mentioned that in higher education Moodle is currently the LMS with the largest user base, while Canvas is the fastest growing LMS. At school level, Google Classroom, Schoology and Edmodo have some leverage, but they are less used in higher education. Many other platforms have attempted to enter the mainstream but have since disappeared. Overall, this is a fairly static market without many new successful entrants. The trend is towards having these systems hosted in the cloud; this may be the only choice with some LMSs, such as Canvas. While there is currently a lot of movement towards open education, in North America the installed base of LMSs is moving away from the main open source providers, Moodle and Sakai; similar trends are seen elsewhere. There is a certain perception that these look less professional than the proprietary systems. Open source is arguably not as important as it used to be; many educational institutions have moved away from their original concern not to be beholden to commercial providers, and are now focusing more on whether staff and students are happy with the system. Worldwide we’re seeing most institutions working with the same small number of LMSs: Canvas, D2L, Blackboard, Moodle and to some extent Sakai. We should consider the implications of this.

The question is how to resolve the tension between faculty desires to use the proliferating educational technologies which offer lots of flexible teaching and learning options, and institutional insistence that faculty make use of the LMS. Many people are saying that LMSs should go away, but in fact that’s not what we’re seeing happening. Opposition to LMSs is largely based on the fact that they function as walled gardens, which is how they were originally designed. In many cases, they have added poor versions of external software like blogs or social networks, and there has been an overall bloating of the systems.

What we’re seeing now is a breaking down of the walled garden model. The purpose of an LMS is coming to be seen as providing the basics, with gateways offered so that there are pathways to the use of external tools. It should be easy to access and integrate these external tools when faculty wish to use them. Interoperability of tools through systems like LTI, xAPI and Caliper is an important direction of development, though there is a need for these standards to evolve. The key point, however, is the acceptance by the industry that this is the direction of evolution. He concluded that there are three major trends in LMSs nowadays: cloud hosting; less cluttered, more intuitive designs; and an ability to integrate third-party software. Much of this change has been inspired by the Educause work on NGDLEs (next-generation digital learning environments). There is a gradual move among LMS providers towards responsive designs so that LMSs can be used more effectively on mobile devices.
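Of the interoperability standards mentioned, xAPI is perhaps the easiest to illustrate: learning activity, wherever it happens, is reported as simple actor-verb-object statements sent to a learning record store (LRS). The statement structure below follows the xAPI specification, but the learner, verb choice and activity URI are invented for illustration:

```python
import json

# A minimal xAPI statement: actor-verb-object is the core structure the
# specification defines. The learner, course and activity URI here are
# illustrative, not taken from any real system.
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "A. Learner"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "http://example.com/courses/french-101/unit-3",
               "definition": {"name": {"en-US": "Unit 3 quiz"}}},
}

# In practice this JSON would be POSTed to a learning record store;
# here we simply serialise it to show the shape of the data.
print(json.dumps(statement, indent=2))
```

Because any tool that can emit such statements can feed the same record store, this is one concrete way the "walled garden" gets opened up: the LMS no longer has to be where all the activity data lives.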

The strand Engaging online learners focused on improving learning outcomes through improving learner engagement. In their presentation, Engaging online students by fostering a community of practice, Robert Kershaw and Patricia Lychak explained their belief that if facilitators are engaged with developing their own competencies, then they will use these to engage students. Initially, a small number of workshops and informal support were provided for online facilitators in their institution; then a training specialist was brought in; and it was found through applied research that online facilitators wanted more development in student engagement, supporting student success, and technology use. A community of practice model with several stages has been developed:

  • onboarding (web materials, a handbook, and a discussion forum);
  • community building (a discussion forum, webinars, and in-person events);
  • coaching (checking in with new facilitators, one-on-one support, and inquiries);
  • student success initiatives (early check-ins with students, mid-term progress reports on students, and final grade entry);
  • training (shaped in part by feedback from the earlier stages; this in turn shapes the next onboarding phase).

Lessons learned include:

  • introduce variety (delivery method, timing, detail level);
  • encourage sharing (best practices, student success stories, sample report comments);
  • promote efficiency (pre-populate templates, convert documents to PDF fillable forms, highlight LMS time-saving tools).

What the trainers try to do is to model for online facilitators what they can do for and with their students.

In his presentation, Chasing the dream: The engaged learner, Dan Piedra indicated that the tools we invest in can lock us into design mode templates. He quoted Sean Michael Morris: “today most students of online courses are more users than learners … the majority of online learning basically asks humans to behave like machines”. Drawing on Coates (2007), he suggested that engagement involves:

  • active learning;
  • collaborative learning;
  • participation in challenging academic activities;
  • formative communication with academic staff, and involvement in enriching educational experiences;
  • feeling legitimated/supported by learning communities;
  • work-integrated learning.

He showed a model being used at McMaster University involving the company Riipen, which places a student with a partner company that assesses students’ skills, after which the professor assigns a grade.

In her talk, A constructivist pedagogy: Shifting from active to engaged learning, Cheryl Oyer referred to Garrison, Anderson and Archer’s Community of Inquiry model involving cognitive, teaching and social presence. She mentioned a series of learner engagement strategies for nursing students: simulations, gamification, excursions, badges and portfolios.

The strand Online language learning focused on the many possibilities for promoting language learning through digital technologies. In his presentation, The language laboratory with a global reach, Michael Dabrowski talked about a Spanish OER Initiative at Athabasca University. The textbook was digitised, with the Moodle LMS being used as the publishing platform. Open technologies were used, including Google Maps (as a venue for students to conduct self-directed sociocultural investigations), Google Translate (as a dictionary, and a pronunciation and listening tool, which now also incorporates Word Lens for mobile translation), and Google Voice (the foundation for an objective open pronunciation tutor). With Google Translate, there are some risks, including laziness with translation and uncritical acceptance of translations, but in fact it was found that students were noticing errors in Google’s translations. Google Voice, meanwhile, is not a perfect pronunciation tutor; sometimes it is too critical, and sometimes too forgiving. Voice recognition by a computer is nevertheless a preferable form of feedback compared to learners’ own self-comparisons with language speakers heard in an audio laboratory; effectively it is possible to have a free open mobile language learning laboratory nowadays.

In her presentation, Open languages – Open online learning communities for better opportunities, Joerdis Weilandt described an open German learning course she has run on the free Finnish Eliademy platform. In setting up this course, it was important to transition from closed to open resources so they could be modified as needed. Interactive elements were added to the materials presented to students using the H5P software.

In their paper, Language learning MOOCs: Classifying approaches to learning, Mairéad Nic Giolla Mhichíl and Elaine Beirne explained that there has been a significant increase in the availability of LMOOCs (language learning MOOCs). They were able to identify 105 LMOOCs in 2017, and used Gráinne Conole’s 12-dimension MOOC classification to present an analysis of these (to be published in a forthcoming EuroCALL publication). They went on to speak about a MOOC they have created on the FutureLearn platform to promote the learning of Irish.

In his presentation, Online learning: The case of migrants learning French in Quebec, Simon Collin suggested that linguistic integration is important in supporting social and professional integration. This has traditionally been done face-to-face but increasingly it is being done online. Advantages of online courses for migrants fall into two major categories: they can anticipate their linguistic integration before arriving; and after migration, they can take online courses to facilitate a work-family-language learning balance. He described a questionnaire about perceptions of online learning answered by 1,361 adult migrants in Quebec. The common pattern was to take an online course before arrival, and then a face-to-face course after arrival. Respondents thought online courses were more helpful for developing reading and listening, but not as helpful for developing speaking skills.

The strand Leveraging learning analytics for students, faculty and institutions brought together papers focusing on the highly topical area of learning analytics. In their paper, Implementing learning analytics in higher education: A review of the literature and user perceptions, Nobuko Fujita and Ashlyne O’Neil indicated that there are benefits of learning analytics for educators in terms of improving courses and teaching, and for students in terms of improving their own learning. They reported on a study of perceptions of learning analytics by educators, administrators and students. Overall, there was a concern with the impact on students; the main concerns reported included profiling students, duty to respond, data security and consent.

In her presentation, An examination of active learning designs and the analytics of student engagement, Leni Casimiro indicated that active learning has four main components: experiential, interactive, reflective, and deep (higher-order). She reported on a study making use of learning analytics to determine to what extent students were in fact engaged in active learning. Descriptive analytics revealed considerable variation in levels of activity among the three courses examined; this was due to differences in student outcomes (tasks should help students focus rather than distract them), teacher participation (teacher presence is essential), interactivity (teacher participation is important, as is the quality of questions), and the nature of the students (asynchronous communication may be preferred by international students). Given the weight it carries in active learning, teacher participation deserves special attention.

In his presentation, Formative analytics: Improving online course learning continuously, Shunping Wei explained that formative analytics are focused on supporting the learner to reflect on what has been learned, what can be improved, which goals can be achieved, and how to move forward. Formative analytics reports should be provided not only to management, but to teachers. It is possible to track whether students access all parts of an online course and whether they do so often, which would likely be signs of a good learner. It is also possible to create a radar map for a certain person or group, comparing their performance with the average.
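The radar-map comparison Wei describes can be sketched in a few lines of code. The example below is purely illustrative (the metric names and scores are invented, and a real LMS would supply the data): it builds the (dimension, student score, cohort average) triples that a radar chart would plot.

```python
from statistics import mean

def radar_profile(student_scores, cohort_scores):
    """Return (dimension, student score, cohort average) triples
    suitable for plotting on a radar chart."""
    profile = []
    for dimension, score in student_scores.items():
        cohort_avg = round(mean(cohort_scores[dimension]), 2)
        profile.append((dimension, score, cohort_avg))
    return profile

# Hypothetical engagement metrics, normalised to 0-1.
student = {"videos_viewed": 0.9, "quizzes_done": 0.7, "forum_posts": 0.4}
cohort = {
    "videos_viewed": [0.5, 0.8, 0.9, 0.6],
    "quizzes_done": [0.6, 0.7, 0.9, 0.8],
    "forum_posts": [0.2, 0.5, 0.7, 0.6],
}
print(radar_profile(student, cohort))
```

The same function works for a group rather than an individual: pass the group's mean per dimension as `student_scores` and the whole-cohort data as `cohort_scores`.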

The strand Mobile learning: Learning anytime, anywhere brought together a number of different perspectives on m-learning. In their presentation, Design principles for an innovative learner-determined mobile learning solution for adult literacy, Aga Palalas and Norine Wark spoke about their project focusing on a literacy uplift solution in the context of surprisingly low adult literacy rates in Canada. They have created a cross-platform mobile app for formal and informal learning incorporating gamification elements within a constructivist framework, but with more traditional behaviourist components as well. Based on data obtained in their study to date, key design themes and principles have been determined as follows:

  • mobility: design for the mobile learner;
  • learner-determined: respond to the learner;
  • context: integrate environmental influences.

Future plans include presentation of the pedagogical and technological principles and guidelines, and replication of the study in different contexts.

In her presentation, English to go: A critical analysis of apps for language learning, Heather Lotherington suggested that there is an element of technological determinism in mobile-assisted language learning (MALL). In MALL, there can be an app-only/content-oriented approach which gives you a course-in-a-box; or design-oriented learning which uses the affordances of mobile technologies in customised learning. Examining the most popular commercial language learning apps, she found that most were underpinned by ‘zombie pedagogies’ involving grammar-translation, audiolingualism, teaching by testing, drill and kill, decontextualised vocabulary, and so on. Ultimately, there were multiple flaws in theory, pedagogy, and practice. This led to failures from the point of view of mobility (with a need to record language in a quiet room rather than in everyday settings), gamification, and language teaching (there was, for example, generally a 4-skills model of language learning, which is outdated in an era of multimodal communication). Companies are also gathering users’ data for their own purposes. It is essential, she concluded, that language teachers are involved in designing contemporary approaches to mobile language learning; and teachers should also be familiar with content-based apps so they can incorporate them strategically in design-based language teaching and avoid technological determinism. Later, in question time, she went on to suggest that what we are currently confronted with is a difficult interface between slow scholarship and fast marketing.

In her presentation, New delivery tool for mobile learning: WeChat for informal learning, Rongrong Fan explained that WeChat has taken over from QQ as the most popular messaging platform in China. WeChat incorporates instant messaging, official accounts, and ‘moments’ (this works on the same principle as sharing materials on Facebook). Some institutions are using official accounts which push learning material to students, which could be as little as a word a day; an advantage is that this can support bite-sized learning, but a disadvantage is that too many subscriptions can lead to ‘attention theft’. WeChat can be used for live broadcasting with low fees; this allows more direct interaction but the long-term learning effects and value are questionable. It is also possible to set up virtual learning communities in the form of WeChat groups; this can be motivating and help to overcome geographical barriers, but learners may not be making real progress if they are learning only from each other. She concluded that WeChat can be integrated into formal learning as a complementary platform; that use of WeChat could be incorporated in teacher training to give teachers more options for delivering their content; and that a strong learner support team is needed.

The strand Virtual reality and simulation in fact covered both virtual and augmented reality. In his presentation, Flipping a university for a global presence with mirrored reality, Michael Mathews spoke about the Global Learning Center at Oral Roberts University. Augmented and virtual reality, he said, are additive to the experience that students receive, and can help us reach the highest level of Creating in Bloom’s Taxonomy. The concept of mirrored reality brings together augmented and virtual reality. These technologies can offer ways of reaching a diverse range of students scattered around the world.

In my own presentation, Taking learning to the streets: The potential and reality of mobile gamified language learning, which also formed part of this strand, I outlined the value of an augmented reality approach for helping students to engage with authentic language learning experiences in everyday life.

The strand Augmented reality: Aspects of use in education highlighted a range of contemporary uses of AR. In their talk, Distributed augmented reality training to develop skills at a distance, Mohamed Ally and Norine Wark described AR as an innovative solution to rapidly evolving learning needs. They spoke of their research on an industrial training package about valve repair and maintenance created by Scope AR and delivered onsite via iPads and AR glasses, for which they gathered data relating to the first three levels of the Kirkpatrick Model. The response to the AR training was overwhelmingly positive, with past hands-on training being seen as second-best, and computer-based training being least valued. It was felt that AR could replace lengthy training programmes. Scope AR has now developed a Remote AR collaboration tool which can be used to deliver support at a distance. The presenters concluded by saying that AR could have many applications in distance education where the expert is in one location but can communicate at a distance to tutor or train someone in a different location.

In his presentation, Augmented reality and technical lab training using HoloLens, Angelo Cosco explained that skilled trades training can be created to be accessed via Microsoft’s HoloLens, allowing students to learn at their own pace, but also offering development opportunities for employees. Advantages include the fact that unlike with VR, there are no issues with nausea; users can wear the HoloLens and move around easily; and recordings can be made and sent immediately through wifi networks.

In their paper, Maximizing learner engagement and skill development through modelling, immersion and gameplay, Naza Djafarova and Leonora Zefi demonstrated a training game for community nurses (though not an AR game per se), shown in the Therapeutic Communication and Mental Health Assessment Demo video. The game is set up on a choose-your-own-adventure model, giving students a chance to practise what they have learned in a simulated and ‘safe’ environment, which is especially valuable given the lack of practicum placement positions available. Usability testing was conducted to identify benefits and determine possible future improvements. Students felt that the game helped them to build confidence and evoked empathy, and added that they were very engaged. They thought that the purpose of the resource should be explained up front, and requested more instructions on how to play the game, as well as an alternative to scoring. The research team’s current focus is on how to facilitate game design in multidisciplinary teams, and on examining linkages between learning objectives and game mechanics.

CN Tower, Toronto

CN Tower, Toronto. Photo by Mark Pegrum, 2017. May be reused under CC BY 4.0 licence.

While many of the talks described above already began addressing the bigger philosophical issues around digital learning, there were also strands dedicated to these larger questions. The strand Ethical issues in online learning brought together presentations addressing a wide range of ethical issues connected with digital learning. In his presentation, Privacy-preserving learning analytics, Vassilios Verykios noted that we all create a unique social genome through the many activities we engage in online. There is now an unprecedented power to analyse big data for the benefit of society; there can be improvement in daily lives, and verification of research results and reductions in the costs of research projects, but strict privacy controls are needed. There are some regulatory frameworks already in place to protect data, including the US HIPAA and FERPA and the EU Data Protection Directive and General Data Protection Regulation (GDPR). The last of these deals with consent, data governance, auditing, and transparency regarding data breaches. There are data ownership issues, given that companies collect data for their own benefit; from a research perspective, it is important to remember that data is gathered by different bodies with their own ways of managing and storing it.

When it comes to educational data, technology now allows us to monitor students’ activities. Learning analytics is used to improve the educational system as a whole, but also to personalise the teaching of students. ‘Data protection’ involves safeguarding data so it cannot be accessed by intruders, while ‘data confidentiality’ means data can be accessed only by legally authorised individuals. Data de-identification is a way of stripping individually identifying characteristics out of the data collected; one approach to anonymised data is known as k-anonymity. A fundamental challenge comes from the fact that when we anonymise data we lose a lot of information, and the statistics may change somewhat; it is therefore necessary to find a balance between accessing useful data and protecting privacy.
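To illustrate k-anonymity: a dataset is k-anonymous if every combination of quasi-identifier values (such as an age band and a generalised postcode) is shared by at least k records, so no individual can be singled out within their group. A minimal check, with invented records, might look like this:

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Return True if every combination of quasi-identifier values
    appears in at least k records."""
    groups = Counter(
        tuple(record[q] for q in quasi_identifiers) for record in records
    )
    return all(count >= k for count in groups.values())

# Hypothetical de-identified student records: ages generalised to bands,
# postcodes truncated, with a sensitive attribute (grade) left intact.
records = [
    {"age_band": "20-29", "postcode": "100*", "grade": 78},
    {"age_band": "20-29", "postcode": "100*", "grade": 85},
    {"age_band": "30-39", "postcode": "100*", "grade": 91},
    {"age_band": "30-39", "postcode": "100*", "grade": 64},
]
print(is_k_anonymous(records, ["age_band", "postcode"], k=2))  # True
```

If the check fails, the usual remedy is to generalise the quasi-identifiers further (wider age bands, shorter postcode prefixes), trading statistical precision for privacy, which is exactly the balance Verykios describes.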

In his presentation, The ethics of not contributing to digital addiction in a distance education environment, Brad Huddleston indicated that addiction takes place in the same part of the brain, regardless of what you are addicted to. Addiction involves going harder to generate larger quantities of dopamine and so overcome the chemical barrier erected by the brain to deflect excessive amounts of dopamine. When it comes to digital addiction, the symptoms are: anger when devices are taken away; anxiety disorders; and boredom (the last of these results from a lack of dopamine getting through the brain’s dopamine barrier). Studies have suggested, amongst other things, that computers do not necessarily improve education; that reading online is less effective than reading offline; and that one of the most popular educational games in the world, Minecraft, is also one of the most addictive.

There is a place, he stated, for the analogue to be re-integrated into education, though not to the exclusion of the digital. We should work within the limitations of the brain for each age group; that means less technology use at lower ages. We should teach students what mono-tasking or uni-tasking is about. We also need to understand, he said, that digital educational content is just as addictive as non-educational content.

In her presentation, ‘Troubling’ notions of inclusion and exclusion in open distance and e-learning: An exploration, Jeanette Botha mentioned that the divide between developed and developing nations is increasing, largely because of a lack of internet access in the latter. In the global north, there has traditionally been a concern with equity, participation and profit. In the global south, there has been more of an emphasis on social justice, equity and redress; social justice incorporates the notion of social inclusion. Inclusivity, she went on to say, now has a moral, and by extension, ethical imperative.

Since the Universal Declaration of Human Rights in 1948, there has been a focus on the inclusiveness of education. Open and distance learning are seen as a key social justice and inclusion instrument and avenue. However, we haven’t made the kind of progress that might have been expected. One reason is that context matters. Contextual barriers to inclusivity include:

  • technology (infrastructure and affordability);
  • quality (including accreditation, service and quality of the learning experience);
  • cultural and linguistic relevance and responsiveness;
  • perceived racial barriers;
  • ‘graduateness’ and employability of open and distance learning graduates;
  • confusion, conflation and fragmentation in the global open and distance learning environment.

In his presentation, Intercultural ethics: Which values to teach, learn and implement in online higher education and how?, Obiora Ike mentioned global challenges such as the rise of populism, economic and environmental problems, addictions, and issues of inclusivity. Culture matters, he argued, and from culture come values and ethics. Behaving in an ethical way engenders trust and promotes an ethical environment. Globethics.net, based in Geneva, has developed an online database of materials about ethics as well as a values framework. We must integrate ethics with all forms of education, he argued. This is a project being pursued for example through the Globethics Consortium, which focuses on ethics in higher education.

The strand Online learning and Indigenous peoples brought together papers on a variety of projects focused on Indigenous education through online tools. In the talk, A digital bundle – Indigenous knowledge on the internet: Emerging pedagogy, design, and expanding access, Jennifer Wemigwans suggested that respectful representations of knowledge online can be effective in helping others to access that knowledge. While it does not replace face-to-face transmission, cultural knowledge shared by elders online becomes accessible to those who might not otherwise have access to such knowledge, but who might wish to apply it in a range of contexts from the personal to the community sphere.

In her talk, Supporting new Indigenous postgraduate student transitions through online preparation tools, Lynne Petersen spoke about supporting Indigenous students through online tools in the Medical and Health Sciences Faculty at the University of Auckland in New Zealand. The work is framed theoretically by Indigenous research methodologies, transition pedagogies, and the role of technology and design in supporting empowerment (though there are questions for Indigenous communities where face-to-face traditions are prevalent). There may be a disconnect between perceptions of academic or professional competency in the university system, and cultural knowledge and competency within an Indigenous community. Among the online tools created, a reflective questionnaire helps students think through the areas in which they are well-prepared, and the areas where they may need support. Future explorations will address why the tools seem to work well for Māori students, but not necessarily for Samoan or Tongan students; it may be that, as they stand, these tools are not appropriate for all communities.

In the paper, Language integration through e-portfolio (LITE): A plurilingual e-learning approach combining Western and Indigenous perspectives, Aline Germain-Rutherford, Kris Johnston and Geoff Lawrence described a fusion of Western and Indigenous pedagogical perspectives. In a WordPress-based social space, each learner can trace their plurilingual journey covering the languages they speak, their daily linguistic encounters, and their cultural encounters. In another part of the website, students are directed to language exercises. After completing these, students can engage in a reflection covering questions relating to the four areas of mind, spirit, heart and body. Students can also respond to questions relating to the Common European Framework to build ‘radar charts’ reflecting their plurilingual, pluricultural identities. The fundamental aim of such an approach is to validate students’ plurilingual, pluricultural knowledge base.

Old City Hall, Toronto

Old City Hall, Toronto. Photo by Mark Pegrum, 2017. May be reused under CC BY 4.0 licence.

Bringing together a wide range of academic and industry perspectives, this conference provided an important forum for discovering digital learning practices from around the globe, while simultaneously thinking through some of the big questions posed by new technologies.

DIGITAL LESSONS, LITERACIES & IDENTITIES

AILA World Congress
Rio de Janeiro, Brazil
23-28 July 2017

Praia da Barra da Tijuca, Rio de Janeiro, Brazil

Praia da Barra da Tijuca, Rio de Janeiro, Brazil. Photo by Mark Pegrum, 2017. May be reused under CC BY 4.0 licence.

Having participated in the last two AILA World Congresses, in Beijing in 2011 and in Brisbane in 2014, I was delighted to be able to attend the 18th World Congress, taking place this time in the beautiful setting of Rio de Janeiro, Brazil. This year’s theme was “Innovations and Epistemological Challenges in Applied Linguistics”. As always, the conference brought together a large and diverse group of educators and researchers working in the broad field of applied linguistics, including many with an interest in digital and mobile learning, and digital literacies and identities. Papers ranged from the highly theoretical to the very applied, with some of the most interesting presentations actively seeking to build much-needed bridges between theory and practice.

In her presentation, E-portfolios: A tool for promoting learner autonomy?, Chung-Chien Karen Chang suggested that e-portfolios increase students’ motivation, promote different assessment criteria, encourage students to take charge of their learning, and stimulate their learning interests. Little (1991) looked at learner autonomy as a set of conditional freedoms: learners can determine their own objectives, define the content and process of their learning, select the desired methods and techniques, and monitor and evaluate their progress and achievements. Benson (1996) spoke of three interrelated levels of autonomy for language learners, involving the learning process, the resources, and the language. Benson and Voller (1997) emphasised four elements that help create a learning environment to cultivate learner autonomy, namely when learners can:

  • determine what to learn (within the scope of what teachers want them to learn);
  • acquire skills in self-directed learning;
  • exercise a sense of responsibility;
  • be given independent situations for further study.

Learners who are intrinsically motivated are more self-regulated; extrinsically motivated activity, in contrast, is less autonomous and more controlled. Either way, psychologically, students will be motivated to move forward.

The use of portfolios provides an alternative form of assessment. A portfolio can echo a process-oriented approach to writing. Within a multi-drafting process, students can check their own progress and develop a better understanding of their strengths and weaknesses. Portfolios offer multi-dimensional perspectives on student progress over time. The concept of e-portfolios is not yet fully fixed but includes the notion of collections of tools to perform operations with e-portfolio items, and collections of items for the purpose of demonstrating competence.

In a study with 40 sophomore and junior students, all students’ writing tasks were collected in e-portfolios constituting 75% of their grades. Many students agreed that writing helped improve their mastery of English, their critical thinking ability, their analytical skills, and their understanding of current events. They agreed that their instructor’s suggestions helped them improve their writing. Among the 40 students assessed on the LSRQ survey, the majority showed intrinsic motivation. Students indicated that the e-portfolios gave them a sense of freedom, and allowed them to challenge and ultimately compete against themselves.

Gamification emerged as a strong conference theme. In her paper, Action research on the influence of gamification on learning IELTS writing skills, Michelle Ocriciano indicated that the aim of gamification, which has been appropriated by education from the fields of business and marketing, is to increase participation and motivation. Key ‘soft gamification’ elements include points, leaderboards and immediate feedback; while these do not constitute full gamification, they can nevertheless have benefits. She conducted action research to investigate the question: how can gamification apply to a Moodle setting to influence IELTS writing skills? She found that introducing gamification elements into Moodle – using tools such as GameSalad, Quizlet, ClassTools, Kahoot! and Quizizz – not only increased motivation but also improved students’ spelling, broadened their vocabulary, and decreased the time they needed for writing, leading to increases in their IELTS writing scores. To some extent, students were learning about exam wiseness. The most unexpected aspect was that her feedback as the teacher increased in effectiveness, because students shared her individual feedback with peers through a class WhatsApp group. In time, students also began creating their own games.

The symposium Researching digital games in language learning and teaching, chaired by Hayo Reinders and Sachiko Nakamura, naturally also brought gaming and gamification to the fore in a series of presentations.

In their presentation, Merging the formal and the informal: Language learning and game design, Leena Kuure, Salme Kälkäjä and Marjukka Käsmä reported on a game design course taught in a Finnish high school. Students would recruit their friends onto the course, and some even repeated the course for fun. It was found that the freedom given to students did not necessarily mean that they took more responsibility, but rather this varied from student to student. Indeed, the teacher had a different role for each student, taking or giving varying degrees of responsibility. Students chose to use Finnish or English, depending on the target groups for the games they were designing.

The presenters concluded that in a language course like this, language is not so much the object of study (where it is something ‘foreign’ to oneself) but rather it is a tool (where it is part of oneself, and part of an expressive repertoire). Formal vs informal, they said, seems to be an artificial distinction. The teacher’s role shifts, with implications for assessment, and a requirement for the teacher to have knowledge of individual students’ needs. The choice of project should support language choice; this enables authentic learning situations and, through these, ‘language as a tool’ thinking.

In her presentation, The role of digital games in English education in Japan: Insights from teachers and students, Louise Ohashi began by referencing the gaming principles outlined in the work of James Paul Gee. She reported on a study of students’ experiences of and attitudes to using digital games for English study, as well as teachers’ experiences and attitudes. She surveyed 102 Japanese university students, and 113 teachers from high schools and universities. Students, she suggested, are not as interested as teachers in distinguishing ‘real’ games from gamified learning tools.

While 31% of students had played digital games in English in class over the previous 12 months, 50% had done so outside class, suggesting a clear trend towards out-of-class gaming. The games they reported playing covered the spectrum from general commercial games to dedicated language learning or educational games. Far more students than teachers thought games were valuable aids to study inside and outside class, as well as for self-study. Only 30% of students said that they knew of appropriate games for their English level, suggesting an area where teachers might be able to intervene more.

In fact, most Japanese classrooms are quite traditional learning spaces – often with blackboards and wooden desks, and no wifi – which do not lend themselves to gaming in class. While some teachers use games, many avoid them. One teacher surveyed thought students wouldn’t be interested in games; another worked at a school where students were not allowed to use computers or phones; another thought the school and parents would disapprove; others emphasised the importance of a focus on academic coursework rather than gaming; and still others objected to the idea that foreign teachers in Japan are supposed to entertain students. She concluded that most students were interested in playing games but most teachers did not introduce them, by choice or otherwise, possibly representing a missed opportunity.

In her presentation, Technology in support of heritage language learning, Sabine Little reported on an online questionnaire with 112 respondents, examining how families from heritage language backgrounds use technology to support heritage language literacy development for their primary school students. Two thirds of the families spoke two or more heritage languages in the home. She found that where there were children of different ages, use of the heritage language would often decrease for younger children.

Parents were gatekeepers of both technology use and choices of apps; but many parents didn’t have the technological understanding to identify apps or games their children might be interested in. Many thought that there were no apps in their language. Some worried about health issues; others worried about cost. There are both advantages and disadvantages in language learning games; many of these have no cultural content as they’re designed to work with more than one language. Similarly, authentic language apps have both advantages (e.g., they feel less ‘educational’) and disadvantages (e.g., they may be too linguistically difficult). Nevertheless, many parents agreed that their children were interested in games for language learning, and more broadly in learning the heritage language.

All in all, this is an incredibly complex field. How children engage with heritage language resources is linked to their sense of identity as pluricultural individuals. Many parents are struggling with the ‘bad technology’/’good language learning opportunity’ dichotomy. In general, parents felt less confident about supporting heritage language literacy development through technology than through books.

In my own presentation, Designing for situated language and literacy: Learning through mobile augmented reality games and trails, I discussed the places where online gaming meets the offline world. I focused on mobile AR gamified learning trails, drawing on examples of recent, significant, informative projects from Singapore, Indonesia and Hong Kong. The aim of the presentation was to whet the appetite of the audience for the possibilities that emerge when we bring together online gaming, mobility, augmented reality, and language learning.

AR and big data were also important conference themes. In his paper, The internet of things: Implications for learning beyond the classroom, Hayo Reinders suggested that algorithmic approaches like Bayesian Networks, Nonnegative Matrix Factorization, Random Forests, and Association Rule Mining are beginning to help us make sense of vast amounts of data. Although they are not familiar to most of today’s teachers, they will be very familiar to future teachers. We are gradually moving from reactive to proactive systems, which can predict future problems in areas ranging from health to education. Current education is completely reactive; we wait for students to do poorly or fail before we intervene. Soon we will have the opportunity to change to predictive systems. All of this is enabled by the underpinning technologies becoming cheaper, smaller and more accessible.
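Of the techniques Reinders lists, association rule mining is perhaps the simplest to illustrate. The sketch below is a toy version with invented learner events (production systems use optimised algorithms such as Apriori or FP-Growth): it mines one-to-one rules of the form A → B, reporting each rule's support and confidence.

```python
from itertools import combinations

def association_rules(transactions, min_support, min_confidence):
    """Mine one-to-one rules (A -> B) from sets of events, returning
    (antecedent, consequent, support, confidence) tuples."""
    n = len(transactions)
    counts = {}
    for t in transactions:
        items = set(t)
        for item in items:
            counts[frozenset([item])] = counts.get(frozenset([item]), 0) + 1
        for pair in combinations(sorted(items), 2):
            counts[frozenset(pair)] = counts.get(frozenset(pair), 0) + 1
    rules = []
    for itemset, pair_count in counts.items():
        if len(itemset) != 2 or pair_count / n < min_support:
            continue
        for a in itemset:
            b = next(iter(itemset - {a}))
            confidence = pair_count / counts[frozenset([a])]
            if confidence >= min_confidence:
                rules.append((a, b, round(pair_count / n, 2),
                              round(confidence, 2)))
    return rules

# Hypothetical per-session learner event logs.
logs = [
    {"watched_video", "passed_quiz"},
    {"watched_video", "passed_quiz"},
    {"watched_video"},
    {"skipped_video"},
]
print(association_rules(logs, min_support=0.5, min_confidence=0.6))
```

A rule like watched_video → passed_quiz, once mined at scale, is the kind of pattern a proactive system could use to flag at-risk students before they fail rather than after.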

He spoke about three key areas of mobility, ubiquity, and augmentation. Drawing on Klopfer et al (2002), he listed five characteristics of mobile technologies which could be turned into affordances for learning: portability; social interactivity; context sensitivity; connectivity; and individuality. These open up a spectrum of possibilities, he indicated, where the teacher’s responsibility is to push educational experiences towards the right-hand side of each pair:

  • Disorganised – Distributed
  • Unfocused – Collaborative
  • Inappropriate – Situated
  • Unmanageable – Networked
  • Misguided – Autonomous

Augmentation is about overlaying digital data, ranging from information to comments and opinions, on real-world settings. Users can add their own information to any physical environment. Such technologies allow learning to be removed from the physical constraints of the classroom.

With regard to ubiquity, when everything is connected to everything else, there is potentially an enormous amount of information generated. He described a wristband that records everything you do, 24/7, and forgets it after two minutes, unless you tap it twice to save what has been recorded and have it sent to your phone. Students can use this, for example, to save instances of key words or grammatical structures they encounter in everyday life. Characteristics of ubiquity that have educational implications include the following:

  • Permanency can allow always-on learning;
  • Accessibility can allow experiential learning;
  • Immediacy can allow incidental learning;
  • Interactivity can allow socially situated learning.
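The two-minute wristband described above is essentially a time-bounded rolling buffer: events silently expire unless the user opts to keep them. A minimal sketch of the idea (the device, its API and the event labels here are entirely hypothetical):

```python
import time
from collections import deque

class RollingRecorder:
    """Keeps only the last `window` seconds of events unless explicitly saved --
    a toy model of the always-on, forget-after-two-minutes wristband."""

    def __init__(self, window=120.0, clock=time.monotonic):
        self.window = window
        self.clock = clock          # injectable for testing/simulation
        self.buffer = deque()       # (timestamp, event) pairs
        self.saved = []

    def record(self, event):
        now = self.clock()
        self.buffer.append((now, event))
        self._evict(now)

    def _evict(self, now):
        while self.buffer and now - self.buffer[0][0] > self.window:
            self.buffer.popleft()   # silently forget expired events

    def double_tap(self):
        """Save whatever is still within the window, e.g. send it to the phone."""
        self._evict(self.clock())
        self.saved.extend(e for _, e in self.buffer)

# Simulated use with a fake clock:
t = [0.0]
rec = RollingRecorder(window=120.0, clock=lambda: t[0])
rec.record("heard: 'apologise'")
t[0] = 180.0                        # three minutes pass
rec.record("heard: 'subjunctive'")
rec.double_tap()                    # only the recent event survives
print(rec.saved)                    # ["heard: 'subjunctive'"]
```

The language-learning use case follows directly: a learner who hears an unfamiliar word double-taps within two minutes, and only that slice of everyday life is retained for later study.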

He went on to outline some key affordances of new technologies, linked to the internet of things, for learning:

  • Authentication for attendance when students enter the classroom;
  • Early identification and targeted support;
  • Adaptive and personalised learning;
  • Proactive and predictive rather than reactive management of learning;
  • Continuous learning experiences;
  • Informalisation;
  • Empowerment of students through access to their own data.

He wrapped up by talking about the Vital Project, which gives students visualisation tools and analytics to monitor online language learning. Research has found that students like having access to this information, and having control over what information they see, and when. They want clear indications of progress, early alerts, and recommendations for improvement. Cultural differences have also been uncovered in terms of the desire for comparison data; the Chinese students wanted to know how they were doing compared with the rest of the class and past cohorts, whereas the non-Chinese students did not.

There are many questions remaining about how we can best make use of this data, but it is already coming in a torrent. As educators, we need to think carefully about what data we are collecting, and what we can do with it. We, not computer scientists, are the ones who can make the relevant pedagogical decisions.

In his paper, Theory ensembles in computer-assisted language learning research and practice, Phil Hubbard indicated that the concept of theory was formerly quite rigidly defined, and involved the notion of offering a full explanation for a phenomenon. It has now become a very fluid concept. Theory in CALL, he suggested, means the set of perspectives, models, frameworks, orientations, approaches, and specific theories that:

  • offer generalisations and insights to account for or provide greater understanding of phenomena related to the use of digital technology in the pursuit of language learning objectives;
  • ground and sustain relevant research agendas;
  • inform effective CALL design and teaching practice.

He presented a typology of theory use in CALL:

  • Atheoretical CALL: research and practice with no explicit theory stated (though there may be an implicit theory);
  • Theory borrowing: using a theory from SLA, etc, without change;
  • Theory instantiation: taking a general theory that has a place for technology and/or SLA and instantiating it for CALL (e.g., activity theory);
  • Theory adaptation: changing one or more elements of a theory from SLA, etc, in anticipation of or in response to the impact of the technology;
  • Theory ensemble: combining multiple theoretical entities in a single study to capture a wider range of perspectives;
  • Theory synthesis: creating a new theory by integrating parts of existing ones;
  • Theory construction: creating a new theory specifically for some sub-domain of CALL;
  • Theory refinement: cycles of theory adjustment based on accumulated research findings.

He went on to provide some examples of research approaches based on theory ensembles. We’re just getting started in this area, and it needs further study and refinement. Theory ensembles seem to occur especially in CALL studies involving gaming, multimodality, and data-driven learning. Theory ensembles may be ‘layered’, with a broad theory providing an overarching approach or orientation, and complementary narrower theoretical entities providing focus. Similarly, members of a theory ensemble have different functions and therefore different weights in the overall picture. Some can be more central than others. A distinction might be made, he suggested, between one-time ensembles assembled for a given problem and context, and more stable ones that could lead to full theory syntheses. Finally, each ensemble member should have a clear function, and together they should lead to a richer and more informative analysis; researchers and designers should clearly justify the membership of ensembles, and reviewers should see that they do so.

Intercultural issues surfaced in many papers, perhaps most notably in the symposium Felt presence, imagined presence, hyper-presence in online intercultural encounters: Case studies and implications, chaired by Rick Kern and Christine Develotte. Kern suggested that people often imagine online communication to be immediate, but in fact it is heavily technologically mediated, which has major implications for the nature of communication.

In their paper, Multimodality and social presence in an intercultural exchange setting, Meei-Ling Liaw and Paige Ware indicated that there is a lot of research on multimodality, communication differences, social presence and intercultural communication, but it is inconclusive and sometimes even contradictory. They drew on social presence theory, which postulates that a critical factor in the viability of a communication medium is the degree of social presence it affords.

They reported on a project involving 12 pre-service and 3 in-service teachers in Taiwan, along with 15 undergraduate Education majors in the USA. Participants were asked to use VoiceThread, which allows text, audio and video communication, and combinations of these. Communication was in English, and was asynchronous because of the time difference. It was found that the US students used video exclusively, but the Taiwanese used a mixture of modalities (text, audio and video). The US students found video easy to use, but some Taiwanese students worried about their oral skills and felt they could organise their thoughts better in text; however, other Taiwanese students wanted to practise their oral English. All partnerships involved a similar volume of words produced, perhaps indicating that the groups were mirroring each other. In terms of the types of questions posed, the Taiwanese asked far more questions about opinions; the American students were more cautious about asking such questions, and also knew little about Taiwan and so asked more factual questions. Overall, irrespective of the modality employed, the two groups of intercultural telecollaborative partners felt a strong sense of membership and thought that they had achieved a high quality of learning because of the online partnership.

As regards the pedagogical implications, students need to be exposed to the range of features available in order to maximise the affordances of all the multimodal choices. In addition to helping students consider how they convey a sense of social presence through the words and topics they choose, instructors need to attend to how social presence is intentionally or unintentionally communicated in the choice of modality. The issue of modality choice is also intimately connected to the power dynamic that can emerge when telecollaborative partnerships take place as monolingual exchanges.

In their paper, Conceptualizing participatory literacy: New approaches to sustaining co-presence in social and situated learning communities, Mirjam Hauck, Sylvie Warnecke and Muge Satar argued that teacher preparation needs to address technological and pedagogical issues, as well as sociopolitical and ecological embeddedness. Both participatory literacy and social presence are essential, and require multimodal competence. The challenge for educators in social networking environments is threefold: becoming multimodally aware; establishing their own social presence; and then successfully participating in the collaborative creation and sharing of knowledge, so that they are well equipped to model such abilities and participatory skills for their students.

Digital literacy/multiliteracy in general, and participatory literacy in particular, is reflected in language learners’ ability to comfortably alternate in their roles as semiotic responders and semiotic initiators, and the degree to which they can make informed use of a variety of semiotic resources. The takeaway from this is that being multimodally able and as a result a skilled semiotic initiator and responder, and being able to establish social presence and participate online, is a precondition for computer-supported collaborative learning (CSCL) of languages and cultures.

They reported on a study with 36 pre-service English teachers learning to establish social presence through web 2.0 tools. Amongst other things, students were asked to reflect on their social presence in the form of a Glogster poster referring to Gilly Salmon’s animal metaphors for online participation (see p.12); students showed awareness that social presence is transient and emergent.

They concluded that educators need to be able to illustrate and model for their students the interdependence between being multimodally competent as reflected in informed semiotic activity, and the ability to establish social presence and display participatory literacy skills. Tasks like those in the training programme presented here, triggering ongoing reflection on the relevance of “symbolic competence” (Kramsch, 2006), social presence and participatory literacy, need to become part of CSCL-based teacher education.

In his presentation, Seeing and hearing apart: The dilemmas and possibilities of intersubjectivity in shared language classrooms, David Malinowski spoke about the use of high-definition video conferencing for synchronous class sessions in languages with small enrolments, working across US institutions.

It was found that technology presents an initial disruption which is overcome early in the semester, and does not prevent social cohesion. Participants are able to co-ordinate perspective-taking, dialogue, and actions with activity type and participation format. Synchronised performance, play and ritual may deserve special attention in addition to sequentially oriented events. History is made in the moment: durable learner identities are inflected moment to moment, and there are variable engagements through and with technology. There are ongoing questions about the parity of the educational experience in ‘sending’ and ‘receiving’ classrooms. Finally, there is a need to develop further tools to mediate the life-worlds of distance language learners across varying timescales.

Cristo Redentor, Rio de Janeiro, Brazil

Cristo Redentor, Rio de Janeiro, Brazil. Photo by Mark Pegrum, 2017. May be reused under CC BY 4.0 licence.

There were many presentations that ranged well beyond CALL, and to some extent beyond educational technologies, but which nevertheless had considerable contextual relevance for those working in CALL and MALL, and e-learning and mobile learning more broadly.

The symposium Innovations and challenges in digital language practices and critical language/media awareness for the digital age, chaired by Jannis Androutsopoulos, consisted of a series of papers on the nature of digital communication, covering themes such as the link between language use and language ideology; multimodality; and the use of algorithms. One key question, it was suggested in the introduction, is how linguistic research might speak to language education.

In their presentation, Critical media awareness in a digital age, Caroline Tagg and Philipp Seargeant stated that people’s critical awareness develops fluidly and dynamically over time in response to experiences online. They introduced the concept of context design, which suggests that context is collaboratively co-constructed in interaction through linguistic choices. The concept draws on the well-known notion of context collapse, but suggests that offline contexts cannot simply move online and collapse; rather, contexts are always actively constructed, designed and redesigned. Context design incorporates the following elements:

  • Participants
  • Online media ideologies
  • Site affordances
  • Text type
  • Identification processes
  • Norms of communication
  • Goals

They reported on a study entitled Creating Facebook (2014-2016). Their interviews revealed complex understandings of Facebook as a communicative space and the importance of people’s ideas about social relationships. These understandings shaped behaviour in often unexpected ways, in processes that can be conceptualised as context design. They concluded that the role of people’s evolving language/media awareness in shaping online experiences needs to be taken into account by researchers wishing to effectively build a critical awareness for the digital age.

In her paper, Why are you texting me? Emergent communicative practices in spontaneous digital interactions, Maria Grazia Sindoni suggested that multimodality arose as a reaction against language-driven approaches that sideline resources other than language; in mainstream multimodality research, however, language itself has in turn been sidelined. Language still needs to be studied, but on a par with other semiotic resources.

In a study of reasons for mode-switching in online video conversations, she indicated that the technical possibility of doing something does not equate with the semiotic choice of doing so. In the case of communication between couples, she noted a pattern where intimate communications often involve a switch from speech to text. She also presented a case where written language was used to reinforce spoken language; written conventions can thus be creatively resemiotised.

There are several layers of meaning-making present in such examples: creative communicative functions in language use; the interplay of semiotic resources other than language that are co-deployed by users to adapt to web-mediated environments (e.g., the impossibility of perfectly reciprocating gaze, em-/disembodied interaction, staged proxemics, etc); different technical affordances (e.g., laptop vs smartphone); and different communicative purposes and degrees of socio-semiotic and intercultural awareness. She concluded with a critical agenda for research on web-mediated interaction, involving:

  • recognising the different levels (above) and their interplay;
  • encouraging critical awareness of video-specific patterns in syllabus design and teacher training;
  • promoting understanding of what can hinder or facilitate interaction (also in an intercultural light);
  • technical adaptivity vs semiotic awareness.

In their paper, Digital punctuation: Practices, reflexivity and enregisterment in the case of <.>, Jannis Androutsopoulos and Florian Busch referred to David Crystal’s view that in online communication the period has almost become an emoticon, one which is used to show irony or even aggression. They went on to say that the use of punctuation in contemporary online communication goes far beyond the syntactic meanings of traditional punctuation; punctuation and emoticons have become semiotic resources and work as contextualisation cues that index how a communication is to be understood. There is currently widespread media discussion of the use of punctuation, including specifically about the disappearance of the period. They distanced themselves from Crystal’s view of “linguistic free love” and the breaking of rules in the use of punctuation on the internet, suggesting that there are clear patterns emerging.

Reporting on a study of the use of punctuation in WhatsApp conversations by German students, they found relatively low use of the period: it is largely being omitted, and when periods do occur, they generally do so within messages, where they fulfil a syntactic function. They are very rare at the end of messages, where they may fulfil a semiotic function. For example, periods may be used for register switching, indicating a change to a more formal register, or to indicate unwillingness to participate in further conversation. Use of periods by one user may even be commented on by other users, in a case of metapragmatic reflexivity. Interviewees commented that the use of periods at the end of messages is strange and annoying in the context of informal digital writing, especially as the WhatsApp bubbles already indicate the end of messages. One interviewee observed that the use of punctuation in general, and final periods in particular, can express annoyance and make a message appear harsher, signalling the bad mood of the writer. The presenters concluded that digital punctuation offers evidence of the ongoing elaboration of new registers of writing in the early digital age.

In his presentation, The text is reading you: Language teaching in the age of the algorithm, Rodney Jones suggested that we should begin talking to students about digital texts by looking at simple examples like progress bars; as he explained, these do not represent the actual progress of software installation but are underpinned by an algorithm that is designed to be psychologically satisfying, thus revealing the disparity between the performative and the performance.
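The displayed bar, in other words, typically follows a curve chosen to feel right rather than the true completion figure. A minimal sketch of one such mapping (the square-root easing here is an illustrative assumption of my own, not the algorithm Jones described):

```python
def displayed_progress(actual, exponent=0.5):
    """Map true progress (0..1) to a displayed value that races ahead early
    and slows towards the end -- one common 'psychologically satisfying'
    shape. The curve is an illustrative assumption, not a documented
    installer algorithm."""
    if not 0.0 <= actual <= 1.0:
        raise ValueError("progress must be in [0, 1]")
    return actual ** exponent

for a in (0.1, 0.5, 0.9, 1.0):
    print(f"actual {a:.0%} -> shown {displayed_progress(a):.0%}")
```

With this curve, 10% of the real work shows as roughly a third of the bar, while the final stretch crawls: the performance the user sees is decoupled from what the software is actually doing, which is precisely the disparity Jones wants students to notice.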

An interesting way to view algorithms is through the lens of performance. He reported on a study where his students identified and analysed the algorithms they encounter in their daily lives. He highlighted a number of key themes in our beliefs about algorithms:

  • Algorithmic Agency: ‘We sometimes believe the algorithm is like a person’; we may negotiate with the algorithm, changing our behaviour to alter the output of the algorithm
  • Algorithmic Authority (a term by Clay Shirky, who defines it as our tendency to believe algorithms more than people): ‘We sometimes believe that the algorithm is smarter than us’
  • Algorithm as Adversary: ‘We believe the algorithm is something we can cheat or hack’; this is seen in student strategies for altering TurnItIn scores, or in cases where people play off one dating app against another
  • Algorithm as Conversational Resource: ‘We think we can use algorithms to talk to others’; this can be seen for example when people tailor Spotify feeds to impress others and create common conversational interests
  • Algorithm as Audience: ‘We believe that algorithms are watching us’; this is the sense that we are performing for our algorithms, such as when students consider TurnItIn as their primary audience
  • Algorithm as Oracle: ‘We sometimes believe algorithms are magic’; this is seeing algorithms as fortune tellers or as able to reveal hidden truths, involving a kind of magical thinking

The real pleasure we find in algorithms is the sense that they really know us, but there is a lack of critical perspective and an overall capitulation to the logic of the algorithm, which is all about the monetisation of our data. There is no way we can really understand algorithms, but we can think critically about the role they play in our lives. He concluded with a quote from Ben Ratliff, a music critic at The New York Times: “Now the listener’s range of access is vast, and you, the listener, hold the power. But only if you listen better than you are being listened to”.

In her presentation, From hip-hop pedagogies to digital media pedagogies: Thinking about the cultural politics of communication, Ana Deumert discussed the privileging of face-to-face conversation in contemporary culture; a long conversation at a dinner party would be seen as a success, but a long conversation on social media would be seen as harmful, unhealthy, a sign of addiction, or at the very least a waste of time. Similarly, it is popularly believed that spending a whole day reading a book is good; but reading online for a whole day is seen as bad.

She asked what we can learn from critical hip-hop studies, which challenge discourses of school versus non-school learning. She also referred to Freire, who considered that schooling should establish a connection between learning in school and learning in everyday life outside school. New media, she noted, have offered opportunities to minorities, the disabled, and speakers of minority languages. If language is seen as free and creative, then it is possible to break out of current discourse structures. Like hip-hop pedagogies, new media pedagogies allow us to bring new perspectives into the classroom, and to address the tension between institutional and vernacular communicative norms through minoritised linguistic forms and resources. She went on to speak of Kenneth Goldsmith’s course Wasting Time on the Internet at the University of Pennsylvania (which led to Goldsmith’s book on the topic), where he sought to help people think differently about what is happening culturally when we ‘waste’ time online. However, despite Goldsmith’s comments to the contrary, she argued that online practices always have a political dimension. She concluded by suggesting that we need to rethink our ideologies of language and communication; to consider the semiotics and aesthetics of the digital; and to look at the interplay of power, practice and activism online.

Given the current global sociopolitical climate, it was perhaps unsurprising that the conference also featured a very timely strand on superdiversity. The symposium Innovations and challenges in language and superdiversity, chaired by Miguel Pérez-Milans, highlighted the important intersections between language, mobility, technology, and the ‘diversification of diversity’ that characterises increasing areas of contemporary life.

In his presentation, Engaging superdiversity – An empirical examination of its implications for language and identity, Massimiliano Spotti stressed the importance of superdiversity, but indicated that it is not a flawless concept. Since its original use in the UK context, the term has been taken up in many disciplines and used in different ways. Some have argued that it is theoretically empty (but maybe it is conceptually open?); that it is a banal revisitation of complexity theory (but their objects of enquiry differ profoundly); that it is naïve about inequality (but stratification and ethnocentric categories are heavily challenged in much of the superdiversity literature); that it lacks a historical perspective (he agreed with this); that it is neoliberal (the subject it produces is a subject that fits the neoliberal emphasis on lifelong learning); and that it is Eurocentric, racist and essentialist.

He went on to report on research he has been conducting in an asylum centre. Such an asylum seeking centre, he said, is effectively ‘the waiting room of globalisation’. Its guests are mobile people, and often people with a mobile. They may be long-term, short-term, transitory, high-skilled, low-skilled, highly educated, low-educated, and may be on complex trajectories. They are subject to high integration pressure from the institution. They have high insertional power in the marginal economies of society. Their sociolinguistic, ethnic, religious and educational backgrounds are not presupposable.

In his paper, ‘Sociolinguistic superdiversity’: Paradigm in search of explanation, or explanation in search of paradigm?, Stephen May went back to Vertovec’s 2007 work, focusing on the changing nature of migration in the UK; ethnicity was too limiting a focus to capture the differences of migrants, with many other variables needing to be taken into account. Vertovec was probably unaware, May suggested, of the degree of uptake the term ‘superdiversity’ would see across disciplines.

May spoke of his own use of the term ‘multilingual turn’, and referred to Blommaert’s emphasis on three key aspects of superdiversity, namely mobility, complexity and unpredictability. The new emphasis on superdiversity is broadly to be welcomed, he suggested, but there are limitations. He outlined four of these:

  • the unreflexive ethnocentrism of western sociolinguistics and its recent rediscovery of multilingualism as a central focus; this is linked to a ‘presentist’ view of multilingualism, with a lack of historical focus
  • the almost exclusive focus on multilingualism in urban contexts, constituting a kind of ‘metronormativity’ compared to ‘ossified’ rural/indigenous ‘languages’, with the former seen as contemporary and progressive, thus reinforcing the urban/rural divide
  • a privileging of individual linguistic agency over ongoing linguistic ‘hierarchies of prestige’ (Liddicoat, 2013)
  • an ongoing emphasising of parole over langue; this is still a dichotomy, albeit an inverted one, and pays insufficient attention to access to standard language practices; it is not clear how we might harness different repertoires within institutional educational practices

In response to such concerns, Blommaert (2015) has spoken about paradigmatic superdiversity, which allows us not only to focus on contemporary phenomena, but to revisit older data to see it in a new light. There are both epistemological and methodological implications, he went on to say. There is a danger, however, in a new orthodoxy which goes from ignoring multilingualism to fetishising or co-opting it. We also need to attend to our own positionality and the power dynamics involved in who is defining the field. We need to avoid superdiversity becoming a new (northern) hegemony.

In her paper, Superdiversity as reality and ideology, Ryuko Kubota echoed the comments of the previous speakers on human mobility, social complexity, and unpredictability, all of which are linked to linguistic variability. She suggested that superdiversity can be seen both as an embodiment of reality as well as an ideology.

Superdiversity, she said, signifies a multi/plural turn in applied linguistics. Criticisms include the fact that superdiversity is nothing extraordinary; many communities maintain homogeneity; linguistic boundaries may not be dismantled if analysis relies on existing linguistic units and concepts; and it may be a western-based construct with an elitist undertone. As such, superdiversity is an ideological construct. In neoliberal capitalism there is now a pushback against diversity, as seen in nationalism, protectionism and xenophobia. But there is also a complicity of superdiversity with neoliberal multiculturalism, which values diversity, flexibility and fluidity. Neoliberal workers’ experiences may be superdiverse or not so superdiverse; over and against linguistic diversity, there is a demand for English as an international language, courses in English, and monolingual approaches.

One emerging question is: do neoliberal corporate transnational workers engage in multilingual practices or rely solely on English as an international language? In a study of language choice in the workplace with Japanese and Korean transnational workers in manufacturing companies in non-English dominant countries, it was found that nearly all workers exhibited multilingual and multicultural consciousness. There was a valorisation of both English and a language mix in superdiverse contexts, as well as an understanding of the need to deal with different cultural practices. That said, most workers emphasised that overall, English is the most important language for business. Superdiversity may be a site where existing linguistic, cultural and other hierarchies are redefined and reinforced. Superdiversity in corporate settings exhibits contradictory ideas and trends.

In terms of neoliberal ideology, superdiversity, and the educational institution, she mentioned expectations such as the need to produce original research at a sustained pace; to conform to the conventional way of expressing ideas in academic discourse; and to submit to conventional assessment linked to neoliberal accountability. Consequences include a proliferation of trendy terms and publications; and little room for linguistic complexity, flexibility, and unpredictability. She went on to talk about who benefits from discussing superdiversity. Applied linguistics scholars are embedded in unequal relations of power. As theoretical concepts become fetishised, the theory serves mainly the interests of those who employ it, as noted by Anyon (1994). It is necessary for us to critically reflect, she said, on whether the popularity of superdiversity represents yet another example of concept fetishism.

In conclusion, she suggested that superdiversity should not merely be celebrated without taking into consideration historical continuity, socioeconomic inequalities created by global capitalism, and the enduring ideology of linguistic normativism. Research on superdiversity also requires close attention to the sociopolitical trend of increasing xenophobia, racism, and assimilationism. Ethically committed scholars, she said, must recognise the ideological nature of trendy concepts such as superdiversity, and explore ways in which sociolinguistic inquiries can actually help narrow racial, linguistic, economic and cultural gaps.

Rio de Janeiro viewed from Pão de Açúcar

Rio de Janeiro viewed from Pão de Açúcar. Photo by Mark Pegrum, 2017. May be reused under CC BY 4.0 licence.

AILA 2017 wrapped up after a long and intensive week, with conversations to be continued online and offline until, three years from now, AILA 2020 takes place in Groningen in the Netherlands.

Mapping out the future of VR and AR

Mobile World Congress
Shanghai, China
30 June – 1 July, 2017

The Yu Garden with the Shanghai Tower behind

The Yu Garden (豫园) with the Shanghai Tower (上海中心大厦) behind. Photo by Mark Pegrum, 2017. May be reused under CC BY 4.0 licence.

After flying up from Guilin on 29 June, I managed to catch the last two days of the Mobile World Congress in Shanghai. An enormous event that brought together technologists, marketers and investors, and showcased new technologies from phones to drones and robots to cars, it also hosted a series of summits on specific themes. I spent Friday 30 June at the VR and AR Summit, where industry speakers offered their perspectives on the latest developments and the current challenges facing VR and AR.

In his presentation, What is the future of VR & AR?, Christopher Tam (from Leap Motion) argued that there are five key elements of VR and AR, namely immersion, imagination, availability, portability and interaction. Before the advent of VR/AR, it was as if our computing platforms only allowed us to peek at the possibilities through a tiny keyhole, but now we can open the door into a utopian world, he said.

Immersion needs high quality graphics and rapid refresh rates; imagination needs good content; but interaction is hard to measure. One way of measuring it is by considering human-machine interaction bandwidth. This is a fundamental factor in unlocking the mainstream adoption of VR/AR and, while a lot of progress has been made on the other elements, it remains a bottleneck which the industry is currently focused on addressing. The leap from 1D to 2D computing required the invention of the mouse to accompany the keyboard. A mouse works for 2D because it allows one-to-one mapping; however, it is not sufficient in a 3D world, where we need to do more than moving, selecting, pointing or clicking. Interaction in a 3D world should instead be inspired by the way we interact with the real world: we should use the model of ‘bare hands’ interaction, given that this is our primary way of engaging with our surroundings. It is natural, universal, unencumbered, and accessible. Hand tracking, he suggested, brings the advantages of VR/AR to life in almost all verticals:

  • In education, children can study in a hands-on style, with more fun and better retention; this is how children learn in the real world.
  • In training, people can practise handling complex situations in hands-on ways.
  • In commerce, consumers can enjoy the digital world and be impressed at the first try.
  • In healthcare, we can enable diagnosis, physical therapies and rehabilitation, lowering the barrier between healthcare givers and their patients.
  • In art and design, we can express ourselves by creating in 3D with no restraints.
  • In social relations, we can hang out and interact with friends.
  • In entertainment, there will be easier, more intuitive control and deeper immersion; users can become the protagonists in the stories we are telling, not just operating a character but becoming that person.

He concluded by demonstrating Leap Motion’s hand tracking technology.

In his presentation, The future of virtual reality in China, James Fong (from Jaunt China) suggested that VR is the next stage in a long human quest to experience and interact with captured and created realities; this stretches from cave art through painting, photography, gramophones, motion pictures, television and 3D films to AR and VR. He suggested that there is no need to separate VR and AR as they will merge soon. He briefly pointed out some questions of looming importance: we want Star Trek’s Holodeck or the Matrix experience, but we need to ask how this affects our humanity. Will we become isolated from each other? Will we still appreciate human connections? Will we not want to leave the perfect VR/AR world?

In VR/AR storytelling, we can be part of a scripted narrative or take our own pathway through a free-form construct; engage in first-person participation or third-person observation; venture alone or interact with any number of participants; and focus on private enjoyment or share experiences with family, friends and the world. It will, however, take a long time for high quality and compelling content to arrive, in part because VR will disrupt every element of content creation. We are used to third-person stories and it will take time to get used to first-person stories. We haven’t yet developed the creative language for working with VR. However, all of the major companies that run operating systems are moving to support VR natively, and this will usher in major developments.

He wrapped up by looking at the Chinese market, where there is no Google, Facebook, Amazon or Twitter, and where the market is dominated by local players like Baidu, WeChat, Weibo, iQiyi, Youku, Tencent, Alipay and WeChat Pay. As a result, many international products don’t work in China. Some challenges in China are the same as in the rest of the world (e.g., poor headset viewing experiences; a market still experimenting with live and 360 content) and some are different (VR experience centres/cafés in China keep interest high; content quality has not improved due to a lack of financing; and the market for cameras and higher quality headsets is starting to pick up). He predicted that China could be the largest VR market in the world by 2018.

The slogan of the 2017 Mobile World Congress, Shanghai. Photo by Mark Pegrum, 2017. May be reused under CC BY 4.0 licence.

In a panel discussion moderated by Sam Rosen (ABI Research), with panel members Alvin Wang (Vive), James Fong (Jaunt China) and Christopher Tam (Leap Motion), it was suggested that 5G will make a big difference to VR/AR adoption because if processing is done online at high speed, we will be able to use much less bulky headsets with less drain on batteries. Alvin Wang mentioned that it will soon be possible to wear headsets that incorporate facial recognition and emotion recognition based on microgestures, allowing interviewers to sense whether an interviewee is nervous or lying, or teachers to sense whether a student understands. He claimed that one of the scarcest commodities in the world is good teachers, but AI technology can give everyone personalised access to the best teachers. He mentioned a project to put 360 cameras in MIT classes so that anyone in the world can join classes taught by high-profile professors. James Fong talked about the power of VR to give people a sense of real-world events; he gave the example of being able to place viewers in the context of refugees arriving in another country, seeing the scale of the phenomenon, maybe being able to touch the boat the refugees arrived on, and thereby building more empathy than is possible with traditional news reports on TV.

In his presentation, The next big test for HMDs: Is the industry prepared?, Tim Droz (from SoftKinetic) said the aim of VR and AR is to take you somewhere other than your current location. There are two types of interaction which are theoretically possible in VR and AR environments: inbound interaction through sight, hearing, smell, taste and haptics; and outbound interaction through the mind, gaze, facial expression, voice, touch (pushing, knocking, grabbing, etc.), gesture, body expression, and locomotion. At the moment only a few of these are available, but as more are built into our equipment, it will become more bulky and unwieldy. However, for mass adoption, a lighter and more seamless experience is needed. He demonstrated some SoftKinetic hardware (like the time-of-flight sensor) and software (like human tracking and full body tracking software) which will contribute to interaction through hand movements. This greatly strengthens users’ sense of presence.
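
The time-of-flight principle behind such depth sensors is simple arithmetic: emitted light travels to the scene and back, so depth is the speed of light times half the round-trip time. A minimal sketch (the example timing value is illustrative, not a SoftKinetic specification):

```python
C = 299_792_458  # speed of light in a vacuum, m/s

def tof_depth_m(round_trip_ns):
    """Depth from a time-of-flight measurement: the light travels out
    and back, so the one-way distance is half the round trip."""
    return C * (round_trip_ns * 1e-9) / 2

# A round trip of ~6.67 nanoseconds corresponds to roughly 1 metre of depth.
print(round(tof_depth_m(6.67), 3))  # -> 1.0
```

The tiny timescales involved (nanoseconds per metre) are why time-of-flight sensing requires specialised hardware rather than ordinary camera electronics.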

In his presentation, 360° and VR User Generated Content – Millions of 360° cameras and smartphones in 2017!, Patrice Roulet (from ImmerVision) suggested that it will soon become normal for everyday smartphones to be used to record and share 360 content, in such a way that it captures your entire environment and the entire moment. It will only take two clicks to share such content on social media. To capture this content, it’s necessary to have a very good lens (such as ImmerVision’s panomorph lens which provides a high quality image across the whole field of view, can be miniaturised for mobile devices, and allows multi-platform sharing and viewing), and advanced 360 image processing. The panomorph lens can be used for much more than capturing 360 images; the internet of things (IoT) is about to evolve from connected devices to smart devices, and this technology has the potential to play a role as part of artificial intelligence (AI) in the upcoming ‘Cambrian explosion’ of the IoT.
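
The panomorph projection itself is proprietary, but the general problem such lenses and their image processing solve can be sketched with the standard equirectangular mapping used for most shared 360° content: a viewing direction is mapped to pixel coordinates in a rectangular panorama (the resolution and angle conventions below are illustrative assumptions):

```python
def equirect_pixel(yaw_deg, pitch_deg, width, height):
    """Map a viewing direction (yaw 0-360, pitch -90..90 in degrees)
    to (x, y) pixel coordinates in an equirectangular 360 image."""
    x = (yaw_deg % 360.0) / 360.0 * width    # longitude spans the full width
    y = (90.0 - pitch_deg) / 180.0 * height  # latitude spans the full height
    return int(x), int(y)

# Looking straight ahead (yaw 180, pitch 0) lands at the centre of a 4096x2048 frame.
print(equirect_pixel(180.0, 0.0, 4096, 2048))  # -> (2048, 1024)
```

Mappings like this are what viewers apply in reverse when they render a flat 360° file as an immersive sphere around the user.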

In his presentation, VR content: Where do we go next?, Andrew Douthwaite (from WEARVR) stated that one key question is what comes first: adoption of hardware or high quality content; it’s something of a chicken and egg situation. He showed an example of a rollercoaster VR experience on a headset linked to a desktop computer; he noted that many people initially experience some nausea due to the sensory conflict that arises from, for example, sitting still while immersed in a moving VR experience. The emergence of mobile VR is now bringing VR experiences to a much wider audience; Google Cardboard is currently the most widespread example. There is a lot of 360 content on YouTube, and games like Raw Data are helping to drive the industry forward. Google Earth VR is another great example and will help VR reach the mass market, and could impact travel and tourism. New software is now making it possible for users to create VR characters and then inhabit their bodies and act as those characters.

Important future developments are wireless and comfortable VR headsets and more natural input mechanisms, including hand presence. One problem is that much 360 video content is currently of low quality; there is no point in having high quality headsets unless there is also high quality content available. The future of content, he said, lies in storytelling and narrative-based content; social interaction; healthcare; property; training; education; tourism; therapy and mental health (e.g., mindfulness and meditation); serialised content; lifestyle and productivity (though this might be more AR); and WebVR (an open standard which is a kind of metaverse, allowing you to have VR experiences in your web browser).

In his presentation, VR marketing, Philip Pelucha (from 3D Redshift) suggested that the next generation of commerce will not be browser-based; he gave the example of a 360 video of a product leading to a pop-up store allowing customers to further engage with the product. Noting that we already have online universities, he asked how long it will be before virtual reality universities appear. He mentioned that soon we won’t have to commute to work because our phones and laptops will turn the world into our virtual office. In fact, he said, this is already beginning to happen, and when today’s children grow up, they won’t understand why you would have to go to an office to work, or to a shop to buy something. He also identified language education as one major area of current development; a VR/AR app for immersive learning, or to support you when travelling, could be extremely helpful.

In his presentation, Bring the immerse experience to entertainment, movie and live event, Francis Lam (from Isobar China) showcased innovative examples of 360 videos. He showed the B(V)RAIN headset that combines VR with neural sensors; as your emotions change, what you see changes. In effect, the hardware allows you to visualise your mental state, and this can influence, for example, the targets you face in a shooter game or the taste combinations in drinks that are recommended to you.

He concluded with some issues for consideration. Bad VR, he pointed out, can make you feel sick, so it needs to be high quality and low latency. VR is not just about watching, but rather about experiencing; it is about how, from a first-person point of view, you can go into a scene and experience it. VR is not just visual; audio is important, and there can also be other sensory channels and tactile feedback. We should also ask to what extent VR can be a shared experience, where someone wearing a headset can interact with others who are not wearing one. VR is good for communication, a point which is well understood by Facebook; for example, with VR you can make eye contact in a way that is not possible in video chat. VR can allow us to explore new possibilities, such as experimenting with genders. In fact, VR hasn’t arrived yet; there is much more development to come. Finally, he stated, VR is really not content, it is a medium.
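
The low-latency requirement can be made concrete with simple arithmetic: a headset’s refresh rate dictates the time budget within which each frame must be rendered and displayed, or the visual lag starts to contribute to the nausea described above. A quick sketch (the refresh rates listed are common values, not figures from the talk):

```python
def frame_budget_ms(refresh_hz):
    """Per-frame rendering budget, in milliseconds, implied by a refresh rate."""
    return 1000.0 / refresh_hz

# Typical display refresh rates and the budget each frame must fit into.
for hz in (60, 90, 120):
    print(f"{hz} Hz -> {frame_budget_ms(hz):.2f} ms per frame")
```

At 90 Hz, for example, the whole pipeline from head movement to displayed pixels has roughly 11 ms per frame, which is why VR rendering is so much more demanding than conventional screen graphics.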

China Mobile slogan, 2017 Mobile World Congress, Shanghai

China Mobile display, 2017 Mobile World Congress, Shanghai. Photo by Mark Pegrum, 2017. May be reused under CC BY 4.0 licence.

There is no doubt that industry perspectives on new technologies differ in some ways from those usually heard at academic and educational conferences, but it is important that there is an awareness, and an exchange, of differing views between technologists and educators. After all, we face many of the same challenges, and we stand to gain from collaboratively developing solutions that will work in the educational and other spheres.
