Friday, 21 April 2017

Novel initiative Teach Out: Fake news detecting #criticalthinking #mooc

If you have just a bit of time this week, and you are interested in new ways of online teaching as well as critical thinking... this is a fabulous initiative. The “Fake news, facts and alternative facts” MOOC is part of a Teach-Out course (a brief yet meaningful just-in-time learning initiative focused on a hot topic).

Course starts on 21 April 2017 (today)
Course given by the University of Michigan, USA

This is not just a MOOC; actually, it being a MOOC is the boring part. What is really interesting is the philosophy behind the Teach-Out, and the history behind Teach-Out events. This feels a bit more like activist-driven teaching, admittedly here from a renowned institute.



Brief course description
Learn how to distinguish between credible news sources and identify information biases to become a critical consumer of information.
How can you distinguish between credible information and “fake news?” Reliable information is at the heart of what makes an effective democracy, yet many people find it harder to differentiate trustworthy journalism from propaganda. Increasingly, inaccurate information is shared on social networks and amplified by a growing number of explicitly partisan news outlets. This Teach-Out will examine the processes that generate both accurate and inaccurate news stories and the factors that lead people to believe those stories. 

Participants will gain skills that help them distinguish fact from fiction.

This course is part of a Teach-Out, which is:
·        an event – it takes place over a fixed, short period of time
·        an opportunity – it is open for free participation to everyone around the world
·        a community – it will be joined by a large number of diverse individuals
·        a conversation – an opportunity to give and take ideas and information from people

The University of Michigan Teach-Out Series provides just-in-time community learning events for participants around the world to come together in conversation with the U-M campus community, including faculty experts. The U-M Teach-Out Series is part of our deep commitment to engage the public in exploring and understanding the problems, events, and phenomena most important to society.

Teach-Outs are short learning experiences, each focused on a specific current issue. Attendees will come together over a few days not only to learn about a subject or event but also to gain skills. Teach-Outs are open to the world and are designed to bring together individuals with wide-ranging perspectives in respectful and deep conversation. These events are an opportunity for diverse learners and a multitude of experts to come together to ask questions of one another and explore new solutions to the pressing concerns of our global community. Come, join the conversation!

(Picture: http://maui.hawaii.edu/hooulu/2017/01/07/the-real-consequences-of-fake-news/ )

Companies should attract more Instructional Designers for training #InstructionalDesign #elearning

Online learning is increasingly pushing university learning and professional training in new directions. This means common ground must be established on what online learning is, which approaches are considered best practices, and which factors need to be taken into account to ensure a positive company-wide uptake of the training. Although online learning has been around for decades, building steadily on previous evidence-based best practices, it is still quite a challenge to organize online learning across multiple partners, let alone across cultures (in the wide variety of definitions that culture can have).

Earlier this month Lionbridge came out with a white paper entitled “steps for globalizing your eLearning program”. It is a 22-page free eBook, and a way for them to get your contact data. The report is more corporate than academically inclined (the subtitle is ‘save time, money and get better results’), and offers an inside look at how companies see global elearning and which steps to take first. But when reading the report - which does provide useful points - I do feel that corporate learning needs to accept that instructional design expertise is necessary (the experts! the people!) and needs to be attracted by the company, just like top salespeople, marketing, HR … for it is a real profession and it demands more than the capacity to record a movie and put it on YouTube!

In their first step they mention: Creating a globalizing plan
  • Create business criteria
  • Decide on content types
  • Get cultural input
  • Choose an adaptation approach

The report sets global-ready content as a baseline: this section mentions content that is culturally neutral. Personally, I do not believe cultural neutrality is possible, therefore I would suggest using a balanced cultural mix, e.g. mixing cultural depictions or languages, even Englishes (admitting there is more than one type of English and they are all good). But on the bonus side, the report also stresses the importance of using culturally native instructional design (yes!), which I think can be learner-driven content to allow local context to come into the global learning approach. Admittedly, this might result in more time or more cost (depending on who provides that local content), but it also brings the subject matter closer to the learner, which means it brings it closer to the Zone of Proximal Development (Vygotsky), enables the learner to create personal learning Flow (Csikszentmihalyi), or simply allows the learner to think ‘this is something of interest to me, and I can learn this easily’.

In the following step: Plan ahead for globalisation
  • Legal issues: looking at IPR or the actual learning that can be produced. 
  • Technology and infrastructure: infrastructure differs. 
  • Assessment and feedback mechanisms: (yes!) Feedback, very important for all involved
  • Selecting a globalizing partner

The report is brief, so not too much detail is given on what is meant by the different sections. What I did miss here was the addition of peers for providing feedback, or peer actions to create assessments that are actually contextualized and open to cultural approaches. There is no mention of the instructional design experts in this section either.
In the third section a quick overview is given of what to take into account while creating global elearning content. Again the focus is on elements and tools (using non-offensive graphics, avoiding culturally heavy analogies, neutral graphics…), not on the actual instruction. Covering instruction would admittedly take up more than 22 pages, but the instructional approach is to me the source of learning possibilities.

Promoting diverse pedagogy
The final part of the report looks at the team you need, but … still no mention of the instructional design expert (okay, it is a fairly new title, but still!). And no mention of the diversity in pedagogy that could support cultural learning (not every culture is in favor of Socratic approaches, and not every cultural group likes classic lecturing).

Attract instructional designers
While the report makes some brief points of interest, I do feel that it lacks what most reports on training lack: they seem to forget that online instruction is a real job, a real profession with real skills, and one that takes years to become good at, just like any STEM or business-oriented job. This does indicate that corporations are acknowledging an interest in online training (and possible profit), but … they still think that it can be built easily and does not require specific expertise.
There is no way around it: if you want quality, you need to attract and use experts. If you want to build high quality online training that will be followed and absorbed by the learner, interactions, knowledge enhancement, neurobiological effects… all of this will matter and needs to be taken into account (or at least one needs to be aware of it).
Now more than ever, you cannot simply ‘produce a video’ and hope people will come. There are too many videos out there, and a video is a media document, not necessarily a learning element. Learning is about thinking about the outcome you want to achieve, and then working backwards, breaking the learning process down into meaningful steps. Why do you use a video? Why do you use an MCQ? Does this really result in learning, or simply in checking boxes and consuming visual media?

Building common ground as a first global elearning step
Somehow I feel that the first step should include overall acceptance of a cooperatively built basis:
What are our quality indicators? (media quality, content quality, reusability, entrepreneurial effect of the learning elements, addressing global diversity in depicting actors (visual and audio), …)

Which online learning basics does everyone in the company (and involved in training) need to know? Sharing just-in-time learning (e.g. when encountering a new challenge: take notes of the challenge and the solution), sharing best practices on the job (ideal for mobile options), flipped lectures for training moments (e.g. a case study before training hours, role play during workshops…), best practices for audio recordings … these learning basics can be so many things, depending on the training that needs to be created, but it needs to be set up collaboratively. If stakeholders feel they will benefit from the training, and they are involved in setting up some ground rules and best practices, they will be engaged. It all comes down to: which type of learning is needed, what does this mean in terms of the pedagogical options available and known, and what do the learners need and use.

Tuesday, 21 February 2017

2 Free & useful #TELearning in Higher Ed reports #elearning #education

These two reports give a status of TELearning in 2016: one analysing Technology Enhanced Learning for Higher Education in the UK (233 pages, with appendices starting at page 78) and one presenting case studies of Technology Enhanced Learning (48 pages, with nice examples). I give a brief summary below.

The reports were produced by UCISA (an Oxford University-based network) representing many major UK universities and higher education colleges, and it states that it has a growing membership among further education colleges, other educational institutions and commercial organisations interested in information systems and technology in UK education.

The used definition of TELearning is: "Any online facility or system that directly supports learning and teaching. This may include a formal VLE (virtual learning environment), e-assessment or e-portfolio software, or lecture capture system, mobile app or collaborative tool that supports student learning. This includes any system that has been developed in-house, as well as commercial or open source tools."

Both reports provide an interesting (though UK-oriented) read. Here is a short overview of what you can find in them:

For the report focusing on TELearning for HE in the UK (based on the TELearning survey), I have put the main conclusions next to the main chapters:

Top 5 challenges facing institutions: staff development (the most commonly cited challenge), electronic management of assessment, lecture capture/recording (which continues to move up), technical infrastructure, and legal/policy issues.

Factors encouraging the development of TELearning: enhancing the quality of learning and teaching, meeting student expectations and improving student satisfaction are the most common drivers for institutional TEL provision. The availability of TEL support staff, feedback from students, availability of and access to tools, and school/departmental senior management support also encourage the development of TEL. In terms of barriers to TELearning: lack of time remains a key barrier, culture continues to be a key barrier as well (both departmental/school culture and institutional culture), along with a lack of internal funding to support development.

Strategic questions to ask when considering or implementing TELearning: Teaching, Learning and Assessment strategies are consolidating, the Student learning experience/student engagement strategy is on the rise, alongside corporate strategy and Library and Learning Resources strategies.

TELearning currently in use: the main institutional VLEs remain Blackboard and Moodle.
Moodle remains the most commonly used platform across the sector, but alternative systems such as Canvas by Instructure are rising, along with new platforms, e.g. Joule by Moodlerooms. SharePoint has rapidly declined. There is an increase in the number of institutions using open learning platforms such as FutureLearn and Blackboard’s Open Education system. On evaluation activity in reviewing VLE provision: many institutions conducted reviews over the last two years. TEL services such as lecture capture are the second most commonly reviewed service over the last two years.

Support for TELearning tools: e-submission tools are the most common centrally supported software, ahead of text matching tools such as Turnitin, SafeAssign and Urkund. Formative and summative e-assessment tools both feature in the Top 5, along with asynchronous communication tools. There is adoption of document sharing tools across the sector and a steady rise in the use of lecture capture tools. Podcasting tools continue to decline in popularity, and the new response items, electronic exams and learning analytics, appear not to be well established at all as institutional services, with only a handful of institutions currently supporting services in these areas.
Social networking, document sharing and blog tools are the most common non-centrally supported tools. TEL tools are being used to support module delivery. Blended learning delivery based on the provision of supplementary learning resources remains the most common use of TEL. Only a small number of institutions actually require students to engage in active learning online across all of their programmes of study. There is increasing institutional engagement in the delivery of fully online courses, with over half of 2016 respondents now involved. There is growing adoption of MOOC platforms by institutions, but less than half of respondents are pursuing open course delivery.
There is little change in the range of online services that higher education institutions are optimising for access by mobile devices. Access to course announcements, email services, and course materials and learning resources remain the three leading services optimised for mobile devices. Library services are also being optimised, and lecture recordings are being optimised at the same level as in 2014. The most common ways in which institutions are promoting the use of mobile devices are through the establishment of a bring your own device (BYOD) policy and by loaning out devices to staff and students. Funding for mobile learning projects has reduced in scale.
Outsourcing of institutional services is growing: student email, e-Portfolio systems, VLEs and staff email. The type of outsourcing model depends on the platform being outsourced: a Software as a Service (SaaS) cloud-based model for email services, and an institutionally managed, externally hosted model for TEL-related tools, such as e-Portfolios and the VLE for blended and fully online courses.
National conferences/seminars and internal staff development all remain key development activities. There is an increase in the promotion of accreditation activities, in particular for HEA and CMALT accreditation.
Electronic Management of Assessment (EMA) is making the most demands on TEL support teams, with lecture capture and mobile technologies close behind. The demand from learning analytics and from distance learning/fully online courses continues to increase. A new entry which might be expected to make more demands in the future is accessibility; in particular, demands made by changes to the Disabled Students’ Allowance in the English higher education sector.

A number of appendices: full data, a longitudinal analysis of TELearning over the past years (going back to 2001), and the questions that were used for the longitudinal analysis.

The report focusing on the case studies from TELearning:
These case studies are a companion to the report mentioned above. The idea is that the case studies make it possible to probe themes in the data and shed light on TEL trends through the eyes of representative institutions, offering context to the findings of the overall report.
In each of the case studies, the institutions provide answers on the following TELearning sections: the TELearning strategy used, TEL drivers, TEL provision, TEL governance and structures, TEL-specific policies, Competition and Markets Authority (CMA) strategy, the Teaching Excellence Framework, Distance Learning and Open Learning, and future challenges. The diversity of the institutes interviewed gives a good perspective on the TEL landscape within Higher Education in the UK.

Friday, 17 February 2017

Recognising Fake news, the need for media literacy #digitalliteracy #literacy #education

I was working on a blogpost on books focusing on EdTech people (the women, the tasks…), but then I opened up YouTube and I saw that President Trump had held his first solo press conference.

I guess we can all benefit from Mike Caulfield's ebook (127 pages) on web literacy for students (online version, or here for other versions including pdf), a fabulous book with lots of links and useful actions to become (more) web literate (thank you Stephen Downes for bringing it to my attention).

After watching it, I thought there was a clear need (for me as an avid supporter of education) to refer to initiatives on the topic of real and fake news, because honestly I do not mind if someone calls something fake or real, as long as that statement is followed by clear arguments describing what you think is fake about it, and why. Before doing that, I want to share the reason for this shift in attention.

I love America, for several reasons: where Europe stays divided, the United States has managed to get its states to work together, while leaving enough federal freedom to adapt specific topics according to individual states’ beliefs. I have worked with and honestly like to work with Americans (of all backgrounds) and American organisations; truly, I am in complete awe of the Bill of Rights, and the way the constitution secures freedom for all. I know that a goal such as ‘freedom for all’ is difficult to attain, but at least it is an openly set vision, put on paper. I mean, I truly respect such a strong incentive to promote freedom for all citizens within a legal framework, and the will to achieve that freedom. And due to this love for the United States, I felt that Trump was okay. In democratic freedom, the outcome might not be to anyone’s liking, but … history has shown that democratic freedom can swing in a lot of ways and that this diversity nurtures new ideas and insights along the way.

However, while watching the press conference I got more and more surprised by what was said and how: there were clear discriminatory references, which I do not think befit a President of all the American people. But okay, to each his own, and rhetorical styles can differ (wow, can they differ). Yet the ongoing remarks and references to Fake News, which kept coming up as an excuse and were used as a non-sequitur at any point during the press conference, just got to me. Manipulation has many faces, and only education can help build critical minds that will be able to judge for themselves, and as such be able to distinguish real from fake news. To me, even if you say ‘this is fake news’, I want to hear just exactly what you mean: which part of what news is fake, and why. Enlighten me, would be the general idea.

Fake news and believing it: status
A Stanford study released in November 2016 concluded that 82% of middle-schoolers couldn’t distinguish between an ad labeled “sponsored content” and a real news story on a website. This seems to indicate that somewhere we are not addressing media or digital literacy very well. On the reasons why this lack of media literacy is occurring, I like the viewpoint of Crystle Martin, who looks at misinformation and warcraft in this article, saying:
Teaching information literacy, the process of determining the quality and source of information, has been an emphasis of the American Association of School Librarians for decades. However, teaching of information literacy in school has declined as the number of librarians in schools has declined.
Luckily, there are some opinions and initiatives on distinguishing between fake and real news. Danah Boyd had another look at the history of media literacy, focusing on the cultural contexts of information consumption that were created over the last 30 years. Danah shared her conclusions in a blogpost on 17 January 2017, entitled 'Did media literacy backfire?' She concluded that media literacy had backfired, in part because it was built upon assumptions (e.g. only media X, Y and Z deliver real news) which often do not relate to the thinking of groups of people that prefer other news sites A, B and C.

Danah describes it very well:
Think about how this might play out in communities where the “liberal media” is viewed with disdain as an untrustworthy source of information…or in those where science is seen as contradicting the knowledge of religious people…or where degrees are viewed as a weapon of the elite to justify oppression of working people. Needless to say, not everyone agrees on what makes a trusted source.
The cultural and ethical logic each of us has is instilled in us from a very early age. This also means we look upon specific thinking as being ‘right’ or ‘wrong’. And to be honest, I do not feel this cultural/ethical mindset will keep all of us from being able to become truly media literate, as long as we talk to people across the board. As long as colliding thoughts fuel a dialogue, we will learn from each other and be able to understand each other in better ways (yes, I am one of those people who think that dialogue helps learning, and results in increased understanding, thank you Socrates).
If this is the case, then we need to do a better job of improving media literacy, including listening to people with other opinions and how they see things. It is a bit like the old days, when the people from the neighborhood went to the pub, the barbershop, or any get-together where people with different opinions meet, yet feel appreciated even during heated debates.

Maha Bali, in her blogpost “Fake news, not your main problem”, touches on the difficulty of understanding all levels of the reports provided in the news and other media. Sometimes it does demand intellectual background (take the Guardian: I often have to look up definitions, historical fragments etc. to understand a full article; it is tough on time and tough to get through, but … sometimes I think it is worth the effort). Maha Bali is a prolific and very knowledgeable researcher/educator. She touches on the philosophical implications of ‘post-truth’, and if you are interested, her thesis on critical thinking (which she refers to in her blogpost) will probably be a wonderful read (too difficult for me). So, both Maha and Danah refer to the personal being not only political, but also coloring each of our personal critical media literacies.

If media literacy depends on personally developing skills to distinguish fake (with some truth in it) from real (with some lies in it), I gladly refer to some guidelines provided by Stephen Downes, as they are personal. One of the statements I think is pivotal to distinguishing between fake and real news is understanding that truth is not limited to one or more media papers/sites/organisations; it is about analysing one bit of news at a time. It is not the organisation that is authoritative at all times; it is the single news item that is true, or at least as real as it can get. So, here is a list of actions put forward by Stephen Downes on detecting fake news: trust no one; look for the direct evidence (verification, confirmation, replication, falsification); avoid error (with major sources of error being: prediction, relevance, precision, perspective); take names (based on trust, evidence and errors); and as a final rule he suggests diversifying your sources (which I really believe in - the pub analogy).

Another personal take on detecting fake news comes from Tim O'Reilly, who describes a personal story, and while doing so sheds some light on how an algorithm might be involved.

Thinking about algorithms, you can also turn to some fake news detectors:

The BS Detector: a fabulous extension for the Mozilla browser. It looks at extreme bias, conspiracy theory, junk science, hate groups, clickbait, rumor mills… http://bsdetector.tech/

Snopes: started out as a website focused on debunking urban legends, and turned into an amazing fact-checking website (amazing because you can follow the process of how they look at a specific item and then decide whether it is fake). ( http://www.snopes.com/ )

And finally, for those who like to become practical asap: a lesson plan on fake news provided by KQED http://ww2.kqed.org/lowdown/wp-content/uploads/sites/26/2016/12/Fake-news-lesson-plan.pdf
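At their core, domain-based tools like the BS Detector mentioned above check the domain of a page against a curated list of flagged sites. A minimal sketch of that idea in Python (the domains and labels below are invented purely for illustration; a real extension ships a maintained list with far more entries and categories):

```python
from urllib.parse import urlparse

# Hypothetical blacklist for illustration only: maps a domain to the
# category of concern, in the style of BS Detector's categories.
FLAGGED_DOMAINS = {
    "example-fakenews.com": "fake news",
    "example-clickbait.net": "clickbait",
    "example-conspiracy.org": "conspiracy theory",
}

def flag_source(url):
    """Return a warning label if the URL's domain is on the list, else None."""
    domain = urlparse(url).netloc.lower()
    # Strip a leading "www." so www.example.com matches example.com.
    if domain.startswith("www."):
        domain = domain[4:]
    return FLAGGED_DOMAINS.get(domain)

print(flag_source("https://www.example-clickbait.net/10-shocking-facts"))  # -> clickbait
print(flag_source("https://www.bbc.co.uk/news"))  # -> None
```

Note how crude this is: it judges the source, not the individual news item, which is exactly the limitation Stephen Downes' guidelines warn about. It can only ever be a first filter, never a substitute for checking the evidence behind a single story.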

In my view, the increased acceptance of the idea of fake news is related to the increased divide within society. So, in a way I agree with Danah Boyd: we read and agree with specific people and news sources, and so we filter our sources down to those people and media. Seldom do we read up on sources from media we do not agree with, or people we disagree with. It used to be different, when specific topics were discussed within our community, with a mix of ideas and preferences.
So maybe media literacy could be done at a community level, where everyone gets together and shares their opinion on certain topics. We recreate the local pub or café, where everyone meets and gets into arguments about what they believe (or not). Media literacy – to me – is about embracing diversity of opinion, listening, seeing the arguments from the other side and … making up your own mind again.

So, coming back to President Trump’s references to fake news. In terms of increasing media literacy, I do not have a problem with referring to something as fake news; I do have a problem with that claim not being explained: what is fake about it? Why? And again, by saying that, I mean a real explanation, not simply repeating ‘this is fake news. It is. I tell you it is’ (feel free to imagine the tone of voice that such a sentence might be delivered in). Now give me the facts, because I do want to know why you or anyone else is labeling something as true or false.

Wednesday, 15 February 2017

How do Instructional Designers support and add to teacher knowledge

As online learning becomes better known, the quality of the delivered online materials becomes more essential, as learners can (partly) decide which courses they will follow based on the quality of the course material. One of the challenges is to give teachers and trainers an idea of how instructional designers can help (IDs are schooled in online learning options) and what instructional designers can bring to the interdisciplinary learning/teaching team (a broader online and blended learning knowledge, specifically aimed at online or blended interactions, relying on specific theoretical frameworks that facilitate practical implementations). So, when EIT InnoEnergy asked me to provide an overview of why Instructional Designers are an important Human Resource profile for ensuring high-quality online or digital learning material, I put together this brief presentation. The slides are text-rich so course partners (SELECT) can have another look after the presentation, and an ongoing conversation with local Instructional Designers might be started.

In the meantime I am continuing the inspiring work on the Instructional Design Variation Matrix (a practical guide for Instructional Designers, a bit of an extended job aid).

(picture: deeply thinking teachers from KTH Sweden, Polito Italy, UPC Spain, IST Portugal, Aalto Uni Finland listening to online learning experiences at InnoEnergy SELECT kick-off meeting)



Friday, 20 January 2017

Tips for a PhD defense or viva #phd

It is with quite some pleasure that I was awarded the PhD in Educational Technologies last week.

The UK version of a PhD defense is called a viva, which resembles a closed oral examination (open book) with one external examiner (connected to a university other than the one you are at) and an internal examiner (affiliated to your own university, but with whom neither you nor your supervisors have co-authored a paper - so not closely professionally related). In addition to that, you have one observer (normally one of your supervisors; she or he takes notes on what is said, and possible recommendations) and a chair (Doug Clow, who explains all the details of the viva and sees to it that everyone stays hydrated and in an objective state of mind). In my case the external examiner was Neil Morris (Leeds University), and the internal examiner was Allison Littlejohn (The Open University, UK). The external examiner usually leads the questioning, which was also the case in my viva. By the way, the central question of my PhD thesis was 'what characterizes the informal, self-directed learning of experienced adult, online learners engaged in individual or social learning using any device to follow a FutureLearn MOOC'. It resulted in a conceptual framework for informal self-directed learning, using a method that allowed the voices (experiences) of the learners to come through, as such providing a theory from the ground up (in most cases a framework starts from theory, providing a top-down dynamic to come to the conclusions). A draft version of the thesis can be read here. The picture shows my two supervisors (Mike Sharples and Agnes Kukulska-Hulme), Rebecca Ferguson (who was kind enough to be my main examiner during the mock viva) and my wonderful colleagues Janesh Sanzgiri, Jenna Mittelmeier and Garron Hilaire.

The questions started off mildly (with a fair question aimed at making you feel comfortable, along the lines of: briefly describe your research; why were you interested in the topic you investigated). From there the questions tend to become more complex and demand more in-depth answers. Normally the questions will start at the beginning of your thesis, and consist of overall questions (e.g. how did you select your literature) as well as very detailed ones (why did you select only that fragment here) on points the examiners found either of interest, confusing, or lacking. This means you really need to understand why you did what you did, throughout your thesis.
These are some of the questions I got, with some additional information:  
  • How do your research questions follow from your literature review? During preparation I linked all of my research questions to the most influential paper I mentioned in my literature review. This is also handy for other literature related questions, as you memorise core papers and their subsequent authors.
  • Which element of your findings gave rise to the most poignant discussion; and can you list the main authors for that discussion reflecting on that part of your findings? Why did you limit yourself to these authors for the discussion on that part of X findings? I can tell you, this was a tough question. It means you relate the literature of your literature review and use some of those papers to fuel the scientific discussions on your findings taking into account what the literature already pointed to, as opposed to what your findings show to be different (or similar, as you will most likely find that your findings have commonalities as well as differences with prior research).   
  • What is the relation between the research of your pilot study and the main study? In my case the pilot study had different research questions (and sub-questions) than the main study; this had to be explained, and it had an effect on the findings. The change resulted from the qualitative, exploratory starting point of my study, and the findings from the pilot urged me to rephrase the research questions of my main study a bit.
  • Is there a theory that runs through your investigation and has an effect on the literature you chose to focus on, the methodology, and the research instruments? In my case that was socio-constructivism. Briefly: one of the theories I used (connected to the pedagogical design of FutureLearn) is Laurillard's conversational framework, specifically the informal conversational framework, which is related to the socio-constructivist view of the world. Additionally, I chose to use Charmaz's Constructing Grounded Theory approach, which is also deeply embedded in the socio-constructivist heritage, and I used multiple learner voices to look at emerging codes, categories and concepts from multiple viewpoints (I used multiple data sources provided to me over time by the participants in my study, who were asked to self-report their learning through learning logs sent at different moments throughout their learning experience with FutureLearn MOOCs).
  • Questions could also be limited in scope, for instance: what is your definition of socio-constructivism? Prepare core definitions that are key to your thesis.
  • How did your research questions guide your coding? A tough one, as there is a tension between qualitative research, which starts from the concept of making no assumptions, and the fact that research questions inevitably guide codes (e.g. codes related to the sub-question on technology for learning).
  • Or, considering one area of my findings: what definition of social learning are you using, and how does it differ from other definitions of social learning? In my case, I used social learning as defined by Laurillard, which fits FutureLearn and is based on the notion of Socratic dialogue, meaning it involves at least two active people. This stands in contrast with, for instance, Bandura (whose definition of social learning goes back to a behaviourist view and can be traced back to Pavlov), who also sees passive learning (e.g. lurking) as a form of social learning, as it is still embedded in the whole of society as the learning environment and is part of observing.
  • Two difficult questions were raised during my mock viva. A mock viva is a sort of general rehearsal for your viva; it usually involves your supervisors, as well as a colleague who wishes you well and wants to strengthen your viva skills. In my case, I had the pleasure of having Rebecca Ferguson as my mock viva examiner, and she is fabulous! I also used some of her tips in preparation for the mock viva; have a look at the top 40 viva questions she listed as important here. One of the questions she asked me was: what is the difference between MOOC learning and other online learning? E.g. active presence of a facilitator, scale, length of course versus length of curriculum, prerequisites, whether it is compulsory or not. Another difficult question was: why did you not take the MOOC educators into account? Here the better answer would have been: I did take educators into account, but only in the roles in which they were seen by the learners, not in their classical roles as defined by educational institutes.

Some general remarks:
Make sure you know your thesis, and use parts of it when looking for answers to the questions you are asked. I mean, physically point to your thesis; this will buy you some time to find the right answer, and will give you some additional content support.

Look confident and be succinct. This conveys professionalism, a research professionalism. It does not matter whether you believe it; just know that you are indeed the expert on your topic, so you can and must be confident.

The questions you get can come from a variety of motives: interest in the approach, doubt about what you wrote, or simple trickery to see whether you really understand what you are doing. This means that at times you might hear a question which prompts an internal voice to say “Hey! But I did do that, and I do have an argument for doing it that way!” In that case, voice your answer and do not be afraid to stick with your thesis, or to correct the examiners. Of course, it is essential to always stay polite, also when you are entering a discussion. But really, the examiners are there to strengthen your thesis; they are in a way trying to let you grasp how you can make it even stronger, and you are the one who is the real expert in what you have investigated: you know the processes you used to get to your main conclusions.