Real Learning in Virtual Worlds - CHAPTER 3: Research Design


CHAPTER 3: Research Design

3.1 Introduction

This study measured learning outcomes through achievement scores on a multiple-choice post-quiz at two cognitive levels of Bloom’s Factual Knowledge dimension, Remember and Understand, for two different lectures delivered in the virtual world of Second Life.


This chapter will discuss the research design of this study along with the researcher’s theoretical assumptions, environment design, lecture material design and analysis methods used in producing the results discussed in the next chapter of this thesis.


3.2 Problem Statement and Research Hypothesis

The problem of this study was to determine the difference in learning outcomes between two randomly selected groups that attended the same lecture in a 3D virtual world using differing methods of delivery. Group 1 received a 2D slide show with pre-recorded audio in a lecture room setting (emulating a classical lecture in a 3D virtual world space) and Group 2 received the same lecture augmented with appropriate 3D objects in an appropriately modified virtual 3D theatre space. Both were delivered in the virtual world of Second Life. The research investigated whether a difference in the delivery method (the addition of interactive “life size” 3D models), where instructional design, timing, content and environmental setup are otherwise the same, produces different learning outcomes with respect to the two identified cognitive levels.


To carry out this study the following hypothesis was formed:

Learning outcomes are not independent of the delivery method in a virtual world, in that varying the delivery method between a 2D and a 3D presentation results in a significant difference in the post-quiz achievement scores of a participant in relation to Bloom’s cognitive processes of ‘remember’ and ‘understand’ for factual knowledge.
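Framed statistically, this hypothesis lends itself to a chi-square test of independence between delivery method and quiz outcome. The sketch below is illustrative only: the group labels mirror the study, but the counts are hypothetical, not the study’s data.

```python
# Chi-square test of independence for a 2x2 contingency table:
# rows = delivery method (2D, 3D), columns = post-quiz outcome (pass, fail).
# The counts below are hypothetical, for illustration only.

def chi_square_2x2(table):
    """Return the chi-square statistic for a 2x2 table of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

observed = [[30, 20],   # 2D group: pass, fail (hypothetical counts)
            [40, 10]]   # 3D group: pass, fail (hypothetical counts)

stat = chi_square_2x2(observed)
# With 1 degree of freedom, the 5% critical value is 3.841.
significant = stat > 3.841
```

Rejecting independence at the 5% level would support the hypothesis that delivery method and post-quiz outcome are related.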


3.3 Research Rationale

In spite of the extensive current efforts of many institutions and educators to establish a virtual presence and adapt delivery of courses to this newly emerged generation of mass market virtual worlds, little (if any) formal and structured analysis has been undertaken by researchers to assess the comparative cognitive affordances of learning delivery methods in these spaces.


An anecdotal assessment of delivery methods in university campuses and training rooms within Second Life showed a preponderance of virtualised traditional lecture rooms, complete with front-facing chairs, projection screens and even lecterns. The implication is that a significant volume of current delivery in Second Life (at least) is merely virtualising traditional real world delivery. The question arises, however: in a space capable of delivering highly interactive collaborative learning and 3D simulations, potentially for lower input costs than would be required in the real world, is the traditional chalk-and-talk approach the most appropriate?


There was a significant incentive to distinguish the effectiveness of these two learning approaches. The comparative cost, expertise and effort required to present a set of pre-prepared real world slides with audio in a virtualised classroom is essentially the same as that required for real world delivery (at least in the Second Life virtual world). Even with Second Life’s simplified and efficient 3D building editor and scripting language, a 2D slide show based presentation with audio narration can be imported or streamed into Second Life and presented for a fraction of the preparation time, sophistication of learning materials and skill set required where an interactive 3D simulation is built and utilised. Distinguishing between these two learning approaches would assist educators in determining whether the extra cost of construction and the extra time in design and preparation involved in developing 3D instructional learning materials are justified by better learning outcomes.


3.4 Research Method

3.4.1 Theoretical Assumptions

Previous research into education within virtual worlds can be divided into two main areas: research that assesses the affordances of the environment to be used as an educational tool (Dickey, 2003; Gonzalez, 2007; Martinez et al., 2007; Youngblut, 1998), and research that compares virtual world learning outcomes to those of real world learning methods (Kurt, Mike, Jamillah, & Thomas, 2004; Mania & Chalmers, 2001; Youngblut, 1998).


The former usually takes an interpretive research approach; the latter, a positivist research approach. From a purist’s standpoint, these two approaches are at opposite ends of the scale in their theoretical assumptions. This, in turn, affects how the researcher approaches, conducts and analyses their research data.


In an interpretive research approach the researcher adopts an investigative approach to analyse and ‘understand’ the conceptual meaning of the social construct. This approach to research is one of total immersion, experiencing the research from an insider’s view, where the researcher plays a social actor within the social construct (Klein & Myers, 1999; A. Lee, 1991; Orlikowski & Baroudi, 1991).


A positivist analyst takes a very different approach from that of an interpretive analyst. Positivist research follows principles such as (A. Lee, 1991; Orlikowski & Baroudi, 1991):

  • the researcher is independent of the research,
  • the inquiry is value-free,
  • a linear cause-effect relationship exists and is verified and tested by deductive logic and analysis methods.


Without passing judgement on the merits of either approach, this research has generally taken a positivist research approach using a classical experimental design method (Neuman, 2006). A direct consequence of this decision was that a ‘laboratory’ first had to be created in the virtual world that could enable the delivery of the lectures under controlled experimental conditions. We will explore this laboratory in this chapter.


3.4.2 Research Study

A virtual learning campus was set up in the virtual world of Second Life where participants were randomly allocated into two groups to participate in either:

  1. 2D Slide Show Lecture: Slides and audio in a classroom setting
  2. 3D Augmented Lecture: Slides and audio augmented by ‘life size’ 3D objects and simulations, in a classroom setting


By ‘life size’ we mean that the 3D objects appeared larger than the participant in the 3D space and large enough for the participant’s avatar to walk on and around them. The lecture, ‘The Physics of Bridges’, was presented using identical subject matter, audio and slide timings, the only differences being the presence of the 3D objects and the minimum environmental changes necessary to allow avatar interaction with them.
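The random allocation into the two groups can be sketched as follows. This is a simplified illustration rather than the actual Second Life scripting; the avatar key and group labels merely mirror the study’s terminology.

```python
import random

def assign_group(avatar_key, assigned, rng=random):
    """Randomly assign an avatar to the '2D' or '3D' group, once only.

    `assigned` maps avatar keys to groups, so a returning avatar keeps
    its original allocation (each participant is assigned exactly once).
    """
    if avatar_key not in assigned:
        assigned[avatar_key] = rng.choice(["2D", "3D"])
    return assigned[avatar_key]

assigned = {}
group = assign_group("avatar-0001", assigned)   # hypothetical avatar key
```

Keeping the mapping from avatar key to group makes the allocation stable: a participant who returns is shown the same condition they were first assigned.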


Before the lecture participants were given a pre-quiz, and afterwards a post-quiz to test the learning outcomes of each group with respect to Bloom’s factual knowledge of ‘remember’ and ‘understand’, together with a survey collecting qualitative data about the experience. Both groups received identical pre- and post-quizzes and surveys. The questions in the pre-quiz differed from those in the post-quiz. A summary of the experiment design is provided below (Table 7):


Research Design Summary

  • Research Design: Classical Experimental Design
  • Sampling: Random without replacement (i.e. avatars were prevented from taking either quiz more than once); 111+ selections
  • Random Assignment: Yes
  • Independent Variable: Learning Delivery Method (Virtual 2D Slide Show Lecture vs. 3D Augmented Lecture)
      ◦ Course Delivered: The Physics of Bridges
      ◦ Time: 20 minutes for both
  • Groups: 2D Group (2D Slide Show Lecture); 3D Group (3D Augmented Lecture)
  • Dependent Variable: Cognitive Learning Outcome, measured as post-test achievement scores against the lecture objectives of Bloom’s:
      ◦ Factual knowledge, Remember cognitive process
      ◦ Factual knowledge, Understand cognitive process
  • Instrument:
      ◦ Pre-Test: tests current factual knowledge of the topic before course delivery
      ◦ Post-Test & Survey: retests factual knowledge for ‘Remember’ & ‘Understand’ after course delivery; surveys the participant’s learning experience

Table 7. Research Design Summary


3.5 Research Population

The population and frame for inclusion was the total residents of Second Life, consisting of 16,318,063 registered users (1,344,215 of whom had logged on in the previous 60 days), with demographics of 59% male and 41% female; the largest group, at 35%, is aged between 24-34 years, with the whole population being over 18 years of age. The largest share of Second Life residents, 39%, live in the United States of America. Appendix I: Second Life Demographic provides a more detailed breakdown of these statistics (Linden Lab, 2008b).


It was decided to use only current in world users (rather than recruiting new users to participate in world) to avoid a weakness of previous research studies discussed in Chapter 2, where participants were learning a new toolset rather than the learning material presented (Martinez et al., 2007; Youngblut, 1998).


3.6 The Virtual Learning Environment

The virtual world Second Life was chosen over other virtual world environments in light of the discussion provided in Chapter 2 concerning Architecture Considerations and the review of Educational Research in virtual worlds. Second Life currently provides many benefits over other virtual worlds for open access to learning, due to the capabilities of its toolset, which simplifies the rapid import of 2D materials and the construction of 3D interactive environments. Second Life has powerful scripting and modelling tools, standard in its interface, that provide a vast range of approaches with which to create the virtual learning environment. Lastly, as noted in Chapter 2, the take-up of Second Life for educational purposes by tertiary institutions worldwide numbers in the hundreds.


In the section that follows we will discuss the virtual world learning environment (the ‘laboratory’) that was built in Second Life in order to conduct this research experiment.


3.6.1.1 Building the Virtual Learning Environment: Design Considerations

There are two general approaches to the design layout of a virtual space (Corbit, 2002). One separates places within the space into discrete areas between which users move using portals (known as teleports in Second Life); the other is more representative of the real world, where users navigate to different places using such things as pathways between buildings or rooms within the virtual space. Both of these constructs offer advantages depending upon the circumstances. For example, the portal method offers a simpler way for the user to navigate the space easily and quickly, whereas if one wanted to assist the user in obtaining a sense of placement, presence and collaboration within the virtual environment then the latter may be more appropriate (S. Clark & Maher, 2006), as the user is encouraged to explore the virtual space in order to form a relationship with the environment (Corbit, 2002).


This virtual learning environment was built largely around the first approach where a series of rooms were built and participants navigated the environment using teleports in order to complete the appropriate stage within the experiment, but with the rooms themselves emulating a real world environment with chairs for sitting, lecture rooms with projection screens and foyers, teller machines for delivering participant fees, etc.


The use of teleports not only offered simplicity of navigation but also enabled the control required over the steps in the process for the experimental design approach taken in this research. Teleports allowed the environment to be automated so that participants could operate it without intervention or assistance from the researcher, upholding the positivist research approach of remaining unbiased, value-free and independent of the experiment under study (Orlikowski & Baroudi, 1991). Furthermore, the use of distinct, purpose-specific and separate rooms connected only by teleports was also indicated for technical and security reasons that will be discussed later in the System Controls section below.


Further consideration was given to the construction of the rooms themselves, including the look and content of each room. Bellman and Landauer (2000) believe that a key question in the implementation and application of a virtual world is to decide what reality should be made virtual by incorporating “functional realism”. Functional realism is purpose-built realism that maintains sufficient realism for the illusionary effects of presence and immersion but does not pursue absolute realism. Absolute realism, in most instances, they believe, only distracts from the real objectives of the environment. For example, implementing window scenes in a university lecture room with passing cars, jets flying through the sky and construction on a neighbouring building may be a realistic scene in the real world, but in a virtual world it would only distract the students from their learning objectives. Applying functional realism not only provides focussed design but also enhances the virtual world by including only key components and excluding elements that would be disruptive in the real world. [24]


This virtual learning environment build was based upon a real world setting, using a theatre theme, in rooms that were self-contained with only the essential elements included in order to complete the learning task at hand.


3.6.1.2 Virtual Learning Campus Overview

The overall virtual learning campus consisted of a Welcome Room, a Pre-Quiz Room, 6 Lecture Room complexes (containing an arrival foyer, theatre, exit foyer and theatre control room), a Post-Quiz/Survey Room and a central Control Room; Figure 49 provides an overview of the process flow of the virtual learning campus.


The starting area for all visitors was the Welcome Room, where the participant could read about the research, its rules, authority and standards. From this room a participant could take a teleport to the Pre-Quiz Room. On arrival, avatar identity keys were automatically recorded.


After completing the pre-quiz in the pre-quiz room participants were paid a minimum amount for attending and they could decide either to leave the research project, or continue onto a lecture. On commencement and completion of quizzes avatar identity keys were recorded.


There were 6 Lecture Rooms divided evenly into 2 types of lectures: a 2D audio-slide show presentation or a 3D augmented audio-slide show presentation. Each lecture theatre could hold up to 18 seated participants, and lectures were timed to commence every 10 minutes in pairs.


If participants continued on to the lecture, their completion of the pre-quiz was automatically verified and they were randomly allocated on teleportation to one of these lectures. Once the lecture had completed they could teleport to the Post-Quiz/Survey Room to be tested on their learning outcomes and surveyed on their experience, and finally they were paid for their participation in the research project.


This entire process took approximately 30 minutes for the participant to complete.


The entire virtual campus took approximately one man-month to build [25], with the 3D presentation content taking approximately three times longer to build than the 2D presentation content (approximately 3 days for the 3D presentation and 1 day for the 2D presentation).


In the section that follows a detailed view of each room is provided along with the function of the room.





Figure 49. Environment: Virtual Learning Campus Flow Chart



3.6.1.3 Welcome Room

The Welcome Room provided the entry point into the virtual campus (Figure 50). Here the participants were provided information about the research and if they decided to participate what could be expected of them within the research experiment.


This room contained four large wall signs and four smaller floor signs in each corner.


The wall signs provided the following information (see Appendix C: Welcome Room Information Content for more details):

  • The aim of this research;
  • What can I expect?
  • How long will it take?
  • Payment?


The floor signs provided the participant with a web link to the research explanatory statement (see Appendix C: Welcome Room Information Content for more details) and a virtual note card providing them with the welcome room information that they could hold in their inventory to take away from the research location.


If the participant decided to take part in this research then they took a teleport (the gold rings partially visible in the image) from this room, which transported them to the Pre-Quiz Room.


Figure 50. Environment: Welcome Room



3.6.1.4 Pre-Quiz Room

The Pre-Quiz Room was a common area where all participants were given a Pre-Quiz to obtain their level of knowledge of the subject prior to the delivery of the lecture.


A participant would be teleported from the Welcome Room into the centre of this room and instructed by the large sign on the main wall to be seated in order to take the pre-quiz (Figure 51, Left). Once seated, a web-link would be provided to them to take the pre-quiz. This web-link was connected to a survey engine that operated over the internet and stored details in a database outside of the Second Life environment. The survey database recorded the participant’s answers to the pre-quiz along with other details such as the participant’s avatar key (the unique identifier of the Second Life user). The avatar’s key was used to verify that the participant had completed the pre-quiz prior to payment and teleportation into the next scheduled lecture.
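The role of the off-world survey database can be sketched as a simple store keyed by avatar key. The class and method names here are illustrative assumptions: the actual engine was a web application running outside Second Life.

```python
class SurveyStore:
    """Minimal stand-in for the off-world survey database, keyed by avatar key."""

    def __init__(self):
        self.records = {}   # avatar_key -> set of completed stage names

    def record_completion(self, avatar_key, stage):
        """Record that an avatar has completed a stage (e.g. the pre-quiz)."""
        self.records.setdefault(avatar_key, set()).add(stage)

    def has_completed(self, avatar_key, stage):
        """Check completion before granting payment or teleportation."""
        return stage in self.records.get(avatar_key, set())

store = SurveyStore()
store.record_completion("avatar-0001", "pre-quiz")   # hypothetical avatar key
```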


Once the participant had completed the pre-quiz they could collect part payment for completing this stage of the research from an ATM along the back wall (Figure 51, Right) and then use a teleport, situated next to the ATMs, to transport them to the next scheduled lecture. The lectures were scheduled every 10 minutes for both the 2D and 3D presentations. A blue beam displayed on the teleport showed the participant that the next lecture was available. Timers beside the ATMs showed the time until the next lecture. On teleportation a participant was randomly allocated to either a 2D or a 3D lecture.
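The 10-minute schedule amounts to rounding the current time up to the next interval boundary, which is what the timers beside the ATMs would display. A minimal sketch; the 600-second interval comes from the text, while the function name is hypothetical:

```python
def seconds_until_next_lecture(now_seconds, interval=600):
    """Time remaining until the next lecture, with lectures commencing
    every `interval` seconds (10 minutes) on the clock."""
    remainder = now_seconds % interval
    return 0 if remainder == 0 else interval - remainder
```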


Figure 51. Environment: Left Pre-Quiz Room, Right ATMs & Teleporters


3.6.1.5 Lecture Theatre

The participant would arrive in the foyer of the lecture theatre where they were instructed via floor signs to switch on their audio and video controls and to be seated inside the lecture theatre (Figure 52).


The slide presentation was delivered using streaming in world web technology: PowerPoint slides were constructed, saved as HTML files and streamed into Second Life using an in world constructed HTML viewer. Audio streams were also recorded and synchronised to each of these slides throughout the presentation.


Figure 52. Environment: Lecture Theatre


Both the 2D and 3D theatres were set up essentially the same and delivered within the same time frame of approximately 20 minutes of instructional delivery. The only variable that changed was the presence or absence of 3D objects in the delivery method of the presentation.


In the 2D presentation a participant remained seated to watch and listen to the lecture throughout (Figure 53, Left). In the 3D presentation the participant commenced the session seated, but on commencement of the lecture a room would open up behind the front 2D presentation screen and the participant would be automatically transported in their chair and dropped into the 3D presentation space to view the slide show in a specially designed 3D viewing area (Figure 53, Right). Participants in the 3D presentation were then left standing in this space and were able to move around it if they wished. In the 2D mode the front-facing projection screen displayed the slides, while in the 3D space the 2D slides were projected on the walls around the 3D viewing space, with the 3D objects created and removed automatically, in sync with the slides and audio, in the centre of (and around) the 3D viewing space.


Figure 53. Environment: Learning Delivery Method


Careful consideration was given so that both groups obtained the same instructional information. The only exception was that the pictures contained in the 2D slide presentation were translated into 3D form and either rotated and animated, or positioned for ‘walking on’ or exploration in front of the participant.


Once the lecture had completed the participants for both groups were instructed to move to the exit foyer and teleport to the next phase of the research project via teleports located in the exit foyer. The entrance to the exit foyer and the teleports therein were only switched on after the last slide had been delivered (Figure 54).



Figure 54. Environment: Lecture Room Teleporters



Each lecture theatre contained a hidden control room and a separate bank of teleports (restricted to the administration avatar) connecting it to the central control room, allowing independent movement and invisible monitoring of the lecture rooms. Each hidden control room also contained the control system and communication devices for that lecture theatre.





3.6.1.6 Post-Quiz Room

The final phase for the participants was to take a post-quiz and survey. This room operated the same as the Pre-Quiz room.


The Post-Quiz Room was a common room into which all participants were teleported after their lecture. A participant would be instructed via the main sign on the wall to be seated in order to take the quiz and survey (Figure 55). Once they had completed the quiz and survey they were instructed to go to the back of the room to collect the final payment for their research participation from an ATM. The survey engine recorded the completion of the survey and only then allowed payment.


Figure 55. Environment: Post-Quiz Room




3.6.1.7 Control Room

At the centre of this system was a Control Room. The Control Room was responsible for managing the 28 public teleports as well as containing separate teleports for members of the administration team. At any time a member of the administration team could bypass the controls contained within the system and move to any room within the environment (Figure 56).



Figure 56. Environment: Control Room


3.6.1.8 System Controls

In the design consideration section it was mentioned that this environment was best setup using separated rooms with teleports to navigate the system. This decision allowed for an increase in security as well as allowing the use of teleports to operate as control gates.


Within Second Life a user can employ what is called roaming camera mode to look around without moving their avatar. A person can use this mode to view other locations within a definable distance and even operate controls like the sit command, creating a security risk that a participant could bypass steps within the research process. Locating rooms far from each other, at random distances in 3D space, connected only by teleports, prevented this from occurring. Even if a participant found a way of teleporting to a location out of sequence with the research process (e.g. they had visited before and created a landmark to teleport back to, or had given this landmark to another avatar), the teleports, seats and ATMs all communicated with a central off-world web site (containing the survey engine) which verified the proper completion of each required step and acted as a gatekeeper to stop a person from breaching the system.


At every stage, when an avatar used a teleport, a quiz seat or an ATM, that device connected to an external database and looked up the avatar’s key to ensure that the appropriate stage had been completed prior to allowing access. For example, a participant had to have completed the pre-quiz prior to entry into a lecture theatre; if they tried to breach this sequence the teleport reported an error message and would not allow them to teleport. As a further example, a participant was required to complete an entire lecture prior to completing the post-survey: the exit Lecture Room teleports were disabled until the lecture finished, after which a participant could take a teleport to the Post-Quiz Room. In doing so the participant was flagged as having completed the lecture, which enabled them to take the post-quiz and survey.
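The gatekeeper logic amounts to checking that every earlier stage in the sequence has been completed before a device grants access. The stage names follow the chapter; the function itself is an illustrative reconstruction, not the system’s actual script.

```python
# Required order of stages in the research process (from the chapter text).
STAGES = ["pre-quiz", "lecture", "post-quiz"]

def may_enter(completed, stage):
    """A teleport, seat or ATM grants access to `stage` only when every
    earlier stage has already been completed by this avatar.

    `completed` is the set of stage names recorded for the avatar's key."""
    idx = STAGES.index(stage)
    return all(s in completed for s in STAGES[:idx])
```

For example, `may_enter({"pre-quiz"}, "lecture")` succeeds, while an avatar teleporting straight to the post-quiz with only the pre-quiz completed is refused.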


Other controls were built into the ATM machines so that a participant could only be paid once, and into the survey system so that a participant could only undertake the research once (they were allowed to attend again if they chose, but could not take the quizzes or survey again).
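The pay-once ATM control reduces to recording which avatar keys have already been paid. A minimal sketch; the amount and key are hypothetical:

```python
def pay_once(avatar_key, paid, amount):
    """Pay an avatar at most once; returns the amount paid (0 if already paid).

    `paid` is the set of avatar keys that have already collected payment."""
    if avatar_key in paid:
        return 0
    paid.add(avatar_key)
    return amount

paid = set()
first = pay_once("avatar-0001", paid, 50)    # hypothetical amount: pays 50
second = pay_once("avatar-0001", paid, 50)   # already paid: pays 0
```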


This design of the virtual learning campus allowed for an automated system that could operate 24 hours a day for multiple participants. It was also fault tolerant to possible SIM crashes, with the entire system able to restart and recover correctly unattended.


Lastly, driven entirely by a specially designed control language held in replaceable text files, the design made for an easily modifiable and manageable system requiring minimal scripting changes to introduce new rules. An entirely new lecture and testing set can be loaded into the system in less than 5 minutes (once the content has been written or built).
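A control language held in replaceable text files might take the shape of a simple line-based key-value format. The format and the keys below are purely illustrative assumptions; the study’s actual control language is not documented in this chapter.

```python
def parse_control_file(text):
    """Parse a simple line-based control file: 'key = value' pairs,
    with '#' comments and blank lines ignored. The format is an
    illustrative assumption, not the study's actual control language."""
    settings = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and whitespace
        if not line:
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()
    return settings

config = parse_control_file("""
# hypothetical lecture configuration
lecture_title = The Physics of Bridges
slide_count   = 20
interval_secs = 600
""")
```

Swapping in a new text file with different values would reconfigure the system without touching any scripts, which is the property the chapter describes.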


3.7 Learning Task Design

3.7.1 Subject Matter

The subject matter chosen was the Physics of Bridges. This topic was chosen both for its familiarity (everyone knows what a bridge is) and its obscurity (people generally know less than they might initially believe about the detail of how bridges work), and because the content could be easily adapted for both forms of delivery. The level of difficulty was aimed at approximately a year 12 high school student. The content was mainly sourced from academic and government information web sites. Appendix D: Instruction: Slide Presentation contains the delivered presentation along with a reference list on its last page.


3.7.2 Instruction Delivery

A virtual learning system, no matter how good its delivery design, is only as good as the instructional design of the learning task. As discussed in Chapter 2, Learning and Instructional Design Theory, the instruction methods used to assist in the delivery and assessment of the course were Gagne’s Nine Events of Instruction and the revised Bloom’s Taxonomy cognitive domain.


This section provides details of how both the 2D and 3D materials were constructed, for the differences within these deliveries refer to section 3.6.1.5 Lecture Theatre in this chapter.


3.7.2.1 Gagne

The theme of this lecture was how the various bridge designs handled the key forces of tension and compression. A variety of bridge designs were explored with respect to these two forces.


Gagne’s 9 stages of instructional delivery were provided for as follows:

  1. Gaining Attention (Reception): This stage grabs the attention of the participant. While participants arrived in the theatre, prior to the commencement of the formal presentation, a slide show of ‘best of’ bridge structures was shown, accompanied by music, to motivate and excite the participant for the lecture that was to follow (see Appendix E: Pre-Presentation Slide Show).
  2. Informing Learners of the Objective (Expectancy): This stage informs the participant what new knowledge they can expect to learn. The second slide contained the objectives of the presentation (see Appendix D: Instruction: Slide Presentation). These objectives were written using the revised Bloom’s taxonomy.
  3. Stimulating Recall of Prior Learning (Retrieval): This stage frames the new information in terms of the participant’s current knowledge so that they can relate better to it. Every slide that introduced a new bridge structure contained a picture of a real bridge so that the participant could relate real-life experience to the new information presented.
  4. Presenting the Stimulus (Selective Perception): This is where the new knowledge was presented: each bridge form was presented with an overview, its relationship to tension and compression, and the limitations of the design. The information was chunked into a logical structure. Stages (4) and (5) are interrelated, together providing the participant with new knowledge in a logical and meaningful context.
  5. Providing Learning Guidance (Semantic Encoding): This stage presents the information in a deeper form, allowing the participant to encode the new information into long-term memory. Here the information was presented in different forms, using both pictures (and, in the case of the 3D group, 3D models) and text. Furthermore, three different concepts (i.e. overview, tension and compression, and limitations) were provided for each bridge to enhance a participant’s breadth of knowledge of that bridge. The bridges were also presented from simplest to most complex so that participants could gradually build up the concept of a bridge structure and its relationship to tension and compression.
  6. Eliciting Performance (Responding): This stage allows the participant to ‘do something’ with their new knowledge. Given only 20 minutes was available to deliver the material, this stage was not performed. Had Bloom’s cognitive process of Apply been tested, inclusion of this stage would have been imperative. The researcher recognises that, although time was a limitation of this study, this stage would ultimately have been interesting to include.
  7. Providing Feedback (Reinforcement): This stage is usually performed with feedback from the lecturer to confirm that the participant understood the new knowledge presented. Again, due to time constraints and the type of research method used (experimental design), direct lecturer interaction was not an option, so in order to hold the experiment constant for all participants, summary slides were used. These provided a form of feedback by presenting the information again in a different form to that used in the main body of the presentation, requiring some degree of participant thought to process the summary information (the post-quiz served a similar purpose, but without learning confirmation).
  8. Assessing Performance (Retrieval): In this research study this was the final stage of delivery, where participants were given the post-quiz to assess their learning outcomes.
  9. Enhancing Retention and Transfer (Generalisation): The final stage of Gagne’s instructional delivery is to generalise and transfer the information delivered in light of new information that may be presented in future. This step was partly performed at stage (7), where the information was summarised. Transfer in normal (i.e. non-experimental) situations would allow the student to take away their new knowledge, i.e. the lecture materials. Although this is possible in Second Life, the lecture materials were not transferred to participants because the experimental conditions had to be controlled.


3.7.2.2 Bloom’s

The revised Bloom’s taxonomy (Anderson et al., 2001) provided the overall learning objectives of the course content (and therefore the new knowledge presented throughout the instruction) as well as the way in which participants were tested on this new knowledge. The two learning outcomes this research assessed were the ‘Remember’ and ‘Understand’ cognitive processes within the Factual Knowledge dimension of the revised Bloom’s taxonomy, as can be seen in Figure 57 below.


Figure 57. The Revised Bloom’s Taxonomy Table: Tested Process Dimensions


Bloom defines ‘remember’ of Factual Knowledge as recall of knowledge presented to participants in the learning instruction: the basic elements of the subject matter. For example, the bridge types presented were Beam, Truss, Arch and Suspension. Recalling the names of these bridge types is the cognitive process of ‘remember’ of Factual Knowledge; when tested, participants either remember or they do not.


Bloom defines ‘understand’ of Factual Knowledge as a means of promoting retention by linking the new knowledge of ‘remember’ with a participant’s prior knowledge, so that the participant can do more than just remember: they can use the new knowledge in other forms such as interpreting, comparing and explaining. This understanding is not necessarily presented in the instruction; rather, it is assimilated from the information presented as a whole. For example, participants were tested on hybrid bridges but were never instructed on these forms of bridge in the lecture. Participants should have been able to construct this knowledge from the basic bridge forms presented in the lecture.


In applying the revised Bloom’s taxonomy, the researcher identified the learning objectives, defined these objectives in terms of one of the 19 specific cognitive processes of the revised taxonomy (noting that each cognitive category contains several specific processes), incorporated these objectives into the instruction, and then assessed them.


3.8 Instrumentation

The instrument used to assess a participant’s learning outcome, as well as their overall learning experience, was in survey form. The survey structure used in this research study is shown below (Table 8):


Pre-Survey (8 questions total):
  Pre-Quiz: 8 multiple-choice questions

Post-Survey (32 questions total):
  Post-Quiz: 20 multiple-choice questions
  Survey (12 questions):
    2 Content Knowledge: self-assessment of pre and post knowledge
    3 Delivery Method: self-assessment of the quality of the learning materials
    2 Technology: assessment of technical difficulties
    5 Learning Experience: assessment of satisfaction with the learning method

Table 8. Pre and Post Survey Structure



The survey system used to record the data was a web-based survey system, as discussed in the Virtual Learning Environment section of this chapter (Figure 58).


Figure 58. Web-Based Survey System


3.8.1 Pre and Post Quiz

A total of 28 quiz questions were prepared, divided into two groups corresponding to Bloom’s Factual Knowledge processes of ‘remember’ and ‘understand’ (see section 3.7.2.2 Bloom’s for the difference between these two cognitive dimensions). Of these, 8 questions were given to all participants as a pre-quiz and 20 in the post-quiz.


A participant was never tested on the same question twice, nor provided with the answers to either quiz, reducing the likelihood that a participant would learn from the quiz questions rather than from the lecture material presented. The pre-quiz was delivered to the participant prior to the lecture (see Appendix F: Pre-Quiz), and the post-quiz and survey were delivered directly after the lecture (see Appendix G: Post Quiz & Appendix H: Survey).


Bloom’s Taxonomy provides sample objectives and corresponding assessment examples within each cognitive category, which guided the construction of these questions. The multiple-choice questions used two formats: direct selection and cueing. A direct selection question proposes a statement or asks a question and provides the participant with a list from which to select an answer, while a cueing question provides the participant with a sentence containing a blank space, for which the respondent selects an appropriate response from a multiple-choice list.


3.8.2 Survey: Learning Experience

After a participant completed the post-quiz, a brief survey of 12 questions (questions 21-32) was given to assess the participant’s own perception of their prior and post content knowledge, the delivery method, technological constraints and their learning experience. These questions comprised 6 Likert-scale questions (5-point scales); 1 yes/no question on technical difficulty, with a general comment to explain any difficulty; 2 questions listing the positive and negative experiences they perceived about the technology as a learning tool; and 2 open-ended questions for general comments about the course delivery and the participant’s overall experience (see Appendix H: Survey Q21-32).


The survey was implemented to assist the researcher in determining whether there had been any adverse effects that may have influenced a participant’s performance in completing the knowledge quiz, and to assist the researcher in gaining a better understanding of the overall research results and participants’ relative experiences across the two delivery methods.


3.8.3 Instrument Reliability

The Kuder-Richardson Formula 20 (KR-20) was selected as the reliability test for the pre- and post-test quiz questions due to the design of the instrument. As the pre-test and post-test were not equivalent, KR-20 was appropriate because it measures internal consistency on a single set of survey results (Burns, 2000; Siegle, 2008). KR-20 is widely accepted, by those educators and psychologists who support the concept of instrument reliability, as a satisfactory method of measuring the reliability of a testing instrument (Yount, 2006).


Cronbach’s Alpha was used to measure the reliability of the Likert scales in the post-survey. It is similar in concept to KR-20, but allows testing of data across scales, whereas KR-20 requires the data to be dichotomously scored (in fact, both produce the same result on dichotomously scored data).
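
As an illustration only (the study used Del Siegle’s spreadsheet, not code), both reliability coefficients can be computed directly from a respondent-by-item score matrix. The sketch below uses population variances throughout so that KR-20 and Cronbach’s Alpha agree exactly on dichotomous data, matching the equivalence noted above.

```python
import numpy as np

def kr20(scores):
    """Kuder-Richardson Formula 20 for dichotomously scored items.
    scores: rows = respondents, columns = items (0 = wrong, 1 = correct)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                    # number of items
    p = scores.mean(axis=0)                # proportion correct per item
    total_var = scores.sum(axis=1).var()   # variance of total scores
    return (k / (k - 1)) * (1.0 - (p * (1 - p)).sum() / total_var)

def cronbach_alpha(scores):
    """Cronbach's Alpha for scaled (e.g. Likert) items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var()   # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)
```

On a 0/1 matrix the two functions return identical values; on multi-point Likert data only `cronbach_alpha` applies.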


The overall results of the instrument reliability tests were low. The underlying problem is that there were too few questions within each group to obtain a true value for the reliability test. The results, along with a discussion of the instrument reliability tests performed, are provided in Appendix L: Instrument Reliability Results.


3.9 Analysis Method

3.9.1 Introduction

As discussed in the Research Method section of this chapter, this research has generally taken a positivist research approach as opposed to an interpretive one. A purist approach to research from either side can lead to weaknesses when interpreting results (Onwuegbuzie, 2002; Richardson, 2005; Walsham, 1995; Weber, 2004). Critics argue:

  • Positivist: that this method can lead to narrow, non-innovative and repetitive thought, while failing to understand that the selection of data, the method of collection, form of quantification and the tests applied are not themselves objective processes.
  • Interpretive: that this method can lead to unresolvable propositions, contextually isolated understandings, non-reproducible observations and ideas sustainable only in the mind of the interpreter.


Thus, in order to minimise the weaknesses of positivist research, the researcher has used triangulation. Triangulation in research can be applied in many forms; in this research it has been used as ‘theory triangulation’, as described by Denzin (1978), which involves using multiple theoretical perspectives to interpret the data results. Unlike the Denzin perspective, where triangulation is used as a means of avoiding bias and validating the data results, this researcher’s reason for applying theory triangulation is to gain a greater understanding of the results by adding range and depth to the quantitative data analysis (Fielding & Fielding, 1986; Olsen, 2004).


3.9.2 Data Processing

The survey data, along with participants’ survey start and finish times, was extracted from the database and processed in Microsoft Excel spreadsheets. After conducting a small number of trials with independent trusted respondents, not otherwise part of the assessment, to determine the minimum practical time for completion of the quiz and survey, it was decided that a cut-off time of 2 minutes would be used as the basis for filtering post-surveys. Post-quizzes/surveys completed in under this time were examined and removed. This time was based upon how long it took the researcher and the trusted respondents to read and respond to only the quiz questions at a medium speed. Each survey was also reviewed for possible fake entry of quiz answers, e.g. selecting the first or last value for every question. By excluding these surveys it was hoped to lessen the chance of erroneous results.
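
The screening rules above can be sketched as a small routine. The field names (`start`, `finish`, `answers`) are hypothetical, since the study performed this screening manually in Excel rather than in code.

```python
from datetime import datetime, timedelta

MIN_DURATION = timedelta(minutes=2)  # minimum practical completion time

def is_suspect(response):
    """Flag a post-quiz/survey response for review and removal.

    `response` is a dict with hypothetical keys: 'start' and 'finish'
    (datetime objects) and 'answers' (list of selected option indices).
    """
    too_fast = (response["finish"] - response["start"]) < MIN_DURATION
    # Fake-entry heuristic: the same option selected for every question,
    # e.g. always the first or always the last value.
    uniform = len(set(response["answers"])) == 1
    return too_fast or uniform
```

A response is flagged if it was completed in under two minutes or shows a uniform answer pattern; in the study, flagged responses were examined by hand before removal.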


The survey contained no missing data, because every field except the general comments and technical comment questions was a required response field before a quiz/survey was accepted by the system and saved to the database.


3.9.3 Software

The software used to analyse the data results comprised the Microsoft Excel 2007 Data Analysis add-in; STATGRAPHICS Centurion (2009), a statistical software package similar to SPSS; StatCal, an Excel spreadsheet developed by David Moriarty (2008) for testing normal distribution; and an Excel spreadsheet by Del Siegle (2008) for testing instrument reliability.


3.9.4 Quantitative Analysis Methods

Quantitative research methods are a natural fit with the principles of positivist research, which requires a scientific approach to analysis. Quantitative research can be described as a process of presenting and interpreting data that follows a linear research path, using logical models to measure variables and test a hypothesis that is directly linked to a cause. Analysis is performed using hard data (i.e. numerical), but soft data (i.e. non-numerical) may also be assessed by transforming natural phenomena into numbers using quantification techniques (Neuman, 2006).


3.9.4.1 Operational Hypotheses

Quantitative analysis methods require the research hypothesis (given earlier in the Problem Statement and Research Hypothesis section) to be re-expressed as operational hypotheses, so that each hypothesis forms a tighter and more testable statement (Burns, 2000). From the research hypothesis the following operational hypotheses were formed:

  1. (H1): That the learning outcomes for Bloom’s factual knowledge of ‘remember’ cognitive process will result in a significant difference in post-quiz scores between 2D and 3D participants.
  2. (H2): That the learning outcomes for Bloom’s factual knowledge of ‘understand’ cognitive process will result in a significant difference in post-quiz scores between 2D and 3D participants.


Statistical analysis requires that testing be performed on a hypothesis of no difference, known as a null hypothesis (H0). Since H1 and H2 are expressed in terms of differences, the corresponding null hypotheses, H01 and H02, were tested for no significant difference. If a null hypothesis yields a statistically significant result, it is rejected, accepting the probability that the results of the experiment are unlikely to be a random variation in sampling error and that the conclusions drawn from the sampled population in the experiment can be generalised to the entire research population (Burns, 2000).


The experimental data used to test the above hypotheses were participants’ multiple-choice post-quiz achievement scores. These multiple-choice answers were dichotomously scored (i.e. 0 for a wrong answer, 1 for a correct answer) and analysed as discussed next.
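
Dichotomous scoring of this kind can be sketched as follows; the question identifiers and answer key below are invented for illustration.

```python
def score_quiz(responses, answer_key):
    """Score multiple-choice answers dichotomously: 1 if the selected
    option matches the key, 0 otherwise (including unanswered questions)."""
    return {q: int(responses.get(q) == correct)
            for q, correct in answer_key.items()}

# Hypothetical 3-question key and one participant's selections.
key = {"q1": "b", "q2": "d", "q3": "a"}
answers = {"q1": "b", "q2": "a", "q3": "a"}

scores = score_quiz(answers, key)
achievement = sum(scores.values())  # post-quiz achievement score: 2 of 3
```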


3.9.4.2 Statistical Significance

This study used the non-parametric Mann-Whitney U test to test H01 and the parametric t-test for independent groups to test H02. All significance tests used a critical alpha level (α) of 0.05; that is, a result was treated as significant when the probability (p) of obtaining it under the null hypothesis was less than 5%. The selection of these tests was based upon the way in which the hypotheses were formed and whether the results data met the assumptions for selecting a parametric test.
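
For illustration, both tests can be run in a few lines with SciPy (the study itself used STATGRAPHICS and Excel, not Python); the two groups’ scores below are simulated placeholders, not the study’s data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ALPHA = 0.05  # critical alpha level

# Simulated post-quiz achievement scores for the two delivery groups.
scores_2d = rng.integers(4, 11, size=30)  # 2D slide-show group
scores_3d = rng.integers(5, 11, size=30)  # 3D augmented group

# H01: non-parametric Mann-Whitney U test, two-tailed.
u_stat, p_u = stats.mannwhitneyu(scores_2d, scores_3d, alternative="two-sided")

# H02: parametric t-test for independent groups, two-tailed.
t_stat, p_t = stats.ttest_ind(scores_2d, scores_3d)

print(f"Mann-Whitney U: U={u_stat:.1f}, p={p_u:.3f}, reject H01: {p_u < ALPHA}")
print(f"Independent t:  t={t_stat:.2f}, p={p_t:.3f}, reject H02: {p_t < ALPHA}")
```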


Burns (2000, p. 155) provides a flowchart to assist in the selection of a statistical test. As can be seen in Figure 59, the highlighted statistical tests are the test options available in this research study. The test selection is based upon a combination of the data type, hypothesis statement and the sample population selection.


Figure 59. Significance Test Selection


Burns (2000) states that if a researcher has a choice between a parametric and a non-parametric test, it is best to select the parametric test. Parametric tests are more powerful at detecting significant differences than non-parametric tests because they take into account not only the rank order of scores but also the variances between these scores. A parametric test should only be chosen if the experimental data meet three assumptions: that the data be naturally numerical, using interval or ratio scales; of normal distribution; and of homogeneous variance.


Using Burns’ diagram above: this study measures the differences between two groups (2D and 3D) where the population was randomly selected, so the data fall into two independent groups. From Burns’ diagram[26], this research study should use either the parametric independent t-test or the non-parametric Mann-Whitney U test. If the data meet the three parametric test assumptions, then the parametric test should be chosen over the non-parametric test.


Within the data analysis for significance, it was decided that the test of significant difference would be based upon a 2-tailed hypothesis. Due to the lack of prior research in this area, the researcher could not form a strong expectation about the direction of any difference in test results between the two methods.


3.9.4.2.1 Assumptions of Parametric Testing: Tests Performed

Prior to testing for significance, the results data was tested to see whether it met the assumptions of parametric testing, that is, that the data be: 1) naturally numerical, using interval or ratio scales; 2) of normal distribution; and 3) of homogeneous variance, as provided by Burns above.


The first assumption is that the data be naturally numeric. The pre- and post-quiz scores were interval scaled, therefore the first assumption of parametric testing was met.


The second assumption is that the data is of normal distribution. There are various methods with which you can test for normal distribution (Fife-Schaw, 2007). This research has adopted the following approach:

  • The measures of skewness and kurtosis can be used to test for normal distribution. If either skewness or kurtosis departs significantly from zero[27] (beyond ±2 standard errors of skewness (ses) or standard errors of kurtosis (sek)), then the results cannot be assumed to be normally distributed (Brown, 1997).
  • The D’Agostino-Pearson K2 omnibus test (K2) was chosen as the statistical test of whether the data deviate significantly from a normal distribution. This test is regarded as the most powerful test of Gaussian distribution, as it is not affected by duplicate values in the data (which the result data contain) (Fife-Schaw, 2007; Graphpad, 2009).
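
Both checks above can be sketched with SciPy, whose `normaltest` implements the D’Agostino-Pearson K2 omnibus test; the standard errors use the common large-sample approximations ses ≈ √(6/n) and sek ≈ √(24/n). The study itself used StatCal rather than this code.

```python
import numpy as np
from scipy import stats

def normality_checks(x, alpha=0.05):
    """Return (skew_ok, kurt_ok, k2_ok) for the normality checks above."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ses = np.sqrt(6.0 / n)    # approximate standard error of skewness
    sek = np.sqrt(24.0 / n)   # approximate standard error of kurtosis
    skew_ok = abs(stats.skew(x)) <= 2 * ses
    kurt_ok = abs(stats.kurtosis(x)) <= 2 * sek  # excess kurtosis vs zero
    # D'Agostino-Pearson K2 omnibus test; a large p-value means no
    # significant departure from a normal distribution was detected.
    _, p = stats.normaltest(x)
    return skew_ok, kurt_ok, p > alpha
```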


The third assumption is that the data of the two groups do not vary significantly in spread. Levene's F-test was applied to measure whether the variance between the groups differed significantly (NIST, 2006).
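
Levene's test is available directly in SciPy; the two score lists below are invented placeholders, not the study's data.

```python
from scipy import stats

# Hypothetical post-quiz scores for the 2D and 3D delivery groups.
group_2d = [6, 7, 5, 8, 7, 6, 9, 7]
group_3d = [8, 7, 9, 6, 8, 9, 7, 8]

# Levene's test for homogeneity of variance between the two groups.
w_stat, p_value = stats.levene(group_2d, group_3d)
equal_variance = p_value > 0.05  # fail to reject: variances comparable
print(f"W={w_stat:.3f}, p={p_value:.3f}, homogeneity assumed: {equal_variance}")
```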


3.9.4.3 Other Tests Performed

Other tests performed, which will be discussed in the results section, are descriptive statistical analyses for each group using both the pre/post-quiz data and the survey data. These tests provide further insight into the research results and the differences obtained in this experiment.


The Likert scales in the survey were treated as ordinal data; since the points on the scale cannot be assumed to be equally spaced, responses were collapsed into three groups: positive, neutral and negative (Jacoby & Matell, 1971).
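
The collapse can be sketched as follows, assuming (hypothetically) that the 5-point scale was coded 1-5 with 3 as the neutral midpoint:

```python
def collapse_likert(score):
    """Collapse a 5-point Likert response into three ordinal groups.
    Assumes the hypothetical coding 1-2 = negative, 3 = neutral,
    4-5 = positive."""
    if score <= 2:
        return "negative"
    if score == 3:
        return "neutral"
    return "positive"

responses = [1, 3, 5, 4, 2, 3]
groups = [collapse_likert(r) for r in responses]
# groups == ['negative', 'neutral', 'positive', 'positive', 'negative', 'neutral']
```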


3.9.5 Qualitative Analysis Methods

Qualitative research methods are a natural fit with an interpretive research approach. Qualitative research is a process of interpreting the data by applying ‘logic in practice’ along a non-linear research path. The emphasis is on constructionism, using inductive analysis for the generation of theory. The data used in analysis is soft data; the researcher analyses the data looking at the ways in which individuals interpret their social construct (Neuman, 2006).


Unlike quantitative analysis, no hypothesis is formed at the start of the study. It is an inductive process in which the main concern of the researcher is to generate and develop new theories based upon interpretation. Qualitative research analysis relies heavily on the application of phenomenological sociology, hermeneutics and ethnography to interpret findings (A. Lee, 1991).


In this study, qualitative methods were used to gain an understanding of the overall learning experience of a participant in a virtual world, as well as any differences that they may have experienced across the alternative delivery methods of the lecture.


3.9.5.1 Analysis Data

The data in this research study that was analysed using qualitative methods was the post-survey data (see Appendix H: Survey). This survey contained open questions to enable a participant to provide feedback on their learning experience, the instructional delivery and any technical constraints they may have encountered during the lecture. The technical difficulty question was straightforward: if a participant answered yes, they could comment on what went wrong. The questions asked in order to understand their perception of virtual world learning and the delivery method were as follows:


  • DELIVERY METHOD ASSESSMENT (Q 25) General Comment:
  • VIRTUAL WORLD LEARNING EXPERIENCE
    • (Q 30) List 3 positive experiences you had with using this technology to learn:
    • (Q 31) List 3 negative experiences you had with using this technology to learn:
    • (Q 32) General Comment:


Qualitative analysis of these questions required the application of the hermeneutic method, which is the process of analysing verbal conversations, text, journals, pictures, etc., looking for meaning both in the detail and as a whole to reveal the deeper meaning contained within, i.e. ‘reading between the lines’ to extract meaning. Within this method a hermeneutic circle is performed, in which interpretation takes an iterative approach: interpreting the text as a whole and in its parts, then reinterpreting in light of the new understanding (Klein & Myers, 1999; A. Lee, 1991).


3.9.5.2 Coding

Using the hermeneutic method on the survey data as described above, the data was coded into patterns, themes and contextual structures in light of the research problem and literature review. Coding generally takes three stages in qualitative analysis: open, axial and selective coding (Neuman, 2006).

Open coding was performed as a preliminary analysis to develop codes that condense the data into specific meanings and themes. This process was performed several times, both before and after the quantitative analysis. Axial coding was then performed to develop possible relationships between the coded data. Selective coding, the final stage, was performed to extract the major themes and general theory that emerged, which will be discussed in the Results section of this thesis.


3.10 Summary

In this chapter the researcher has discussed the research design, which required the construction of the virtual learning campus and the learning materials. The instruments used to collect the data were pre and post quizzes and a survey.


This research applies theory triangulation, which represents a mixed-method approach to the analysis. Operational hypotheses were drawn from the research problem and will be assessed using quantitative analysis methods. Qualitative analysis will be used to gain a better understanding of the quantitative results as well as the learning experience of participants.


The next chapter discusses the results of this research project using the methods that were discussed under Analysis Method in this chapter.



