Page 1

When Knowing More Means Knowing Less: Understanding the Impact of Computer Experience on e-Learning and e-Learning Outcomes

Lena Paulo Kushnir
University of Toronto, Canada
lena.paulo.kushnir@utoronto.ca

Abstract: Students often report feeling more overloaded in courses that use e-learning environments than in traditional face-to-face courses that do not. Discussions here consider online design and organizational factors that might contribute to students’ reports of information overload. It was predicted that certain online factors might contribute to stimulus overload, and possibly students’ perceived overload, rather than information overload per se. User characteristics and a range of design and organizational factors that might contribute to perceived overload are discussed, along with hypotheses about how such factors might affect learning outcomes. An experiment was conducted to test predictions that (i) students’ past online experience, (ii) the organization and relevance of online information, and (iii) the level of task difficulty affect (i) learning outcomes, (ii) students’ perceptions of information overload, and (iii) students’ perceptions of having enough time to complete experimental tasks.
A total of 187 participants were tested in four experimental conditions that manipulated the organization and relevance of the online material that students had to learn (ie, (i) a stimulus-low environment, where the material to be learned was presented as scrolling text, with no other stimuli present; (ii) a familiar environment, where the material to be learned was set within the borders of a familiar course Web site; (iii) a stimulus-rich or stimulus-noisy environment, where the material to be learned was set within the borders of an Amazon.com Web page (a Web site where one can search for and buy books, videos and other products online); (iv) a PDF file environment, where the material to be learned was presented as a PDF file that resembled an online duplicate of the same material in the course textbook). Findings suggested that overly busy online environments containing irrelevant information (ie, the stimulus-rich or stimulus-noisy online environment) had a negative impact on learning for students ranked “high” on experience with e-learning technologies, but no impact on learning for other students (as measured by a knowledge test of material studied during experimental sessions). There is no doubt that online environments contain vast amounts of information and stimuli, some of it irrelevant and distracting. How one handles irrelevant or distracting information and stimuli can have a significant impact on learning. Surprisingly, results here suggest that overload affected only experienced students. Perceptual load hypotheses are discussed to explain what initially seemed to be counterintuitive results. This paper examines literature on factors that can affect learning online, strategies for how teachers can ensure positive outcomes in the technology-based classroom, and strategies for avoiding online pitfalls that might leave students frustrated or burdened with feelings of overload.
Keywords: learning outcomes; overload; perceptual load; design and organizational factors of e-learning; interface design; instructional design; user experience; task difficulty

1. Introduction

A serious complaint often reported by students registered in courses using e-learning environments is that they are overloaded with vast amounts of information, and that they often feel more burdened in those courses than in traditional face-to-face courses that do not use such environments (Harasim, 1987; Hiltz & Turoff, 1985; Hiltz & Wellman, 1997; Kushnir, 2004). There is a substantial body of education research describing a range of factors that can influence one’s perception of overload and educational outcomes for students using e-learning environments (for example, Carey & Kacmar, 1997; Carver et al, 1999; DeStefano & LeFevre, 2005; Franz, 1999; Hiltz, 1986; Hiltz & Turoff, 1985; Khalifa & Lam, 2002; Lee & Tedder, 2003; Reed & Giessler, 1995; Stanton & Baber, 1992; Yang, 2000; Zumbach, 2006). Three factors considered in this paper to be “major component parts” of e-learning are (i) user characteristics, (ii) interface design, and (iii) instructional design. These three factors are viewed as interacting components that make up e-learning environments; they have many associated variables that likely interact with one another and impact learners in many ways. Figure 1 illustrates a model of sample variables associated with the three component parts of e-learning technologies discussed here. These components and associated variables are not intended as a comprehensive model; rather, they represent a sample (from the wide-ranging list of variables one could generate) that might be particularly important for e-learning.

ISSN 1479-4403 ©Academic Conferences Ltd
Reference this paper as: Kushnir, L. P. “When Knowing More Means Knowing Less: Understanding the Impact of Computer Experience on e-Learning and e-Learning Outcomes” Electronic Journal of e-Learning Volume 7 Issue 3 2009, (pp289 - 300), available online at www.ejel.org



Figure 1: Components of e-learning technologies and a sample of associated variables

Discussions here focus on only three variables associated with the three component parts of e-learning technologies: (i) students’ experience with e-learning technologies, (ii) the organization and relevance of online information, and (iii) level of task difficulty. While each of the variables in Figure 1 was considered important for understanding e-learning, and while one could argue for a different set of equally important variables, these three were chosen because of their implications for understanding the notions of overload discussed here. A review of the literature also showed that these three variables had yet to be investigated empirically and systematically.

1.1 User characteristics

Certain user characteristics associated with using online education technologies might affect student performance in some online situations. For example, lacking certain skills or experience might put one at a disadvantage compared to students who do not lack them. It is important to understand who the users of a particular system are and how their motivation (Stanton & Baber, 1992), individual learning styles (Kraus et al, 2001; Reed et al, 2000; Weller et al, 1995) and personalities (Hiltz, 1988) might affect successful e-learning.

1.1.1 Students’ online experience

It seems obvious that the incidence of overload in online education environments can be exacerbated if students lack certain characteristics and online behaviours; consequently, learning might be adversely affected. For example, students who lack the technical skills required to participate might be more susceptible to experiencing overload than those who do not (Althaus, 1997; Hiltz & Turoff, 1985). Students’ level of experience can affect their ability to manage information and, as Reed et al (2000) and Kraus et al (2001) suggest, the way they approach and work with online material. As with most other learned behaviours, practice probably “makes perfect” in online situations, and how students feel about working in online environments will likely change as their level of experience changes. Burge (1994) and Hiltz (1986) pointed out that novice users likely experience overload more often than experienced users because novices must learn the course material and how to use the technology simultaneously; the experienced user needs to focus only on the material to be learned.

1.2 Interface design

There are also variables related to the interface design of online environments that might affect one’s performance (variables such as the organization of information and the relevance of that information to learning). For example, Weller et al (1995) demonstrated that hypermedia helped users with the





organization of information, while Dalal et al (2000) suggested that many hypermedia systems lack coherence and, as a result, users feel disoriented. Brewster (1997) demonstrated that presenting information so that it is processed by more than one sense modality increases one’s capacity to process that information.

1.2.1 Organization and relevance of online information

The organization of e-learning environments can have a tremendous impact on learning. It might be that some online environments are unnecessarily cluttered with irrelevant stimuli and information, or it might be that the way in which relevant information is organized affects some learners. Kushnir (2004) suggested that overly busy e-learning environments that contain irrelevant information are over-stimulating, distracting, and can “clog up” valuable (cognitive) processing resources. This is consistent with Hiltz and Turoff’s (1985) suggestion that when e-learning environments contain unorganized information, students find it difficult to decipher the relevance of information, and thus feel overloaded. If students cannot decipher relevant information, then perhaps irrelevant information contributes to a misconception about the amount of information with which they are presented; perhaps it contributes to “perceived overload”. How relevant the information is for successfully completing a task, and the effects of irrelevant information on learning, can play an important role in students’ perceptions of overload. Systematic investigation of this issue could provide insight into the types of information that are essential for effective learning in online environments. For example, Stanton and Baber (1992) suggested that students’ levels of motivation might be affected by how much students believe that the online material available to them is relevant and will help them reach their goal of completing a task.
If users encounter irrelevant information, the goal of completing the task might be hindered. These authors also pointed out that students’ motivation might be impaired if they become overwhelmed by the freedom to move around in such environments (due to not knowing how to move from one place to another and, thus, feeling helpless).

1.3 Instructional design

Finally, variables related to the instructional design of courses using online technologies might also affect one’s performance in such environments, for example, facilitator skills, course expectations, types of tasks completed online, and the level of task difficulty. As Althaus (1997) and Kimball (1995) indicate, a well-designed instructional strategy with clear goals and expectations for participation will reduce the likelihood of users becoming confused and frustrated. It is important that good instructor facilitation be available so that users do not feel unnecessarily burdened (Hiltz & Turoff, 1985; Kimball, 1995).

1.3.1 Level of task difficulty

From an instructional design perspective, it is important to understand under which conditions users of e-learning technologies become so overwhelmed that they are unable to perform effectively. Carey and Kacmar (1997) investigated the effects of task complexity on group satisfaction and group performance. They suggested that people are more satisfied with such technologies during simple tasks than during complex tasks, and that, compared to face-to-face groups, online groups produced fewer correct problem solutions during complex tasks. Hiltz et al (1986) also found an effect of task type, and consistent with other research, complex tasks were considered more difficult to complete in online learning systems. There is evidence that the variable of “task difficulty” interacts with other online variables. Perceptual selection and attention theories shed light on the relation between “task difficulty” and the organization and relevance of online information. The importance of attention in perception is well established, and research on divided and selective attention shows that there are some tasks one can perform well (and others poorly) when trying to process multiple stimuli.
A widely known and unresolved debate (since the mid-1950s) involves visual selection tasks, that is, tasks in which one has to attend to specific stimuli from an array of stimuli in one’s visual field. The debate is over whether one selectively attends to specific stimuli early in one’s visual search of target stimuli, or whether one selectively attends late in the visual search of target stimuli. The first view, known as the “early selection” view, argues that perception is a limited process, and that to successfully proceed in this process, one must selectively attend to specific stimuli early in one’s visual search of relevant stimuli (or information) and ignore (or not fully perceive) irrelevant or non-selected stimuli. This view is also known as






a “bottleneck” model of attention; it was first put forth by Broadbent (1958) and later more fully developed by Treisman (1969). A second, competing view, known as the “late selection” view, argues that perception is an unlimited, automatic process, whereby selection is unnecessary until after the entire visual space has been searched, and thus after an overall perception has taken place. According to this view, one selectively attends to specific, relevant stimuli only late in the perception process. This view was put forth by Deutsch and Deutsch (1963). There is empirical support for both sides of the early versus late selection debate.

In an attempt to resolve this debate, Lavie (1995) put forth a perceptual load hypothesis of selective attention, arguing that perceptual load is a major determinant of early or late selection. Specifically, whether one employs early or late selection depends on task difficulty (or perceptual load): a difficult task is higher in perceptual load than an easy task (and thus one is overloaded), whereas an easy task is lower in perceptual load (and thus one is not overloaded). Lavie also argued that irrelevant stimuli in a visual selection task have no impact when a task is difficult (when perceptual load is high and one is overloaded), whereas irrelevant stimuli influence performance when the task is easy and less processing is required (when perceptual load is low and one is not overloaded). Lavie’s model is a resource model: on easy tasks, excess resources spill over and automatically process irrelevant, non-target items in one’s visual field, whereas on difficult tasks, there are no extra resources to process irrelevant information and one focuses only on the goal-relevant stimuli or target information.
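Lavie’s resource logic can be sketched as a toy model. This is purely an illustration of the hypothesis, not anything from the study; the resource pool size and load values are arbitrary assumptions.

```python
# Toy sketch of Lavie's (1995) perceptual load hypothesis: a fixed pool of
# attentional resources is consumed by the primary task, and any surplus
# "spills over" to process irrelevant distractors (late selection).
TOTAL_RESOURCES = 10  # arbitrary illustrative capacity, not an empirical value

def distractor_processed(task_load: int) -> bool:
    """Return True if surplus resources remain to process irrelevant stimuli."""
    surplus = TOTAL_RESOURCES - min(task_load, TOTAL_RESOURCES)
    return surplus > 0

# Easy task (low perceptual load): surplus spills over, distractors interfere.
print(distractor_processed(task_load=3))   # True -> late selection
# Difficult task (high perceptual load): no surplus, distractors are filtered.
print(distractor_processed(task_load=10))  # False -> early selection
```

The single `surplus > 0` test captures the model’s key prediction: distractor interference appears only when the primary task leaves capacity to spare.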

2. Research question, hypotheses and rationale

This study investigated the following question: Do students’ online experience, the organization and relevance of online information, and the level of task difficulty affect e-learning and students’ perception of overload? It was hypothesized that the organization of the online environment in which students learn, and the relevance of the information in that environment, would affect students’ learning such that those who worked in stimulus-rich (or stimulus-noisy) environments would (i) perform significantly worse on a knowledge test of material studied during an experimental session, (ii) report perceived overload most often, and (iii) report most often that they did not have enough time to complete the experimental task, compared to those who worked in stimulus-low online environments. It was also hypothesized that students with more e-learning experience would (iv) perform better than students with less experience. In a preliminary study (Kushnir, 2004), it was discovered that, in spite of students’ perceptions, the online components of various courses contained far less information than the face-to-face components. Yet students reported that they felt overloaded and believed that they were presented with significantly more information than was actually presented in the online course components. It was suggested that students had misconceptions about the amount of information they had received in the online environments and that they might not have experienced information overload per se; rather, they might have experienced stimulus overload. It might be that the organization of e-learning environments makes it difficult for students to decipher the relevance of information and thus they feel overloaded; it is argued here that this contributes to “perceived overload”. For the purpose of this study, the following distinctions between information overload and stimulus overload were made:

- Information overload: the presentation of too much information relative to the time one has to cognitively process the information (Kushnir, 2004)
- Stimulus overload: the presentation of too many environmental stimuli such that one is unable to process them, or the presentation of successive stimuli which are presented too quickly for one to manage (Milgram, 1970)


A further distinction to consider is the distinction between “distraction” and “over-stimulation”. It might be that in any particular instance, students might not be over-stimulated by too much information or too many stimuli, but perhaps just distracted by unnecessary or irrelevant information. Nonetheless, distracting information gets added to the information processing queue and possibly takes up valuable cognitive processing resources which, ultimately, may contribute to students feeling over-stimulated or overloaded. So, in some cases, distracting information or distracting stimuli may lead to overload, while in others, too many stimuli (that are not necessarily distracting but rather relevant and necessary for learning) may lead to overload.






Perhaps there are variables inherent in online environments that confound students’ perceptions of information and that negatively impact their learning (variables such as experience, the organization and relevance of information, and level of task difficulty). It might be that some online environments are unnecessarily cluttered with irrelevant stimuli and information, or it might be that the way in which relevant information is organized affects how some students learn. This study was designed to establish whether the organization and relevance of information in an online environment impacts learning for experienced and inexperienced online users. While past studies have investigated students’ perceptions of learning in online environments (eg, Carey & Kacmar, 1997; Carver et al, 1999; DeStefano & LeFevre, 2005; Franz, 1999; Hiltz, 1986; Hiltz & Turoff, 1985; Khalifa & Lam, 2002; Kushnir, 2004; Lee & Tedder, 2003; Reed & Giessler, 1995; Stanton & Baber, 1992; Yang, 2000; Zumbach, 2006), few have empirically and systematically investigated explicit measures of learning (Chen, 2005; Dalal, 2000; Lee & Tedder, 2003). The current literature base is rich with qualitative analyses of e-learning; the quantitative analyses presented in this study add to this base and offer new and interesting findings about e-learning.

3. Design of study

3.1 Participants

Participants were one hundred and eighty-seven undergraduate students enrolled in an Introductory Psychology course at a large urban university. The sample included 135 females (72%) and 50 males (27%), with no response from 2 participants (1%). Participants ranged from 17 to 50 years of age; the majority of the sample was between the ages of 17 and 25 (n=175; 94%), the remainder between the ages of 26 and 50 (n=10; 5%), with no response from 2 participants (1%).

3.2 Procedure

All participants were given 30 minutes to read and learn five specific pages from an Introductory Psychology text used in their course. Keeping screen size and resolution constant, the five pages of text were embedded within four differently organized online environments. Participants were randomly assigned to one of four conditions: (i) a stimulus-low environment, where the material to be learned was presented as scrolling text, with no other stimuli present (ie, Scrolling condition; n=49; 26%); (ii) a familiar environment, where the material to be learned was set within the borders of a familiar course Web site (ie, Psy100 condition; n=43; 23%); (iii) a stimulus-noisy environment, where the material to be learned was set within the borders of an Amazon.com Web page (a Web site where one can search for and buy books, videos and other products online; ie, Amazon condition; n=51; 27%); and (iv) a PDF file environment, where the material to be learned was presented as a PDF file that resembled an online duplicate of the material in the course textbook (ie, PDF condition; n=44; 24%). Figure 2 provides a sample screen shot of the Amazon condition (ie, the Introductory Psychology material contained within the borders of an Amazon.com Web page). The other conditions consisted of the same material contained within the online environments described above. The Amazon condition represented a stimulus-noisy condition in which participants encountered both relevant material (ie, material to be learned) and irrelevant, possibly distracting material (ie, hyperlinks and advertisements on the Amazon.com Web site that would not necessarily help students learn the required material). The PDF condition served as a baseline measure for the outcome variables since it was like reading the actual textbook (as it appears in paper format) but online.
After reading the online text, all participants were asked whether they had enough time to complete the experimental task and whether they felt overloaded, and were asked for demographic information such as age and gender. They were also asked how they ranked their computer experience (ie, beginner [just starting to use computers]; intermediate; experienced; advanced [working in a professional/computer-based career]), and questions about their experience using e-learning systems (eg, whether they had used the e-learning system available to them in this particular course; if so, how often; how many of their other undergraduate courses offered similar e-learning systems, etc.). The groups were approximately equal in number of participants, gender and age. Table 1 presents the distribution of participants in each group.






Finally, all participants were given up to 15 minutes to complete a short pencil-and-paper multiple-choice test of 15 questions related to the material that they read and learned online.

Figure 2: Stimulus-rich (noisy) environment; Amazon condition

Table 1: Distribution of participants in each group

| Group | N | Female | Male | No response (gender) | 17-25 | 26-50 | No response (age) |
|---|---|---|---|---|---|---|---|
| Scrolling text (no background or hyperlinks, stimulus-low environment) | 49 (26%) | 35 (71%) | 14 (29%) | — | 45 (92%) | 4 (8%) | — |
| PDF document (duplicate of print version of course textbook) | 44 (24%) | 30 (68%) | 14 (32%) | — | 42 (95%) | 2 (5%) | — |
| Amazon environment (text embedded in stimulus-rich/stimulus-noisy environment) | 51 (27%) | 41 (80%) | 9 (18%) | 1 (2%) | 47 (92%) | 3 (6%) | 1 (2%) |
| Psy100 environment (text embedded in familiar course Web site environment) | 43 (23%) | 29 (68%) | 13 (30%) | 1 (2%) | 41 (95%) | 1 (2%) | 1 (3%) |
| Totals | 187 | 135 (72%) | 50 (27%) | 2 (1%) | 175 (94%) | 10 (5%) | 2 (1%) |



3.3 Analyses

3.3.1 Statistical tests

Univariate analyses of variance (ANOVAs) were computed to test for significant “group” differences and the effects of “experience” on “test scores” and the total number of reported “pages”. Tukey’s B post hoc analyses and independent-samples t-tests were computed to further clarify significant tests. Logistic regressions were computed to test for significant “group” differences and the effects of “experience” (ie, high/low) on the report of “overload” (ie, yes/no) and of “enough time” to complete the experimental task (ie, yes/no). The Wald statistic was computed to determine whether the independent variables of “group” and “experience” had significant effects on the dependent variables of reported “overload” and “enough time” (Agresti, 1996; Wald, 1943).

3.3.2 Measurement of computer experience

Originally a four-item scale (ie, 1 = beginner [just starting to use computers], 2 = intermediate, 3 = experienced, 4 = advanced [professional/computer-based career]), the measure of computer “experience” was collapsed into a dichotomous variable: “low” and “high” computer experience. “Beginner” was combined with “intermediate” to form the new level of “low” computer experience, and “experienced” with “advanced” to form the new level of “high” computer experience. This was necessary because the “beginner” and “advanced” categories were rarely selected, which would have left small cell sizes (n) and unnecessarily inflated variance for those cells, producing unreliable point estimates for population inferences. The variable “experience” was used as a factor in the ANOVAs to test the hypothesis that prior computer experience interacts with the experimental manipulation in determining performance. Similarly, “experience” was used as a covariate in the logistic regression analyses, where low frequencies in the lowest and highest response options would likewise have produced unreliable estimates of the population parameters.
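The recoding described above can be sketched in a few lines (a hypothetical illustration; the variable and function names are assumptions, not taken from the study’s analysis scripts):

```python
# Collapse the four-point computer-experience item into the dichotomous
# low/high variable used as a factor in the ANOVAs and as a covariate in
# the logistic regressions.
RECODE = {
    1: "low",   # beginner (just starting to use computers)
    2: "low",   # intermediate
    3: "high",  # experienced
    4: "high",  # advanced (professional/computer-based career)
}

def collapse(responses):
    """Map raw 1-4 scale responses onto the dichotomous experience variable."""
    return [RECODE[r] for r in responses]

print(collapse([2, 3, 1, 4]))  # ['low', 'high', 'low', 'high']
```

Merging sparse outer categories into their neighbours in this way trades scale resolution for stable cell sizes, which is exactly the rationale given above.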

4. Results and discussion

The predictions that students’ experience and the organization of the environment in which they learned the material were important in determining their test scores were supported. There was a significant interaction between “group” and “experience” on “test scores”, F(3, 179) = 3.821, p = .011. Table 2 and Figure 3 suggest that the source of this interaction was the poor performance of students with “high” experience in the “Amazon” condition. Although significant, this is contrary to the hypothesis that students with “high” experience would perform better than students with “low” experience, but it is consistent with the overall “group” difference predicted (ie, poorer performance in the Amazon, stimulus-noisy condition).

Table 2: Descriptive analyses for the interaction between the factors “group” and “experience” on “test” performance

| Group | Computer experience | N | Mean test score | Std. deviation |
|---|---|---|---|---|
| Amazon | Low | 28 | 11.07 | 2.054 |
| Amazon | High | 23 | 8.74 | 2.508 |
| Amazon | Total | 51 | 10.02 | 2.534 |
| Psy100 | Low | 17 | 10.47 | 2.375 |
| Psy100 | High | 26 | 11.19 | 2.173 |
| Psy100 | Total | 43 | 10.91 | 2.255 |
| Scrolling | Low | 24 | 10.75 | 2.172 |
| Scrolling | High | 25 | 10.52 | 2.084 |
| Scrolling | Total | 49 | 10.63 | 2.108 |
| PDF | Low | 22 | 10.00 | 2.488 |
| PDF | High | 22 | 10.00 | 2.708 |
| PDF | Total | 44 | 10.00 | 2.570 |
| Total | Low | 91 | 10.62 | 2.255 |
| Total | High | 96 | 10.16 | 2.498 |
| Total | Total | 187 | 10.38 | 2.387 |
Given the significant interaction between “group” and “experience” on “test scores”, each level of “experience” was analyzed separately. There was a significant main effect of “group” on “test scores” for the “high” experience group, F(3, 92) = 4.644, p = .005, but not for the “low” experience group, F(3, 87) = .979, p = .406. To confirm this difference, an independent-samples t-test was conducted within the Amazon group; it confirmed that students with “high” experience performed significantly more poorly on “test scores” than students with “low” experience, t(49) = 3.65, p = .001. As expected, contrasts and post hoc tests revealed that the Amazon group had the poorest “test scores”.
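As a sanity check, the reported within-Amazon t value can be recovered from the descriptives in Table 2 using the standard pooled-variance formula (a reader’s recomputation, not part of the original analysis):

```python
import math

# Independent-samples t-test within the Amazon group, recomputed from the
# Table 2 descriptives (means, SDs, cell sizes) with pooled variance.
n1, m1, s1 = 28, 11.07, 2.054  # "low" experience cell
n2, m2, s2 = 23, 8.74, 2.508   # "high" experience cell

pooled_var = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
t = (m1 - m2) / se
df = n1 + n2 - 2

print(round(t, 2), df)  # 3.65 49, matching the reported t(49) = 3.65
```

That the rounded cell statistics reproduce the reported t(49) = 3.65 exactly is a useful consistency check on the table and the text.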

Figure 3: Mean “test” scores for each level of “group” by “experience”

The results seem counterintuitive because it was hypothesized that more computer experience would make students less vulnerable to overload, such that they would perform better than less experienced





students in a stimulus-noisy environment like the Amazon condition. Presumably, more experienced students would be more efficient and find it easier to work in online environments than less experienced students. Others have suggested this as well; Burge (1994) and Hiltz (1986) argued that experienced students are likely to fare better in online environments than less experienced students, primarily because experienced students need to attend only to the material to be learned, whereas less experienced students must attend to both the material to be learned and how to use the technology.

The unresolved “early” versus “late” selection debate, specifically the sequence of selection (Broadbent, 1958; Deutsch & Deutsch, 1963; Treisman, 1969), together with Lavie’s (1995) perceptual load model of attention, helps explain the seemingly counterintuitive results found here. This experiment was designed like a visual search task, that is, a task in which students had to attend to specific stimuli (here, the material to be learned) from an array of stimuli in their visual field (here, the background environment within which the material to be learned was embedded and organized). Students in the Amazon condition encountered a busy environment (containing irrelevant information) within which the relevant information was embedded. Their task was to learn the material in spite of the distracting environment. Consistent with what Burge (1994) and Hiltz (1986) suggest, one might consider the Amazon condition an overall difficult task for the less experienced student and yet a comparatively less difficult task for the more experienced student. Following Lavie’s model, the irrelevant stimuli in the noisy Amazon environment had no impact on the “low” experience students since this was likely a difficult task for them, and thus perceptual load was high (ie, they likely were overloaded). These students would have had no extra resources to process irrelevant or distracting information: they focused on the goal and processed only the relevant information due to early selection, as first suggested by Broadbent (1958). On the other hand, for the “high” experience students, the irrelevant stimuli negatively impacted performance since the task was likely comparatively less difficult for them and thus perceptual load was low (ie, they likely were not overloaded). It would follow, then, that the “high” experience students had excess resources that spilled over, so they processed more irrelevant stimuli due to late selection, as first suggested by Deutsch and Deutsch (1963). The early versus late selection debate and Lavie’s perceptual load model of attention (suggesting when one is likely overloaded) together offer a convincing explanation for the otherwise counterintuitive results found in this study.

The prediction that the “group” in which students participated would affect their reports of “overload” and of “enough time” to complete the experimental task was not supported, although there was a significant main effect of “experience” on reporting “enough time”. Table 3 shows logistic regression results for the predictor “experience”. According to the Wald criterion, the independent variable “experience” reliably predicted the report of “enough time”.

Table 3: Logistic regression results for the predictor “experience”

EXPERIENCE

B

S.E.

Wald

df

Sig.

Exp(B)

-.639

.318

4.039

1

.044

.528

Students ranked "high" on "experience" were more likely to report that, yes, they had enough time to complete the experimental task, whereas students ranked "low" on "experience" were more likely to report not having had enough time.
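The quantities in Table 3 fit together arithmetically: Exp(B) is the odds ratio e^B, and the Wald statistic is (B/S.E.)², referred to a chi-square distribution with 1 degree of freedom (Wald, 1943; Agresti, 1996). The short Python check below is purely illustrative, using the figures reported in Table 3; it reproduces the reported Exp(B), Wald, and Sig. values:

```python
import math

B, SE = -0.639, 0.318       # coefficient and standard error from Table 3

odds_ratio = math.exp(B)    # Exp(B): multiplicative change in the odds
wald = (B / SE) ** 2        # Wald chi-square statistic, df = 1

# Two-sided p-value for a chi-square(1) statistic via the normal tail:
# P(chi2_1 > w) = erfc(sqrt(w / 2))
p_value = math.erfc(math.sqrt(wald / 2))

print(round(odds_ratio, 3))  # 0.528 -> matches Exp(B) in Table 3
print(round(wald, 2))        # 4.04  -> matches Wald = 4.039 (rounding)
print(round(p_value, 3))     # 0.044 -> matches Sig. = .044
```

An odds ratio of .528 below 1 is consistent with the direction of the effect: membership in one experience category roughly halves the odds of the "enough time" response relative to the other.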

Electronic Journal of e-Learning Volume 7 Issue 3 2009, (289-300), ISSN 1479-4403

5. Conclusions and recommendations

This study contributes significantly to our understanding of e-learning outcomes. Where other studies have measured students' perceptions of learning, this study explicitly and empirically measured online learning; as Chen (2005), Dalal et al (2000), and Lee and Tedder (2003) suggest, few studies have done this. The results show that certain user characteristics (in this study, students' experience) interact with important e-learning factors, specifically interface design (in this study, the organization and relevance of information) and instructional design (in this study, level of task difficulty). Interestingly, it was the experienced students, rather than the inexperienced students, who were most affected by stimulus overload, possibly due to issues around selective attention and perceptual load (which, in turn, was associated with being either overloaded or not overloaded). We also learned that experience affects some students' ability to complete tasks within a specific amount of time. The experienced students were the ones most likely to report having had enough time to complete the experimental tasks, even though they were most affected by stimulus overload in the Amazon condition. It was the inexperienced students who most often reported not having enough time to complete the experimental task within the specified time frame (a task-in-time model familiar to most undergraduate students). This study aimed to understand, link and apply existing theoretical frameworks to data that add to our understanding of the design and use of instructional technologies. This type of research linking theories to e-learning is too often absent in the education literature, and Chalmers (2000) and Kraus et al (2001) suggest that more of this type of linking is necessary so that educators can make informed, data-based decisions about the most effective implementation of technology. Practical tips that designers and teachers can take away from this research include the need to consider the impact that training can have when creating and using educational technologies. Familiar, stimulus-low environments result in the best learning outcomes. Training can increase a sense of familiarity with the learning environment, decreasing the possibility of distraction and creating opportunities for learners to gain experience (which, as found in this study, helps with feelings of having had enough time for task completion). Designers and teachers should also consider the impact that irrelevant and possibly distracting information can have on students, especially (and surprisingly) those who have experience with e-learning environments. For students, this study suggests that staying focused and goal-oriented is very important, especially in online environments, where one can easily stray beyond the relevant and necessary information that facilitates learning. It seems that, at least for e-learning, more (information, experience, etc.) does not necessarily mean better or more effective learning. 
E-learning research should continue to link theories to online learning, so that future developments of educational technologies are empirically supported and based on proven and appropriate pedagogical methods. It would be interesting to consider a "Universal System" that could accommodate both novice and experienced users. Such a system might allow individual users some control over the organization and format in which stimuli are presented. For example, an environment where users can engage (or disengage) high- or low-stimulus conditions might better accommodate learners of varying levels of experience, different needs, learning styles, or preferences. While future research might consider such factors, it is important to remember that, as Shneiderman (1998: p. 150) notes, "...each small experimental result acts like a tile in the mosaic of human performance with computer-based information systems", and as a good friend and colleague once told me: "...beware the experts who claim they have the final answer. The nature of scientific progress is slow but deliberate and self-corrective. No single study of complicated behaviour tells us exactly what to do" (Walters, G.C., personal communication, October 4, 2003).
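The "Universal System" proposal above can be made concrete with a minimal sketch. The Python fragment below is purely illustrative (the `Page` and `render` names and the two-level toggle are assumptions, not anything implemented or tested in this study): it shows how a per-learner preference could switch between stimulus-low and stimulus-rich presentations of the same material.

```python
# Hypothetical sketch of a "Universal System": each learner controls how
# much peripheral stimulus the interface presents. All names here are
# illustrative assumptions, not taken from the study.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Page:
    core_content: str                                    # material to be learned
    peripheral: List[str] = field(default_factory=list)  # sidebars, feeds, ads


def render(page: Page, stimulus_level: str) -> List[str]:
    """Return the interface elements shown for a learner's preference.

    'low'  -> core content only (a stimulus-low view; per the results here,
              this may suit experienced users, whose spare capacity under
              low perceptual load otherwise spills over to distractors)
    'high' -> core content plus peripheral elements (stimulus-rich view)
    """
    if stimulus_level == "low":
        return [page.core_content]
    return [page.core_content] + list(page.peripheral)


page = Page("Lecture notes", peripheral=["related links", "news feed"])
elements_low = render(page, "low")    # ['Lecture notes']
elements_high = render(page, "high")  # ['Lecture notes', 'related links', 'news feed']
```

A real system would need many more levels than this binary toggle, but the design point stands: the stimulus load of the environment becomes a user-controlled parameter rather than a fixed property of the interface.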

References

Agresti, A. (1996). An Introduction to Categorical Data Analysis. New York: John Wiley and Sons, Inc.
Althaus, S.L. (1997). Computer-Mediated Communication in the University Classroom: An Experiment with On-Line Discussions. Communication Education, 46(July), 158-174.
Brewster, S.A. (1997). Using non-speech sound to overcome information overload. Displays, 17(3-4), 179-189.
Broadbent, D.E. (1958). Perception and Communication. London: Pergamon Press.
Burge, E.J. (1994). Learning in Computer Conferenced Contexts: The Learners' Perspective. Journal of Distance Education, 9(1), 19-43.
Carey, J.M., & Kacmar, C.J. (1997). The Impact of Communication Mode and Task Complexity on Small Group Performance and Member Satisfaction. Computers in Human Behavior, 13(1), 23-49.
Carver, C.A. Jr., Howard, R.A., & Lane, W.D. (1999). Enhancing student learning through hypermedia courseware and incorporation of student learning styles. IEEE Transactions on Education, 42(1), 33-38.
Chalmers, P.A. (2000). User interface improvements in computer-assisted instruction, the challenge. Computers in Human Behavior, 16, 507-517.
Chen, W.F. (2005). Effect of Web-Browsing Interfaces in Web-Based Instruction: A Quantitative Study. IEEE Transactions on Education, 48(4), 652-657.
Dalal, N.P., Quible, Z., & Wyatt, K. (2000). Cognitive design of home pages: an experimental study of comprehension on the World Wide Web. Information Processing and Management, 36, 607-621.
DeStefano, D., & LeFevre, J. (2005). Cognitive load in hypertext reading: A review. Computers in Human Behavior, 23, 1616-1641.
Deutsch, J.A., & Deutsch, D. (1963). Attention: Some theoretical considerations. Psychological Review, 70, 80-90.
Franz, H. (1999). The Impact of Computer Mediated Communication on Information Overload in Distributed Teams. Proceedings of the 32nd Annual Hawaii International Conference on System Sciences (HICSS-32), 15 pp.
Harasim, L. (1987). Teaching and Learning On-Line: Issues in Computer-Mediated Graduate Courses. CJEC, 16(2), 117-135.
Hiltz, S.R. (1986). The "Virtual Classroom": Using Computer-Mediated Communication for University Teaching. Journal of Communication, 36(2), 95-104.
Hiltz, S.R., & Turoff, M. (1985). Structuring Computer-Mediated Communication Systems to Avoid Information Overload. Communications of the ACM, 28(7), 680-689.
Hiltz, S.R., Johnson, K., & Turoff, M. (1986). Experiments in Group Decision Making: Communication Process and Outcome in Face-to-Face Versus Computerized Conferences. Human Communication Research, 13(2), 225-252.
Hiltz, S.R., & Wellman, B. (1997). Asynchronous Learning Networks as a Virtual Classroom. Communications of the ACM, 40(9), 44-49.
Khalifa, M., & Lam, R. (2002). Web-based learning: Effects on learning process and outcome. IEEE Transactions on Education, 45(4), 350-356.
Kimball, L. (1995). Ten Ways to Make Online Learning Groups Work. Educational Leadership, October, 54-56.
Kraus, L.A., Reed, W.M., & Fitzgerald, G.E. (2001). The effects of learning style and hypermedia prior experience on behavioral disorders knowledge and time on task: a case-based hypermedia environment. Computers in Human Behavior, 17, 125-140.
Kushnir, H.F. (2004). Information overload in Computer-Mediated Communication and Education: Is there really too much information? Implications for distance education. In J. Hewitt and I. DeCoito (Eds.), OISE Papers in Educational Technologies, Vol. 1, 54-71. Toronto: University of Toronto Press.
Lavie, N. (1995). Perceptual Load as a Necessary Condition for Selective Attention. Journal of Experimental Psychology: Human Perception and Performance, 21(3), 451-468.
Lee, M.J., & Tedder, M.C. (2003). The effects of three different computer texts on readers' recall: based on working memory capacity. Computers in Human Behavior, 19, 767-783.
Reed, W.M., & Giessler, S.F. (1995). Prior Computer-Related Experiences and Hypermedia Metacognition. Computers in Human Behavior, 11(3-4), 581-600.
Reed, W.M., Oughton, J.M., Ayersman, D.J., Ervin, J.R., & Giessler, S.F. (2000). Computer experience, learning style, and hypermedia navigation. Computers in Human Behavior, 16, 609-628.
Shneiderman, B. (1998). Designing the User Interface: Strategies for Effective Human-Computer Interaction. Reading, MA: Addison-Wesley.
Stanton, N., & Baber, C. (1992). An Investigation of Styles and Strategies in Self-Directed Learning. Journal of Educational Multimedia and Hypermedia, 1, 147-167.
Treisman, A.M. (1969). Strategies and models of selective attention. Psychological Review, 76, 283-299.
Wald, A. (1943). Tests of Statistical Hypotheses Concerning Several Parameters When the Number of Observations is Large. Transactions of the American Mathematical Society, 54, 426-482.
Weller, H.G., Repman, J., Lan, W., & Rooze, G. (1995). Improving the Effectiveness of Learning Through Hypermedia-Based Instruction: The Importance of Learner Characteristics. Computers in Human Behavior, 11(3-4), 451-465.
Yang, S. (2000). Information display interface in hypermedia design. IEEE Transactions on Education, 43(3), 296-299.
Zumbach, J. (2006). Cognitive Overhead in Hypertext Learning Reexamined: Overcoming the Myths. Journal of Educational Multimedia and Hypermedia, 15(4), 411-432.

©Academic Conferences Ltd
