

Crosstabs Availability

Cross tabulations with these demographic data are available at no charge for purchasers of the report. Please contact survey@projectred.org for a copy of the 317-page Word document.
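For readers who wish to reproduce this kind of analysis on their own survey data, a cross tabulation is straightforward to build with standard tools. The following is a minimal sketch using pandas; the file name and the column names ("grade_level", "improved_test_scores") are hypothetical placeholders, not fields from the actual Project RED data set.

import pandas as pd

# Load one row per responding school (hypothetical file name).
responses = pd.read_csv("survey_responses.csv")

# Count respondents by demographic group and reported outcome,
# with row and column totals in the margins.
table = pd.crosstab(
    responses["grade_level"],
    responses["improved_test_scores"],
    margins=True,
)
print(table)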

Appendix B: Research Methodology and Data Analysis


Research Challenges

It could be argued that the self-reported data on outcomes, such as test-score improvement, dropout rate reduction, and course completion, is self-serving, and that respondents to a survey on “technology-transformed schools” might be biased in favor of reporting strong outcomes. We have therefore looked within the survey population for differences, including the differences between 1:1 schools and those with higher student-computer ratios, differences that often turned out to be startling. Deeper, more detailed study, ideally including automated system detection of student usage linked to education outcomes, would be enlightening.

It could also be argued that this respondent data set is self-selected and therefore not representative of all schools. It is certainly likely that the respondents are biased in favor of technology and responded to the survey as technology enthusiasts. However, the public school respondent base is surprisingly representative of the public school universe, as illustrated in the charts above.
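The within-population comparison described above, between 1:1 schools and those with higher student-computer ratios, can be made concrete with a standard two-proportion z-test. The following is a minimal sketch using only the Python standard library; the counts are hypothetical illustrations, not figures from the survey.

import math
from statistics import NormalDist

def two_proportion_z(successes1, n1, successes2, n2):
    """Two-sided z-test for a difference between two proportions."""
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical counts: 120 of 200 one-to-one schools vs. 90 of 200
# higher-ratio schools reporting improved test scores.
z, p = two_proportion_z(120, 200, 90, 200)
print(f"z = {z:.2f}, p = {p:.4f}")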

Recommendations e Project RED study, like most research, has revealed many questions for further investigation, including the following: • Why do schools that report the use of gaming, simulations, collaborative tools, and virtual field trips also report greater educational outcomes? • What are teachers and students actually doing in technologyintegrated intervention classes that is leading to the success reported by respondents?

• Several questions have multiple variables embedded. In the future, we will simplify each survey question so that it reflects only one variable, for ease and clarity of analysis.

• We originally intended to study technology-rich school environments, and our survey was designed to address that population. However, our respondents are more diverse. In the future, if we continue to survey a broader population, we will reword some of the survey questions and potential responses to account for schools without robust technology implementations.

• Ideally, we need to find ways to verify self-reported data. Human collection of data in the field is very expensive, so we will look for automated ways to at least collect student usage data rather than relying on self-reported data.

• Our survey respondents are diverse and include all grade levels and school configurations. Although our total population is reasonably large, the population within each subgroup was too small to validate many serendipitous findings. Follow-up research could be conducted on specific demographic populations, grade levels (elementary, middle school, high school), or school types (private, public, charter, etc.). By narrowing the focus of future studies, it may be easier to garner a large enough population to validate the findings (see the sketch following this list).
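The subgroup-size concern in the last item above can be quantified. As a minimal sketch, the function below estimates the per-group sample size needed to detect a difference between two reported-outcome rates; it uses only the Python standard library, and the 55% versus 45% rates in the example are illustrative assumptions, not survey results.

import math
from statistics import NormalDist

def min_sample_size(p1, p2, alpha=0.05, power=0.80):
    """Per-group n needed to detect the gap between proportions
    p1 and p2 with a two-sided test at the given alpha and power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_beta = z.inv_cdf(power)           # value for the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a 10-point gap (55% vs. 45%) at these settings requires
# roughly 390 schools per group, which illustrates why small subgroups
# cannot validate modest differences.
print(min_sample_size(0.55, 0.45))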

