
learning.now: at the crossroads of Internet culture & education with host Andy Carvin

April 13, 2007

What Goes Up Must Come Down?

Over the last couple of weeks we’ve seen the release of a pair of research reports that on the surface have nothing to do with each other. But as one report details how the U.S. is struggling to figure out the role of technology in the classroom, the other lays out how we’re falling behind technologically and competitively compared with other countries. Don’t tell me those issues aren’t related.

First, let’s talk about the recent findings from the U.S. Department of Education on the effectiveness of math and reading software. The report, mandated by Congress, involved 132 schools in nearly three dozen school districts, focusing on software used in grades one, four and six. And its findings weren’t as stellar as some would have liked. According to the report’s summary:

Test scores were not significantly higher in classrooms using the reading and mathematics software products than those in control classrooms. In each of the four groups of products - reading in first grade and in fourth grade, mathematics in sixth grade, and high school algebra - the evaluation found no significant differences in student achievement between the classrooms that used the technology products and classrooms that did not.

The report focused on educators who had been using software in their classroom for only one year, which may not be enough time for them to become comfortable with the tools. Class size appears to be a factor in certain cases as well.

The results reported here are based on schools and teachers who were not using the products in the previous school year. Whether products are more effective when teachers have more experience using them is being examined with a second year of data. The study will involve teachers who were in the first data collection (those who are teaching in the same school and at the same grade level or subject area) and a new group of students. The second-year study will also report results separately for the various products.

It’s also worth noting that they only examined the impact of 16 different software packages, though the researchers admitted “products were not a representative sampling of reading and math technology used in schools.” Might a larger sample have made a difference? The second year of the study might shed some light on this question, assuming they expand their sample size the next time around.

This particular report made news in education circles, but another study released the previous week reverberated in media reports worldwide. It was the annual Global Information Technology Report Networked Readiness Index, published by the World Economic Forum. (Don’t ask me if they abbreviate it as the GITRNRI, though I love a good acronym as much as the next guy.) The index, which has been published since 2001, examines relative economic competitiveness of countries based on multiple factors, including their telecommunications infrastructure, regulatory environment and the quality of their educational institutions. The higher your ranking, the more prepared your country is to compete internationally as new technologies come down the pike.

Generally, the U.S. has done well in previous editions of the index. While not #1 every year, it had held the top spot for the previous three years running. This year, it got knocked off the top by Denmark. And Sweden. And Singapore, Finland, Switzerland and the Netherlands, for that matter. In other words, we’ve fallen sharply from #1 to #7 on the list, ranked just above Iceland and the United Kingdom. Granted, we’re doing reasonably well given that 122 countries were on the list, but the drop is still seen as a bit of a fall from grace in world economic circles.

There’s been lots of discussion online as to the cause of the U.S. drop to seventh place. Researchers involved in the index placed some of the blame on the country’s current regulatory environment and aging telecom infrastructure, while praising our “tertiary educational institutions.” Translate that: we may not be the ultimate place to do business or use a cell phone, but we’ve still got the best higher education system in the world.

In contrast, no one seemed to go out of their way to praise our primary and secondary education systems. Of course, it’s not the job of the World Economic Forum to keep tabs on how well U.S. K-12 schools are using edtech, but that doesn’t mean there isn’t a connection. One of the reasons the U.S. higher education system gets cited in this report and others like it is because it attracts foreign students, many of whom stay in this country and create a brain drain on their own countries. While higher ed should be proud of its ability to attract foreign students, part of me wonders if it’s a bit of a back-handed compliment as far as K-12 is concerned. International reports citing America’s educational strengths almost always focus on higher ed, while offering faint or sporadic praise for our K-12 institutions. And it’s not because they’re trying to be polite.

There’s no doubt you can find the most advanced technology, innovation and experimentation in almost any U.S. university, but can the same be said of our K-12 schools? Is K-12 embracing technology to create the next generation of innovators that will help us remain competitive, or are we so focused on using technology to teach the basics that those opportunities fall through the cracks? Is our limited use of education technology a failure of the imagination? Can we learn from the Denmarks and Singapores of the world to understand what it means to prepare our students for tomorrow? And are we spending too much time arguing over the impact of software and Internet access on standardized test scores rather than looking at the bigger picture? -andy

Filed under : Policy, Research

Responses

But while we can say “there’s no doubt you can find the most advanced technology, innovation and experimentation in almost any U.S. university,” that technology is used for research, not pedagogy, for the most part. I don’t get the impression that universities are ahead of K-12 in the pedagogical application of technology at all.

“Might a larger sample [of products] have made a difference?”

It’s significant to note that the products selected represent a very narrow range of educational software. They focused on products that claim to improve standardized test scores and products from bigger publishers (who could support and train at numerous sites).

So the small sample size provides focus and a clearer conclusion from the results. It’s clearly not about ALL math and reading software, or, as some headlines have falsely concluded, evidence that educational technology doesn’t help at all. (CNN - “Study: No benefit going high-tech for math and reading”)

Tom makes a great point in stating that the great technology offered at the higher-ed level is not used in the “classroom,” but rather in research-oriented activities. Reading George Siemens’s posts (in their various locations) will tell us that there is just as much of a gap in the pedagogical use of technology in higher ed as in K-12.

The relationship between the two reports points to an aging philosophy on both levels, collegiate and secondary: the predominance of the singular expert-led classroom. Any software - and I will confess to not having read the entirety of the DOE report - used in the context of dictate-and-drill will most likely show limited results. I am probably preaching to the choir here, but opening up that software to include personal learning networks and collaborative efforts among student groups might make a marked difference.

As a former employee of a former company producing software for schools, I’ve been highly impressed by how little it offers. I should note that I am not an educator, but a PhD in cognitive psychology.

I consider the cost and amount of effort involved in implementing any of these programs to be far beyond their benefits.

I think there could be benefits in the long run, given good content and design, but so much of it is dependent on the amount of time that a program is in place and the support that the administration gives it, that the cost could easily outweigh any long term improvements.

The problem with any such program is that it can’t be measured student by student over time; measurement happens over time, regardless of the individual. But during that same time, there could be huge variations in the economic situation and population of the area being studied.

Thus, outcomes are not indicative of the actual learning results, which makes it difficult to assess not only the utility of the general approach but also the utility of any specific program.

Overall, I’d advise against these methods: they might offer promise in the long run, but to obtain any type of validation, they require long-term commitment.

I think we need to wait for next year’s results on the teachers who are using new technology programs in their classrooms before we criticize it. Teachers are learning to use technology with very little training or time.

Who is asking WHY we use educational technology in schools? The two reports above were done to answer completely different questions. I don’t think you can compare two reports that are asking different questions.

I think that the reasons for using technology in schools go WELL beyond NCLB reading and math goals.

You’re right, you can’t compare them, but you can contrast what they represent as far as broader trends go. On the one hand, the US report is an example of the endless debate we seem to have in this country over whether edtech raises test scores. It’s a debate I simply don’t see with the same obsessiveness in other parts of the world, in which more countries are accepting the reality that getting young people exposed to digital tools will help them be more competitive in the long run. They’re totally different studies with different goals, audiences and outcomes - but they still strike me as representing two different trends that are worthy of contrast. -andy

As a school-based technology specialist, I’m on the front lines regarding this research and the questions it raises. What I try to impress on both teachers and parents is that much of what we do with students and technology shouldn’t focus on using a specific software program. A main focus with instructional technology should be teaching those critical thinking and problem solving skills that they will need to have for our new global economy - no matter what software is involved. Students today need to be able to plan and execute online research, manage projects, make informed decisions - all using appropriate technology tools. Are we effectively teaching our students the skills needed for technologies not yet envisioned?
That should be the focus of somebody’s research… not whether a particular piece of software that will be outdated in 2 years can effectively “drill and kill” the minutiae needed for passing standardized tests!
