Day By Day

Thursday, May 05, 2005

Video Games Boost IQs -- Or Do They?

From Wired Magazine:

Dome Improvement

Pop quiz: Why are IQ test scores rising around the globe? (Hint: Stop reading the great authors and start playing Grand Theft Auto.)
By Steven Johnson

Twenty-three years ago, an American philosophy professor named James Flynn discovered a remarkable trend: Average IQ scores in every industrialized country on the planet had been increasing steadily for decades. Despite concerns about the dumbing-down of society - the failing schools, the garbage on TV, the decline of reading - the overall population was getting smarter. And the climb has continued, with more recent studies showing that the rate of IQ increase is accelerating. Next to global warming and Moore's law, the so-called Flynn effect may be the most revealing line on the increasingly crowded chart of modern life - and it's an especially hopeful one. We still have plenty of problems to solve, but at least there's one consolation: Our brains are getting better at problem-solving.
Why should this be?

Flynn has his theories, though they're still speculative. "For a long time it bothered me that g was going up without an across-the-board increase in other tests," he says. If g measured general intelligence, then a long-term increase should trickle over into other subtests. "And then I realized that society has priorities. Let's say we're too cheap to hire good high school math teachers. So while we may want to improve arithmetical reasoning skills, we just don't. On the other hand, with smaller families, more leisure, and more energy to use leisure for cognitively demanding pursuits, we may improve - without realizing it - on-the-spot problem-solving, like you see with Ravens."

When you take the Ravens test, you're confronted with a series of visual grids, each containing a mix of shapes that seem vaguely related to one another. Each grid contains a missing shape; to answer the implicit question posed by the test, you need to pick the correct missing shape from a selection of eight possibilities. To "solve" these puzzles, in other words, you have to scrutinize a changing set of icons, looking for unusual patterns and correlations among them.

This is not the kind of thinking that happens when you read a book or have a conversation with someone or take a history exam. But it is precisely the kind of mental work you do when you, say, struggle to program a VCR or master the interface on your new cell phone.

Over the last 50 years, we've had to cope with an explosion of media, technologies, and interfaces, from the TV clicker to the World Wide Web. And every new form of visual media - interactive visual media in particular - poses an implicit challenge to our brains: We have to work through the logic of the new interface, follow clues, sense relationships. Perhaps unsurprisingly, these are the very skills that the Ravens tests measure - you survey a field of visual icons and look for unusual patterns.

The best example of brain-boosting media may be videogames. Mastering visual puzzles is the whole point of the exercise - whether it's the spatial geometry of Tetris, the engineering riddles of Myst, or the urban mapping of Grand Theft Auto.

For someone who has spent much of his adult life working in educational institutions, this is fascinating stuff. I have noted, however, that writing skills in general, especially among male students (who spend significantly more time playing video games than young women do), have declined. I have also noted that many instructors have been modifying their courses to make them more visually stimulating and thus more like video games. I would also note that most video games are built around a frequent reward structure -- solve a puzzle or kill a critter and get a boost -- that could well translate into the experience of test taking.

Read the whole article here.

On a related note, there's this:

SAT Essay Test Rewards Length and Ignores Errors

By MICHAEL WINERIP

CAMBRIDGE, Mass.

IN March, Les Perelman attended a national college writing conference and sat in on a panel on the new SAT writing test. Dr. Perelman is one of the directors of undergraduate writing at Massachusetts Institute of Technology. He did doctoral work on testing and develops writing assessments for entering M.I.T. freshmen. He fears that the new 25-minute SAT essay test that started in March - and will be given for the second time on Saturday - is actually teaching high school students terrible writing habits.

"It appeared to me that regardless of what a student wrote, the longer the essay, the higher the score," Dr. Perelman said. A man on the panel from the College Board disagreed. "He told me I was jumping to conclusions," Dr. Perelman said. "Because M.I.T. is a place where everything is backed by data, I went to my hotel room, counted the words in those essays and put them in an Excel spreadsheet on my laptop."

In the next weeks, Dr. Perelman studied every graded sample SAT essay that the College Board made public. He looked at the 15 samples in the ScoreWrite book that the College Board distributed to high schools nationwide to prepare students for the new writing section. He reviewed the 23 graded essays on the College Board Web site meant as a guide for students and the 16 writing "anchor" samples the College Board used to train graders to properly mark essays.

He was stunned by how complete the correlation was between length and score. "I have never found a quantifiable predictor in 25 years of grading that was anywhere near as strong as this one," he said. "If you just graded them based on length without ever reading them, you'd be right over 90 percent of the time." The shortest essays, typically 100 words, got the lowest grade of one. The longest, about 400 words, got the top grade of six. In between, there was virtually a direct match between length and grade. [Emphasis mine]
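To see what a length-only "grader" would look like in practice, here is a minimal Python sketch. The word counts and scores below are invented for illustration -- they are not the College Board's published samples -- and the linear length-to-score mapping is my assumption, not Perelman's actual method; it just shows how a grader that never reads the essays could still match the assigned scores most of the time.

```python
# Hypothetical illustration of Perelman's length-vs-score observation.
# The (word_count, score) pairs are invented for demonstration; they are
# NOT the College Board's actual sample essays.
essays = [
    (105, 1), (150, 2), (180, 2), (220, 3), (260, 4), (280, 4),
    (310, 4), (330, 4), (340, 5), (360, 5), (400, 6),
]

def predict_score(words, lo=100, hi=400):
    """Length-only 'grader': map word count onto the 1-6 scale by
    linear interpolation between the shortest (~100 words) and
    longest (~400 words) essays described in the article."""
    frac = (words - lo) / (hi - lo)
    return max(1, min(6, round(1 + frac * 5)))

hits = sum(1 for words, score in essays if predict_score(words) == score)
print(f"Length-only grader matched {hits}/{len(essays)} scores "
      f"({100 * hits / len(essays):.0f}%)")
# With these toy numbers, the rule is right about 91% of the time --
# roughly the "over 90 percent" figure Perelman reports.
```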

He was also struck by all the factual errors in even the top essays. [Emphasis mine]

Read the article here.

It appears that even as scores improve, the skill sets being tested, or the standards by which they are evaluated, are changing. Of course, this all ignores the persistent question of just what it is that "g" measures.

Maybe the Ravens test is a good measure of how well you play video games, how well you do in visually intensive classes that mimic video games, or how well you take tests that to some extent replicate the reward structures of video games -- but a poor measure of how well a student can assemble complex sets of information into a coherent form and communicate the result.

