
examining the tests




If you really want to know how 'accurate' a test is, you have to interview children about why they answered the way they did.

Clifford Hill and Eric Larsen point out that scholars don't get involved in test analysis because of the difficulties in publishing close analysis of material from published tests. They're not going to spend a lot of time scrutinizing something when they can only publish their work in summary form--without citing exact test passages.

Hill & Larsen were offered test material that had been pilot-tested for the second edition of the Gates-MacGinitie Reading Tests but subsequently not used. This material was designed for use at the third-grade level. Statistical data were available on the responses of representative African American and European American children to this material, so they focused on test units where the gap in performance was especially pronounced. They later broadened this concern with ethnocultural differences to include Latino and Asian American children.

They examined the test material from three perspectives:

a) a linguistic approach combining the microlevel analysis of traditional linguistics with attention to such macrolevel concerns as the logical and pragmatic relationships between propositions;

b) a genre approach, utilizing a relatively large corpus to gain an understanding of the general structure and function of the material used on reading tests;

c) a discourse approach seeking to understand not only what is on the page but what goes through the heads of different children as they read and respond to the page.

In the book they provide reading passages and questions, so the reader can look at an item, think "this seems OK," and THEN read a child's interview response explaining why he answered the way he did. The children's responses are stunning, and the reader is left with the realization that the test is NOT testing reading ability but something else altogether. We see that kids have a remarkable ability to fit new material into their own existing schemas.

One thing the researchers did was probe what children KNOW about some particular feature (real-world schema) and then probe what children DO when a test unit presents it in a different way (textual schema).

What comes out is that children's schemas are so different from adults'--and adults wrote these tests. This has been a frequent complaint of mine: the reading passages seem OK, but the questions are loony, decidedly not from a child's perspective.

The study is far more complex than I'm indicating here, but it just blew me away, making me realize the high degree of sophistication (not to mention time and money) required to critique reading tests. The authors received National Institute of Education funding to do this. I wish they would publish a popular version.

Children and Reading Tests, Clifford Hill and Eric Larsen, Ablex, 2000

It's very pricey, but I recommend it. It is fascinating reading.

Here are a couple of articles online by Clifford Hill:

http://www.tcrecord.org/Content.asp?ContentID=10662

http://www.tcrecord.org/Content.asp?ContentID=10663
======

Susan
http://www.susanohanian.org