- Subject: Re: boycott
- From: "George N. Schmidt" <Csubstance@AOL.COM>
- Date: Wed, 29 Sep 1999 06:04:13 EDT
- Reply-to: Assessment Reform Network Mailing List <ARN-L@LISTS.CUA.EDU>
- Sender: Assessment Reform Network Mailing List <ARN-L@LISTS.CUA.EDU>
September 29, 1999
In a message dated 9/28/99 11:06:31 AM, Ddeliberto@AOL.COM writes:
<< I agree that the standards do state that tests should not be used as the
measure for determining promotion/graduation but this is not the same thing
as saying that students should not be required to pass tests as one of many
other graduation requirements (course work, portfolios, etc.). I realize
this is a very fine distinction and one that is certainly open to abuse which
is why my message includes the phrase "used improperly" in the context of
this discussion of requiring students to pass tests. I also stated that
either students should be required to pass the tests, or the tests should not
be a requirement at all. What is the point of requiring students simply to
take a test? My statement about requiring students to pass a test is in the
same vein as requiring students to pass certain courses to graduate--not
merely take them. Hope this clarifies my intended message.
If anyone still feels that my view somehow violates the standards of my
profession, please advise me citing the specific standard, source, and page
number where I can take a closer look and better respond to this claim or if
necessary alter my way of thinking on this issue. >>
As noted earlier (last Spring) here and elsewhere:
1. The single point indicator has been used for high stakes in Chicago since
1996 and now in New York City. There is probably a lot of this elsewhere,
too, but the best data seem to come from here in Chicago and now from New York.
In both cases, media-hungry mayors (and equally ruthless school chiefs) are
pushing a few simple-minded cliches as policy -- "standards",
"accountability", and "end social promotion."
In Chicago, the result has been a radical increase in the dropout rate, and
in New York we have the recent scandal of the 8,600 summer school students
who didn't really have to go. In Chicago, the test was the ITBS and in New
York the CTBS (if memory serves). In both cases, the school systems
are using percentile scores converted to "grade equivalents" to promote or
retain kids and (in several cases here in Chicago) to fire teachers and
principals in "failing" schools. In Chicago, when this practice was
challenged, Riverside Publishing supported the abuse, saying that you
shouldn't use a single point indicator for such high stakes consequences, but
if you were going to then the Iowa Test was certainly the test to use.
The test industry, to date, has not protested these abuses. Nor have the
professors (e.g., Hoover at Iowa) said anything about these abuses. Given all
of the standard warning labels on standardized norm-referenced tests, this
silence has been astonishing. Here in Chicago, if one major test authority were to
clearly and on the record challenge the unprofessional use of the Iowa "grade
equivalent" for high stakes student, teacher and school "accountability" the
whole tone of the debate would be changed. But last winter, when FairTest and
PURE (a local parent group) brought out that the Iowa test guidelines
themselves warned against this use of the tests, Riverside Publishing's PR
department milked the opportunity to plug their tests, and the professors
behind the Iowa sat in Delphic silence throughout the whole thing.
By next year, now that the New York thing has come in, the whole Bracey
Report can be devoted to ethical lapses on the part of test developers,
publishers, and professors. As I've noted before, the body counts keep rising
while the silence remains deafening.
2. There are many reasons for administering tests without "consequences" (or
at least high stakes consequences), and a good analogy might be to the annual
physical we require of student athletes. Most "pass" and the only
consequences are when problems are identified. The "tests" are simply for
screening, with a diagnostic potential.
Surveys and other things are also done the same way. Prior to the "high
stakes" era, most of our testing helped us identify student strengths and
weaknesses, not to penalize students, schools and teachers. Twenty-five years
ago, we were seriously debating paying teachers more for teaching in the
roughest inner city schools because we recognized that the job is harder.
Recently, in Chicago at least, we've been firing inner city teachers for
having been unlucky enough to devote their careers to working in places where
the test scores are bound to be low.
I don't want to get lost in analogies, so that one about physicals might
confuse us. Most of my teaching experience has been with very low achieving
9th and 10th graders. For years we used a version of the Stanford Diagnostic
(I think it was) to get a read on the various areas where our students needed help.
We also saw varied uses for other tests, including the "highest" ones. In a
number of inner city high schools where I worked, we required students (often
with locally generated financial subsidies -- e.g. M&M sales -- since the
cost was high) to take the Advanced Placement test if they were in an AP
course, even if we knew the students would only score "1" or "2" (yes, the
best students in certain communities can be that "low" on the AP after eight
months' instruction, and considering the demographics and other realities
that in itself can be a plus). The reason was to give them the experience of
that kind of testing. (I had inner city students who scored up to "4" on the
AP English Literature exam, but few; given the creaming effect of our magnet
schools and other factors, in our "general" high schools that was wonderful,
although those judging on some absolute scale would disagree...).
That's enough for now. Hope these observations help.
George N. Schmidt
5132 W. Berteau
Chicago, IL 60641