December 9, 2009

Getting Yet Another Thing Wrong

It's no secret that The Feds have done little to improve student achievement through their heavy-handed meddling, but you may be surprised to learn that's not the only thing they're screwing up at your local public school:

In the past three years, the government has provided the nation's schools with millions of pounds of beef and chicken that wouldn't meet the quality or safety standards of many fast-food restaurants, from Jack in the Box and other burger places to chicken chains such as KFC, a USA TODAY investigation found.

No doubt that's because if Jack in the Box starts making their customers sick, they tend to start losing customers.  The public schools often can't lose their customers unless those customers sell their house and move or pay twice for education.

For chicken, the USDA has supplied schools with thousands of tons of meat from old birds that might otherwise go to compost or pet food.

That certainly inspires confidence when the only other uses for the food the Feds are serving to children are pet food and compost.

But don't you worry, Congressman Miller is on the job.  Er, will be on the job.  Someday.

"If there are higher quality and safety standards, the government should set them," says Rep. George Miller, D-Calif., chairman of the House Committee on Education and Labor. "Ensuring the safety of food in schools is something we'll look at closely."

I guess he was too busy grandstanding at Reading First hearings for political gain to worry about whether poor children were being fed pet food for their free and reduced-price lunches. (And whatever happened to all those Reading First indictments we were promised?)

Even McDonald's has enough sense not to feed their patrons pet food.  And, they manage to serve you a hamburger for only a dollar.  A non-pet-food-grade hamburger, I might add, that has been voluntarily tested for safety at a rate ten times greater than the Miller burgers.

McDonald's, Burger King and Costco, for instance, are far more rigorous in checking for bacteria and dangerous pathogens. They test the ground beef they buy five to 10 times more often than the USDA tests beef made for schools during a typical production day.

I simply can't wait until the Feds are in charge of my health care decisions too.

December 7, 2009

Why You Can't Trust the New York Times on Education (or anything scientific for that matter)

Erica Goode, the NYT environment editor, admits the NYT doesn't understand science:

“We here at The Times are not scientists. We don’t collect the data or analyze it, and so the best we can do is to give our readers a sense of what the prevailing scientific view is, based on interviews with scientists” and the expertise of reporters...


That comment came in response to the recent Climategate scandal, which the Times has, not unexpectedly, underreported.  But it also explains why the Times gets so many education stories wrong too.

November 19, 2009

Things we don't know

There is lots of stuff we don't know. Lots of stuff without firm scientific support. Yet in many areas without firm scientific support, we often encounter zealous advocates who either believe we know much more than we do or are confused as to what we actually know.

The Big Bang Theory doesn't explain what caused the universe to come into being.

How did the universe come into being?  There is plenty of good observational evidence for the Big Bang theory.  But that doesn't explain how the universe came into being in the first place.  What happened before the Big Bang?  Science is unable to describe the universe before the Planck Epoch (when the force of gravity separated from the electronuclear force).  Currently, we don't know what caused the Big Bang or how the universe came into being.

The Theory of Evolution doesn't explain how life originated.

How did life start?  There is plenty of observational evidence for the theory of biological evolution.  But that doesn't explain how life came into being in the first place.  What happened before there were organisms?  Science is unable to explain how organisms came into being in the first place.  There is no scientific consensus on how life began.  Currently, we don't know how life began.

(FYI: Intelligent Design is one argument for how the universe and life began.  It has about the same scientific support as any other argument for how the universe and life began.  That is, none. Of course, Intelligent Design isn't exactly scientific.  But then again science hasn't provided any answers yet either.)

The Theory of Global Warming is infected with politics for the time being.

In a few decades we might know whether the current scientific consensus and environmental hysteria comport with the data.  To the extent there is a consensus, the science remains shaky--far shakier than what we know about the origins of the universe and of life.  Far shakier than the consensus scientists would like you to believe.  (I think many of them don't even understand why "consensus science" isn't actually science.)

(Long Version)
We don't know how to reliably educate low-IQ/low-SES children

The science is very thin on improving student achievement outside of the elementary school years. Most theories aren't even based on actual testing of an intervention.  Most theories are based on observations of correlational data on broad proxies for variables believed to affect education (poverty, teacher efficacy, availability of free lunch, availability of health insurance, and the like), not so much on actual interventions designed to improve or ameliorate these variables.

So what is government policy, like Race to the Top, based on?

Sunshine and lollipops mostly.

November 18, 2009

Your tax dollars hard at work

Virginia is going to analyze why there is a disproportionately low representation of minority students in gifted education.

"Virginia is proud of both the high standards of our educational system and the wealth of diversity in our communities," Governor Kaine said. "As we continue to improve on our gifted education programs in particular, it's critical we assess any disproportionate barriers to enrollment so we can ensure students of all backgrounds have the opportunity to participate."

Data reported by school divisions to VDOE show that while African-Americans make up 26 percent of the statewide student population, only 12 percent of students identified as gifted are black. Hispanics make up nine percent of the student population and five percent of students identified as gifted.

Gee, what could possibly be the reason behind this disparity? 

Well, let's see.  Giftedness is largely determined by performance on IQ tests. So, let's take a look at the relative performance between whites and blacks on IQ tests.  Better yet, let's break performance out by socio-economic status.

The chart shows a 10 to 16 point gap between white and black performance for IQ (that's about a standard deviation) across the SES spectrum.  Blacks in the highest SES decile perform about as well as the 50th percentile White (5th decile).  Raising the SES of Blacks wouldn't close the IQ gap even if there were a causal connection.
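Since IQ scales are conventionally normed to a standard deviation of 15 points, converting a point gap into standard deviation units is a one-line calculation.  A minimal sketch (the 15-point SD is the standard test-norming convention, not a figure taken from the chart):

```python
# IQ scales are conventionally normed to mean 100, SD 15.
IQ_SD = 15

def gap_in_sd(points):
    """Convert an IQ-point gap into standard deviation units."""
    return points / IQ_SD

# The 10 to 16 point gap cited above:
print(f"{gap_in_sd(10):.2f} to {gap_in_sd(16):.2f} SD")  # roughly 0.67 to 1.07 SD
```

Which is why a 10-16 point gap gets summarized as "about a standard deviation."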

Mightn't this explain all or most of the discrepancy?  It took about ten seconds of googling.  Virginia needs about a year and a half to find a more politically correct explanation.

The study - which is being conducted with technical assistance from the Regional Educational Laboratory Appalachia - will be completed by Spring 2010.

There is a similar IQ gap between the performance of Asians and Whites which also explains why Asians are disproportionately represented in gifted classes.  Hopefully, the study will address that problem as well.

Must be discrimination. Some virulent form of discrimination that's only present in the nasty U.S.  And Toronto too. In fact I can't find a single country in which these same IQ gaps aren't present and don't manifest themselves on achievement tests (which are actually IQ tests.  Shhhh don't tell anybody).  So maybe that theory doesn't hold up.

In any event Erin Dillon from The Quick and the Ed is looking for a solution.

Racial disparities in gifted vs. regular education classes seemed obvious enough to me when I attended public schools in Virginia. One can only hope that this study will put some momentum behind addressing those disparities.

How exactly does one address those disparities?

November 13, 2009

Pot. Kettle. Bracey.

The last and thankfully final Bracey Report attempts to analyze the research support underlying the following three assumptions about how to reform education.

1. High-quality schools can eliminate the achievement gap between whites and minorities.

2. Mayoral control of public schools is an improvement over the more common elected board governance systems.

3. Higher standards will improve the performance of public schools.

As worded, the answer to all these questions is that the research is insufficient.  But notice how for questions 2 and 3 any amount of improvement will do, while for question 1 only improvement that will "eliminate the achievement gap" counts.  Such improvement would have to be on the order of about a standard deviation increase in non-Asian minority performance with no increase in white performance.  A tall order indeed.  In fact, such a tall order that no in-school or out-of-school intervention has ever achieved such results for the general population--even the ones that Bracey supported and touted in this very report.

This is Bracey at his most dishonest--glaringly dishonest.  Bracey had an agenda and he didn't mind bending the facts to fit his preferred outcome.  He wasn't an honest researcher and this will be his lasting legacy.

A more honest researcher might adopt a more neutral standard of achievement such as "an increase in the performance of all students by an educationally significant amount (0.25 standard deviation)."  That would be a laudable goal and also would serve to reduce the achievement gap.  It's also the generally accepted standard in education research.

Under such a standard, Bracey would still get to criticize mayoral control and higher (national) standards as not having a sufficient research base; however, he'd have to acknowledge that there is a sufficient research base for higher-quality schools under this standard, at least in the elementary school years.

Another problem with Bracey's reports is that they are peppered with his own assumptions about how to reform education that don't have a sufficient research base or are contradicted by the data.  Here are a few.

Students attending American schools run the gamut from excellent to poor. Well-resourced schools serving wealthy neighborhoods are showing excellent results. Poorly-resourced schools serving low-income communities of color do far worse. (p. 2)

Schools serving low-income communities of color tend to have resources above the median school.

I said above that if there are to be more high-quality schools (or at least, “high-quality” schools in terms of high or rising test scores), they will have to be developed in low-income neighborhoods. (p. 3)

Bracey implies that schools in higher income neighborhoods are doing a fine job educating low-income students.  They aren't.

Before taking up the question of whether schools alone can remedy the achievement gap for poor children, we have to ask what is known about the effect of poverty on children. What are some of the out-of-school factors that contribute to poor children’s lower performance? (p. 4)

None of the studies Bracey directs us to, especially Berliner's, are capable of determining the causal link that Bracey implies.  Bracey then proceeds to give us a few pages of various ailments and problems associated with poverty and attempts to draw a bleak picture of poverty's causal effect on student achievement.  He has to resort to anecdote because the data paint a much less bleak picture.  Poverty, or more accurately low socio-economic status (SES), is correlated with low student performance.  But the amount of variance in student performance attributable to variations in SES is only about 18%.  That means that 82% of the variance is attributable to non-SES factors.  Bracey knew, or should have known, this, but misrepresents the data anyway.
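The variance figures follow directly from squaring the correlation coefficient.  A minimal sketch, assuming an SES-achievement correlation of about r = 0.42 (a hypothetical value chosen to be consistent with the 18% cited; the exact correlation varies by study):

```python
# The share of variance a predictor explains is the square of its
# correlation with the outcome (r-squared).
r = 0.42  # assumed SES-achievement correlation (illustrative)

explained = r ** 2           # variance attributable to SES
unexplained = 1 - explained  # variance attributable to non-SES factors

print(f"SES explains {explained:.0%}; non-SES factors account for {unexplained:.0%}")
```

A correlation that sounds substantial (0.42) still leaves more than four-fifths of the variance unexplained, which is the point being made about poverty's limited explanatory power.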

These disadvantages all operate to attenuate achievement in schools. The question is, can “high-quality” schools alone offset them? (p. 7)

Bracey then looks at one ham-fisted study, Harlem Promise Academy, as a refutation.  He ignores the other studies which have shown results larger than the 0.18 standard deviation gap attributable to poverty effects.

Bracey does a better job with the mayoral control and high standards issues.  But the problem is that on the poverty/SES issue Bracey's non-research-based views are no better than those of the proponents of mayoral control and high standards.

Pot and Kettle meet Bracey.

November 5, 2009

Today's Quote

The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data.

-- John Tukey (1986)

Another Defense of SLA?

Joe Bires of  Ed Tech Leadership offers a defense of the SLA position paper assignment:

Your post should be titled “Why you should never publish anything on the Internet (or anywhere for that matter)”. The minute you publish something you open yourself up for attack, not just feedback.

I have no problem with your critique of Chris’s assignment as Chris is a professional putting his work out there for you to critique. I don’t agree with most of your comments, but I do feel the assignment wasn’t as clear as it could have been.

However, the lack of a rubric being posted with this assignment with clear standards makes critiquing the students writing as you do unfair. Look at your post again and code every time you put down students directly or indirectly. Clearly while you may not have wished to put down students, that is exactly what you are doing.

I believe the overarching purpose of posting these essays was to solicit feedback on the students’ ideas and not the written form those ideas took. When you look at student work and give feedback it is different than looking at the work of adult professionals and you must confine your feedback about students and their work to the scope of the assignment’s parameters (no matter your opinion of those parameters).

Your post reminds me of the quote:

“The turtle only makes progress when he sticks his neck out”

- James Bryant, educator

To conclude, if turtles took your advice on producing first drafts they would make zeroth progress because the feedback they would receive would discourage their further effort.

Downesian nemesis TracyW provides a good rebuttal to Joe's "open yourself up to attack" and "the value of feedback" arguments which all censor advocates should read.  I'll move on to the remainder of Joe's argument.

As a preliminary matter I note that Joe offers no defense to the students' writings and the deficiencies I noted.  Is there a valid defense?  I don't think there is one, but I'm happy to entertain that argument should some brave soul desire to make it.

Also, what is the point of this:

I don’t agree with most of your comments, but I do feel the assignment wasn’t as clear as it could have been.

Allow me to offer a similar rebuttal:  I disagree with Joe's comments.  And, now we've entered the surreal realm of a Monty Python skit.

What is the point of offering an opinion without substantiating it?  This is exactly what the SLA students did.  So, I'm guessing that when Joe says the assignment wasn't clear, his implicit premise is that the assignment was so vague that it was acceptable for the students to just offer up a series of unsubstantiated opinions.  Joe then moves the goalposts again when he next argues that since the "overarching purpose" of the assignment was to solicit feedback, the written form of the students' work should not be judged.

This is the Tom Hoffman diminished expectations counter-argument. Here is Joe's argument made explicit.

Joe, like Tom before him, is claiming that Chris' assignment was "vague,"  and that the words Chris used in his instruction meant something other than their ordinary and customary meaning as I claimed.  The actual assignment should be understood to be commensurate with the scope of the resulting work product of the students.  The students didn't support their opinions; therefore, the assignment did not require them to provide such support.  The students' work was riddled with grammatical and usage errors and did not conform to the standard essay format; therefore, Chris' rubric could not have been concerned with this aspect of the students' work.  Moreover, my standard is an adult professional standard, not a high school grade standard.

Let's recast Chris' assignment to make it both crystal clear and in accordance with Joe's suggested rubric.

You are to write a two page opinion paper creating your vision of what school should be. The main purpose of the paper is to solicit feedback from your fellow students. You are not required to provide support and substantiation for your opinions, even though this will hinder the reader's ability to understand the basis of your opinion (to understand why you think the way you do) and will diminish the quality of the feedback. Following the standard essay format of introduction, body, and conclusion is also optional. In fact, presenting your opinion in a logical order is not required, nor do you have to separate your ideas into traditional paragraph format. Also, although the goal of the assignment is to solicit feedback, it is not important for you to communicate your opinions coherently to the reader so that they are readily understood. Therefore, standard grammar, usage, and spelling rules need not be adhered to. Lastly, you may keep your papers real by peppering them with colloquialisms and other informalities typically associated with spoken language.

Your paper should consider the following points:


According to Joe, this was really Chris' assignment and the students' papers should be critiqued accordingly.  As a result, the students' papers are in compliance with the assignment and my critique is off-base.

It should also be clear why Chris chose to use the term "position paper" instead of all this clarified verbiage.

Never mind that the term "position paper" is not only not vague, but also has an established meaning, and that the interpretative rule of contra proferentem (construing ambiguity against the drafter) dictates that we should use this established meaning.

Let me also suggest that this is an argument that progressive educators are better off not making in a public forum if they wish to achieve any credibility with the public.

November 4, 2009

D-ed Reckoning Enters Edu-blogger Hall of Fame

Beloved Uncle Jay and crazy Aunt Jay are picking the best education blogs of  2009.  But that's not important right now, what is important is:

Valerie will select ten and I will select a different ten for our 2009 list of 20 best blogs, which we hope to post by December. You will note a list of eleven blogs on the left side column of this blog [Ed. -- including yours truly]. They are previous winners of this incredible honor, and so will remain posted there forever and are not eligible for the new list. I plan to add my ten selections to that left hand column, and make each annual contest a search for blogs we have not celebrated before.

I read that "posted forever" and "not eligible for the new list" as being the equivalent of being inducted into the edu-blogger hall of fame. Woo hoo.  And, I haven't even retired yet.  Although, I think I can safely start coasting now.

Here are my fellow inductees with my brief commentary.

1. EdTech Assorted Stuff -- Redundant. There are better education technology and progressive education blogs (odd that those two always seem to go together).

2. Board Buzz -- Not interesting. Doesn't hold my attention. The fatal flaw of many blogs run by large organizations looking to avoid controversy.

3. The Core Knowledge Blog -- Always interesting. Avoids hackery (See Edwise below). Pondiscio is a real blogger and Core Knowledge is smart enough to allow him to express his opinions.

4. Eduwonk -- Andy has good insider stuff. Sometimes too elliptical and insidery for us outsiders. Doesn't blog often enough.

5. EdWize -- Often borders on hackery, no doubt due to union affiliation. Otherwise, not enough non-hack stuff to keep me interested.

6. Gotham Schools -- Good reporting on New York City stuff. Skoolboy was a good blogger at the now defunct eduwonkette. But, for some reason this blog hasn't made the cut to be a regular read. Maybe it's a signal to noise problem.

7. Joanne Jacobs -- Still going stronger than ever after all these years. The only real journalist of the lot. Has mastered the blog format.

8. Schools Matter -- The Bill Maher of edubloggers, if Maher were humorless and cut and pasted most of his material. The commentary is entirely predictable, conclusory, and never supported.

9. Susan Ohanian -- Is this even a blog? Redundant with School Matters. The opinions are identical. Pick either. Or better yet pick neither.

10. This Week in Education -- Best all around education policy blog. Russo rounds-up the news, is interesting, includes actual reporting, offers opinion, gets insider scoop, and consistently gets my first name wrong. Would be improved by jettisoning the dull Thompson.

Now that I've pulled a Michael Jordan-like acceptance speech, allow me to redeem myself by commenting on all the things I do wrong. My posts are sporadic, I take long breaks when I don't feel like writing (the problem of doing it for free and bereft of an academic sinecure), my posts are far too long, I don't edit or proofread enough, I'm too lazy to spellcheck regularly, and I'm often too abrasive.

I'm sure there are lots more which my critics should merrily point out in the comments.

November 3, 2009

A defense of SLA?

Marcie Hull, SLA's tech coordinator, commented on my posts on SLA. It's a serious comment and deserves a serious answer. (I've taken the liberty of cleaning up Marcie's lengthy comment, which was typed on an iPhone.)

Numbers... Seems to me a silly tradition to measure a persons abilities. Why not watch them or give them excellent mentors, while listening to them and being sincerely interested in what they are saying.

In this case the numbers are percentile scores for the number of students who fall into each of the four proficiency categories on Pennsylvania's simplistic state assessment. So, in this case the numbers represent objective data on student performance. What's wrong with objective data? (Dick Shutz's objections notwithstanding which I acknowledge.)

The purpose of my post was to show the problem with the numbers which showed that all SLA 11th graders were proficient or above in writing. The actual student writing samples told a very different story. I also notice that you did not defend the quality of the students' writing. That is telling.

Numbers also seem to be a way to keep out students with many different learning styles, therefore keeping the old elite system in a safe place, far away from creativity.

Actually, I think that SLA's selective admission system does the bulk of the work of keeping out "students with many different learning styles" and keeping the elite system in place. Not that there's anything wrong with providing appropriate opportunities for the academically meritorious students.

Knowing the rules to break the rules is an old idea. We need more rule breakers if we want to see quick change.

If we want more rule breakers, then it follows that we want more people who know the rules in the first place, according to your own argument. Did the SLA student writers know the rules of grammar, usage, and argumentation? Are they in a position to break the rules even though they haven't learned them yet? It appears that SLA isn't providing the world with more rule breakers, just people who don't know the rules.

You are right schools need to change. We are attempting that change, 3 years is a very short time to be judged upon. Come see us in 5.

Schools don't need just change; schools need effective change. And, caring is overrated. What these kids really need is effective teaching. So, is the teaching at SLA effective? Not based on the examples of student learning that I've seen so far. We'll see in two more years.

Better yet come to the school and see what caring for a student can do for their academic success.

That's such a 20th century mindset. I made a 21st century visit. Isn't that what it's supposed to be all about?

And isn't that the way a brain works: if your needs are met you are able to intellectualize? Look at Maslow, that is what I follow.

Yes, let's look at Maslow the man; the pyramid, not so much.

Sir, don't you see enough negativity in this world of education why would you pick out our community that strives for rigor and happiness?

Because Chris posted (and good for him for doing so) these examples of student work and asked for constructive criticism. Transparency is a 21st century virtue. Although, quite honestly, I don't think SLA is quite ready yet for 21st century transparency.

What is it that you are expecting from a brand new school?

At a minimum to live up to the things it claims to be doing in its family Handbook. Is it OK to expect less?

What was your school like? Did it compare to your standards or did you make your own?

My high school was a traditional high school with all the faults and problems of traditional education. Sadly, I do not believe that SLA has improved on the failings of the traditional model. The "changes" SLA has made are superficial with respect to student learning, like many reform models that have preceded it.

I came out of one of the best high schools in the country in 1991. What scares me, even with all the money in that district not all learners were given equal opportunities to learn. They were tracked low and forgotten about and all the concentration was put on the students that could score high in math & science. Some of them are still living in their parents basements.

Agreed. This is a problem. But is the problem the high school's for not being able to deal with under-prepared students or the elementary and middle schools that under-prepared them? And, let's just limit the discussion to the mountain of kids at the margin who came to school on a regular basis and who do not possess a cognitive impairment.

So, I ask you what do these numbers mean? What do they mean to parents? What do they mean in higher education?

The numbers mean that Pennsylvania has set the bar way too low. Proficient students under Pennsylvania's standard apparently lack many skills foundational to writing ability.

Parents should be aware of this problem.

The numbers are also not reliable indicators that these students are ready for the workload inherent in most institutions of higher education. Higher education has been foreclosed to many of these students. Sadly, they don't know this yet.

Also, what kind of conversation or dialog do you want to get into with the staff at SLA? What is your purpose for reporting this? Anyone can cause a conflict.

The real conversation you need to be having is with your students, not with me. I'm just shining the light on the problem. No one from SLA or elsewhere has defended SLA on the merits yet. I have heard some excuse making. Your argument seems to be that good intentions are good enough. I reject that opinion out of hand.

I would rather talk about solutions, kids & alternative types of instruction so that all students can feel success.

There isn't a single solution listed in your comment.

And what good is feeling success? Isn't it more important to be a success?

We are an easy target, seems to me you are upset about something and you are hiding behind this blog post instead of just writing your beef. Where's the beef?

Chris asked for comments. I provided comments. You don't seem to agree with my comments, but you aren't exactly defending the students' work either. Take a position and defend it with supporting evidence. Making vague excuses and offering unsupported opinion is not a position.


I think I finally figured out why SLA students weren't able to write a position paper.

A Tale of Two Cities

Now that Canada's largest school district, Toronto, has taken the bold step of reporting its testing data for racial and economic subgroups, we are able to compare its achievement gaps with the achievement gaps of other large cities that also report achievement data for subgroups.

Let's compare Toronto to Philadelphia and see who has the larger achievement gaps.  Let's see how Toronto, with its compassionate and generous Canadian-style social policies, compares to Philadelphia, with its backward and stingy U.S.-style social policies.

Better yet, I'm not going to tell you which city is which; see if you can guess from the achievement gaps, which I've given as fractions of a standard deviation.*  Negative numbers indicate that the subgroup performed worse than whites; positive numbers indicate that the subgroup performed better than whites.

City #1
Black-White Achievement Gap: -0.85
Hispanic-White Achievement Gap: -0.73
Asian-White Achievement Gap: +0.28

City #2
Black-White Achievement Gap: -0.73
Hispanic-White Achievement Gap: -0.70
Asian-White Achievement Gap: +0.14

Which city is Philadelphia and which is Toronto?

Bonus Question:  What does this analysis suggest for improving results and policies drawn on international comparisons between diverse countries like the U.S. and more racially homogeneous countries like those in Europe and Northeast Asia?

*What I did was to convert the reported percentile scores into z-scores (which is a more accurate way of analyzing normally distributed data) for each subgroup for math and reading combined (sixth grade for Toronto and fifth grade for Philadelphia).  Then I calculated the difference between the z-scores of the Asian (East), Black, and Hispanic (Latin) subgroups and the white subgroup. A difference of 0.25 is considered to be educationally significant.  A difference of 0.75 is considered to be a large difference in the social sciences; most interventions are not capable of remedying an effect size of this magnitude.
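That conversion can be sketched in a few lines.  Assuming each reported figure is the percentage of a subgroup scoring at or above a fixed cut score, the inverse normal CDF locates the cut within each subgroup's distribution, and differencing those locations gives the gap between subgroup means in SD units (the sample figures below are illustrative, not the actual Toronto or Philadelphia numbers):

```python
from statistics import NormalDist

def cut_z(pct_proficient):
    """z-position of the proficiency cut within a group's distribution,
    given the percent of the group scoring at or above the cut."""
    # If p% clear the cut, the cut sits at the (100 - p)th percentile
    # of that group's (assumed normal) distribution.
    return NormalDist().inv_cdf(1 - pct_proficient / 100)

def gap_sd(subgroup_pct, white_pct):
    """Subgroup-white mean gap in SD units; negative means the
    subgroup performed worse than whites."""
    # group mean = cut - cut_z, so the difference in means is
    # cut_z(white) - cut_z(subgroup); the cut itself cancels out.
    return cut_z(white_pct) - cut_z(subgroup_pct)

# Illustrative proficiency rates only:
print(round(gap_sd(45.0, 75.0), 2))  # subgroup well behind whites
print(round(gap_sd(80.0, 75.0), 2))  # subgroup slightly ahead of whites
```

The appeal of the z-score approach is exactly what the footnote says: a raw percentage-point difference depends on where the cut score happens to fall, while the z-score difference estimates the gap between the underlying distributions.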

November 2, 2009

Canada: not the educational mecca we've been led to believe

Canadian edu-pundits have been leading us to believe that Canada's lefty social policy programs have nearly eradicated both income and racial inequality and have led to an educational mecca in which achievement gaps are no more.

The data indicates otherwise.

The Toronto District School Board commissioned a nice little study breaking down student achievement by race, parental education, and income.  Before the study you would have been hard pressed to find student achievement data broken down this way.  You'd think they were trying to hide something.

Guess what?  They were.  They were hiding giant achievement gaps.

Here's a nice little table showing student performance for third and sixth grades broken down by race.

Holy Moly!  Look at them gaps.  I'd like to say something snarky like "What is this Mississippi?" but that would be an insult to Mississippi.  I know Canada has a shameful history of slavery they don't like to advertise, but I didn't realize it was this bad.

Notice how (East) Asians perform above whites, who perform above Hispanics (Latin American), who perform above blacks.  What a coincidence.  That's how it plays out in the US too. Who would have thunk?  Also, notice how the gaps grow as the students go from third to sixth grade.  Another coincidence?

Now let's look at the break down by income.

First of all, I find this entire table shocking.  I thought Canada was some sort of Marxist paradise, but look at that staggering income inequality.  And would you believe that the kids of the people who have higher incomes (before they get redistributed away) have higher student achievement as well?  Makes me want to rethink that whole correlation vs. causation thing I always rail against.

Let's look at another shocking table. Parental education vs. student achievement.

Would you believe that kids whose parents went to college outperform the kids whose parents only completed elementary school?  Shocking.  I thought "free" health care and school lunches solved this problem in Canada.

I simply can't wait until Stephen Downes tries to spin this data away.

The man can't keep them down

While reviewing the 2008-2009 Pennsylvania state assessments used for NCLB, I noticed something that shouldn't surprise regular readers of this blog.

Asians utterly dominate at the top of the performance distribution in reading and math.

There are 43 data points for racial subgroup performance for the 11th grade with ninety percent of students scoring proficient or above. This represents roughly the top 2% for school-level breakdowns.

28 were Asian (65.1%)
11 were white (25.6%)
2 were Hispanic (4.7%)
2 were black (4.7%)

And bear in mind that in many schools there are not enough Asian students to trigger NCLB's reporting requirements. Same for Hispanics.

Let's remove the subgroups that come from selective schools, such as magnet schools and other public schools that have admission requirements. This should leave us with only general admissions schools.

We are left with 28 data points.

24 were Asian (85.7%)
3 were white (10.7%)
1 was Hispanic (3.6%)
0 were black (0%)
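The percentages in the two tallies above are straightforward arithmetic; a short sketch reproduces them from the raw counts (counts taken from the post):

```python
def shares(counts):
    """Each subgroup's share of the total data points, as a percentage to one decimal."""
    total = sum(counts.values())
    return {k: round(100 * v / total, 1) for k, v in counts.items()}

# 43 data points at the 90%-proficient level, all schools
top2 = {"Asian": 28, "white": 11, "Hispanic": 2, "black": 2}
# 28 data points after removing selective (magnet/admission-requirement) schools
general = {"Asian": 24, "white": 3, "Hispanic": 1, "black": 0}

print(shares(top2))     # Asian 65.1, white 25.6, Hispanic 4.7, black 4.7
print(shares(general))  # Asian 85.7, white 10.7, Hispanic 3.6, black 0.0
```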

I don't understand why the man isn't able to keep the Asians down, like he continues to do with blacks and Hispanics. Even more disturbingly, he's keeping whites down as well. But, I thought the man was white. What gives?

If we drop down to the 80% level we have 138 data points. This represents about the top 6% of performance. Here's what we have.

64 were white (46.4%)
59 were Asian (42.8%)
4 were Hispanic (2.9%)
11 were black (8.0%)

By my count there were at least 25 schools in which a white subgroup was reported, but no Asian subgroup. In most, if not all, of those schools Asians would have at least equaled the performance of whites. Hispanics suffer from the same effect due to their low numbers, as do blacks to a lesser extent. Taking all this into account, it's easy to see that Asians once again dominate.

This little exercise is hardly scientific, but it does show how ridiculous the whole racial discrimination as an excuse for poor black and Hispanic subgroup performance really is.

School of the Future Crashes and Burns

As I predicted three years ago, Philadelphia's ultra-expensive, ultra-high-tech, and ultra-ironically-named School of the Future (SOF) has crashed and burned.  Educators and edu-journalists believed otherwise.  They thought that SOF would revolutionize education by teaching 21st century skills.

What could possibly go wrong?  There was lots of technology.  Great facilities.  Community involvement.  Lots of money being thrown around.  There would be discovery learning and lots of inquiry.  In short, there was a naïve over-reliance on all the accoutrements of education that are irrelevant to student outcomes.  In fact, some of them are downright toxic.

But things have gone horribly wrong.  Take a look at last Spring's PSSA scores (Pennsylvania's easy state assessment) for 11th grade students at the School of the Future.

Percent scoring proficient or above in Math (state average)

All students: 7.5% (55.6%)
Black Students: 6.8% (28.3%)
Poor Students: 7.8% (35.3%)
Special Ed Students: 0% (14.6%)

Percent scoring proficient or above in Reading (state average)

All students: 23.4% (65.2%)
Black Students: 24.5% (38.5%)
Poor Students: 23.5% (44.4%)
Special Ed Students: 0% (20.2%)

What is even more horrifying is the percentage of students performing at the below basic level.  Bear in mind that getting to Basic level in Pennsylvania requires performing only slightly better than chance.


Percent scoring below basic in Math (state average)

All students: 74.1% (24.9%)
Black Students: 73.8% (50.2%)
Poor Students: 74.5% (42.1%)
Special Ed Students: 100% (69.5%)


Percent scoring below basic in Reading (state average)

All students: 49.5% (18.8%)
Black Students: 48.0% (39.6%)
Poor Students: 58.8% (34.3%)
Special Ed Students: 91.7% (61.5%)

95% of the students at the School of the Future are Black and 47.2% get free or reduced-price lunches, a proxy for poverty.  The school is a general admission school that was paid for, staffed, and operated by the Philadelphia public school system.  The project organizers aimed to create a model that could be replicated easily in other districts.

So what went wrong, besides the obvious?

Pretty much everything according to this June article in eSchool News.

1.  The curriculum planning committee was staffed by naive fools.

"We naively thought, I guess, that by providing a beautiful building and great resources, these things would automatically yield change. They didn't," said Jan Biros, associate vice president for instructional technology support and campus outreach at Drexel University and a former member of the SOF Curriculum Planning Committee.

2.  The school got the equivalent of Microsoft Bob instead of Windows 7.

Microsoft made it clear at the SOF's inception that it would not be overseeing the school's operation; instead, it would lend its initial expertise, provide basic professional development, and then leave the success of the school up to its leaders.

Although the technology itself was not supposed to trump basic classroom practices, Microsoft and the school's planners had decided not to allow the use of textbooks or printed materials; instead, all resources were located online through a portal designed by Microsoft.

Yet educators frequently encountered problems accessing the internet, because the school's wireless connection often would not work.

"This vital part of the school's technology was never stable and robust enough to make it dependable," said Biros. "There was no safety net, and it seemed like a great leap of faith--faith that these teachers, amidst so many new circumstances, would be able to develop curriculum almost on the fly and store and distribute it electronically."

3.  The Philadelphia public school system doesn't know how to run a modern IT department.

The district's IT staff had responsibility for the network, but according to Biros, there was not an IT employee on site, and when problems occurred they were not fixed promptly. There also was no dedicated technical support.

"I don't think the district was ready to handle the development of Microsoft's technology and portal. The district is also Mac-based and not PC-based, which caused a lot of technical issues ...," said Patrick McGuinn, assistant professor of political science at Drew University.
4.  Just handing out laptops turns out not to be an educational panacea.

Another problem was that the students--most of whom came from poorer families and neighborhoods--could not use or maintain their laptops properly. Students were either afraid to take their laptops home for fear of theft, or they didn't know how to access all the programs on the machines.

5.  The realities of project-based education bit them in the ass.

"The lack of standardized grades made it hard to relate student progress to parents.... There is no clear definition of what project-based learning exactly is and how that can be step-by-step implemented in the classroom. Student remediation also didn't fit with the project-based collaboration model."

At one point during the discussion, an audience member asked: "All of your resources are online, and educators have to access [them] through this portal. However, your educators don't know how to work the technology. So, exactly what did the teachers teach in class? What were the students learning?"

"Well, honestly, I'm not exactly sure," replied Biros.

In the absence of real leadership, and because no community partnerships had formed, the SOF started to adopt more traditional district assessments and classroom practices. [Ed. -- isn't this always how it turns out?]

6.  Students didn't like going to the school.

"Truancy picked up, and we were not prepared to handle it."

Perhaps an increase in truancy wouldn't have been such a large problem, except many of the educators hired were not well-versed in dealing with at-risk students who were required to participate in project-based learning.

7.  The teachers' union proved to be a menace.

Although Microsoft and the SOF based hiring decisions on Microsoft's Education Competency Wheel, which, according to the company, is "a set of guideposts for achieving educational excellence that centers on identifying and nurturing the right talents in a district's employees, partners, and learners," the SOF had to go through Philadelphia's teachers' union to hire its educators.

The process, said Biros, "was intended to facilitate hiring the best faculty possible with objective consideration; [but] the reality of the union constraints within the district effectively eliminated that outcome. Because of the district's human resources policies and union regulations, most of the applications received were from current district teachers looking for new assignments. We were not recruiting from a pool of any and all teachers interested in applying to SOF."

In short, pretty much everything went wrong, and everyone blamed everyone but themselves for the problems.

But the real problem is that you can bet that no one will learn from the failure of SOF.  You can bet that all those education technology bloggers won't address the problems of SOF that reveal the gaping holes in their vision of the wonders of education technology.  You can bet that all the poverty racers won't confront the failure of a mass infusion of money on student outcomes.  Will the union apologists address the real problems caused at SOF by their beloved unions?  And what about the progressive educators whose project-based curricula never live up to expectations in urban schools?

SOF is a microcosm of every dopey education reform that has come down the pike.  Oversell the expectations; ignore the predictable outcomes.

Here's how Philadelphia's Mayor Street sold SOF:

"You won't be able to say, 'I didn't have the computers. I didn't have the technology. I didn't have the teachers. I didn't have mentors,' because the young people who go to this school will be in the premier educational environment in the entire country, maybe even in the entire world," Street said. "So the bar for you is raised."
Now you know the outcome.  Half the students are performing at the below basic level in reading; three-quarters are doing the same in math.

November 1, 2009

Some Perspective

Two posts ago we looked at the writing of some students (mostly seniors) at Philadelphia's Science Leadership Academy magnet school.  I was none too impressed, to put it mildly.  And no one has stepped up so far to defend the actual writing ability of the students.  Time will tell.

Now pretend you are setting proficiency standards.  Where would these students fall on an advanced, proficient, basic, and below basic scale for this assignment (the subject matter of which admittedly was overly difficult, although the task of writing a position paper was not)?

Have you determined your standards yet and the approximate percentage of students falling within each category?  Don't read on until you have.

Now where would you predict the commonwealth of Pennsylvania would set its standards?  What percentage of students would be advanced?  What percentage would be proficient?  Basic?  Below basic?

Last April, 11th graders took Pennsylvania's writing exam.  Here is how the SLA students fared:

Advanced:  28.4%
Proficient: 71.6%
Basic: 0%
Below Basic: 0%

The 2009 PSSA Writing School Level Proficiency Results, p. 266.

Your eyesight isn't failing you.  100% of SLA 11th graders were proficient or above in Pennsylvania's writing assessment.

Bear in mind that Pennsylvania falls somewhere in the middle of the states in terms of where the PSSA's cut scores sit relative to the NAEP's in Reading and Math.  See Figures 2 and 3.  Sadly, 11th graders don't take the NAEP writing exam.  Only 8th graders do.  36% of 8th graders were proficient on the NAEP writing assessment.

Sherman Dorn is right to point out that the NAEP cut scores have been arbitrarily set.  However, whenever I look at actual NAEP test questions, or at how students perform on state tests whose standards are low relative to the NAEP's, it's hard to conclude that arbitrary necessarily means too high.  If anything, it's just the opposite.

Update:  Some more perspective.  Here's the breakdown of SLA's 11th graders based on the writing test statistics: 36.2% are white, 49.1% are black, 15.3% are Asian or Hispanic, and 30.1% are economically disadvantaged.  Also, students need to apply and be accepted to SLA.  Here are the admission criteria:

Criteria: Admission to SLA is based on a combination of a student interview at the school with a presentation of completed work, strong PSSA scores, As and Bs with the possible exception of one C, teacher or counselor recommendation and good attendance and punctuality. Interested families must contact the school to set up an appointment for an interview. SLA will not initiate the interview process with families.
Just in case you thought SLA was teeming with abjectly impoverished inner-city kids with abusive parents and poor language skills who really don't want to be in school anyway.

October 30, 2009

How to teach a student to write a point that is supported by evidence

The following is one way to teach a student how to write one form of a simple paragraph in which the student agrees with one of a pair of arguments presented to him in a fact pattern, based on a comparison of the arguments with a source of evidence.  The lesson comes from SRA's Reasoning and Writing, Level D (Lesson 82, exercise 1).  The lesson is suitable for a student who can copy words at a rate of 15 words per minute and possesses basic paragraph writing skills as determined by a placement test.

The student reads the following passage:
Fran Dent wanted a raise.  She told her boss why she deserved a raise.  She told the boss that she worked hard, that she worked fast and accurately, and that she was always on time.

Her boss said that he didn't think she should get more money because she wasn't always on time.  He said, "You're late most of the time."

She said, "I'm never late to work."

Teacher: The place where Fran worked had a time clock that showed the time everybody came in each morning. Fran and her boss decided to get the records to show how often Fran was late.

(Below the box is the student's prompt.)


Fran indicated that she was never late.  The boss said that she was late most of the time.  You'll write a paragraph that tells whose claim agrees with the time clock record.  The equal box prompt shows what your paragraph will say.  You'll start by saying what the person who was right indicated.  Then you'll tell that the time clock record supports that person's claim.  Then you'll give enough facts to make it clear that one of the persons is right and the other is wrong.

The time clock record shows some of the employees.  The first column shows their names.  The next column shows the time they are supposed to be at work.  They're all supposed to be at work at 8 a.m.  If they come later than 8 a.m., they're late.  The next column shows the number of days they were absent during the whole year.  The last column shows the latest time they arrived during the whole year.

Some of the information in the table is relevant to the disagreement between Fran and her boss.   Look over the table carefully.

Write your paragraph.  Tell when she's supposed to arrive.  Then give any fact that's relevant.

Here's what the student (a nine-year-old fourth grader) actually wrote:

      Fran indicated that she was never late for work.  The time clock record supports this claim.  The time clock record indicates that Fran's latest time for work was 7:56 a.m., four minutes before work.

The student would receive feedback to correct the vague wording of the last sentence.  Then a good example paragraph would be read to the student:

     Fran indicated that she was never late for work. The time clock record supports this claim. The record indicates that she was supposed to get to work by 8 a.m. and she was never later than 7:56 a.m.

In subsequent lessons the prompts would be faded until the student could construct a suitable paragraph for a similar type of problem.

Position Papers from SLA

A position paper is a classic, basic form of argument that every high school student should know how to write well--even the 21st century variety.  Here's how a tech savvy 21st Century student might learn how to write a position paper.

It's not rocket science, but it does take lots of practice to do well.  Sadly, many high school students never learn how to write basic position papers well, if at all.

Chris Lehmann, principal of the Science Leadership Academy (SLA) of Philadelphia, has asked his students, mostly seniors, in his Modern Educational Theory class to draft a position paper.  Chris has posted the assignment on his blog, Practical Theory, and wants you to take a look at the students' position papers and to comment.

We've visited SLA before.  SLA is a magnet high school which proudly sets itself out as an "inquiry-driven, project-based 21st Century school with a 1:1 laptop program." Last time we looked at one example of a student's writing assignment, probably one of the better examples since it was picked for the Family Handbook.  This time we have the writings of an entire class of (mostly) seniors.

Let me state at the outset, I'm sure Chris and SLA mean well and care about their students.

Here's the assignment.

We, at this point, looked at several different views of education, from Deborah Meier's vision of democratic education, to Robert Pirsig's "Church of Reason," to Diane Ravitch and E. D. Hirsch's views of core knowledge, to Nel Nodding's ethic of care, to President Obama's speech on the first day of school.

Now, it is your time to take your stand.

You are to write a two page position paper creating your vision of what school should be.

Your paper should consider the following points:

  • Clearly define your vision of school:
    • What is its purpose?
    • Why is it good for the individual?
    • Why is it good for socie[t]y?
    • What does your vision of school value? Prioritize?
  • Given this vision of school -- what differences would you see in the structure of school when compared to a "traditional" school?
Readers of this blog should be sufficiently familiar with the differing views on education to be able to evaluate the students' work on the merits and to determine whether well-supported positions have been taken on their views of what school should be.

Here is Chris' take on the students' position papers:

I'm really thrilled with much of the thoughtfulness that the kids display in the essays. It is, obviously, clear that the kids have been at SLA for years, but I don't think that's their only vision of what school can be -- which is important to me. The kids have their own thoughts, and I'm really interested to see how these visions continue to evolve.

I'm not sure I understand the purpose of this assignment.  It comes at the beginning of the course, before the students have had a chance to learn much about modern educational theory.  Is the important thing to actually learn and understand modern educational theory, or to learn how to write a position paper?  I'm going to assume the object was to accomplish both.

According to my view of education and learning, I would not expect most students to have acquired a deep understanding of modern education theory after just a few weeks of exposure.  I would expect only a superficial understanding that is closely tied to the examples (i.e., the specific pundits' opinions) the students were exposed to.  And, that is exactly what we see in the students' work.  This isn't meant to be a criticism of the students' work.

I made this same observation in the last SLA assignment that I reviewed.  Then, my criticism was directed at SLA because SLA was overselling (and continues to oversell) these projects that supposedly "can only be completed by showing both the skills and knowledge that are deemed to be critical to master the subject and demonstrate that deep level of understanding." (2009 Family Handbook, p. 4)  And the primary assessment of student knowledge continues to be these projects:

At SLA, there may be multiple assessments – including quizzes and tests – along the way, but the primary assessment of student learning is through their projects. Id.

Last time I got pushback from Chris and Tom Hoffman.  Both of their arguments basically attempted to redefine deep understanding downward to mean the ability to express an opinion.  No doubt they'll try the same gambit again.  Chris thinks the papers were thoughtful and that the kids had their own thoughts.  That's not exactly a challenging standard.  But we don't need to go there this time, because I would not expect most students to have a deep understanding of the subject matter yet.  Time will tell if this situation improves by June.

So, let's turn to the position paper part of the assignment.  A well written position paper at the high school level should follow the traditional format of introduction, body, and conclusion.  At a minimum, the body should contain an explanation of why the position has been taken and should contain supporting evidence for the position.  A good position paper will have a thesis and a concluding summary of the main points.  The body would include the counter arguments and their rebuttals. 

Chris' prompt seems at odds with the standard definition of a position paper.  Chris apparently is just looking for the students' opinions (or vision) of what school should be, provided that those opinions state a purpose, the benefits to the student and to society, the values and priorities, and the differences with respect to the structure of traditional schools.  And that's largely what Chris got -- mostly opinion.  As far as support for the opinions, most students provided more of their own opinion and occasionally a tie-in to one of the pundits' opinions.  Most of the essays go off point, and some stray far off point.  All the essays could use a good editor, at least one rewrite, and should be tightened up considerably.

Chris calls these essays a first draft.  A first draft of the students' opinions maybe, but more like a zeroth draft of a properly written academic position paper.  I call Chris a brave man, because publishing these very raw essays on the internet and then calling attention to them on his blog takes quite a bit of professional bravery, since the essays are a reflection on SLA's teaching ability.

I am assuming that no teacher has reviewed and made editorial comments on these essays prior to their being published.  The essays are full of language usage problems, grammatical mistakes, informalities, and colloquialisms.  Does SLA really want the world to see the essays in this form?

Apparently so.

I must be missing something.  Most of the students have formed the opinion that school should be just like SLA; but their very own essays demonstrate that school should not be just like SLA if basic writing skills are one of the goals.

This is not an indictment of the kids or their abilities.  Clearly, these kids want to learn.  They have stuck it out this long, overcoming whatever adversity was in their way.  No, it's an indictment of their schooling, only a part of which SLA is responsible for.  If these kids are college bound, remediation is in their future.

But what I really don't understand is why, based on the demonstrated abilities of these students, they are wasting their time learning Modern Educational Theory when they should be learning basic writing and language skills.  They're already getting a painful lesson in the pitfalls of some of the elements of Modern Educational Theory the hard way (ironically enough, the ones they largely favor); they just won't realize it until next year.

October 28, 2009

Murray on Curriculum

[I]t’s time for me to get in touch with my inner optimist. We can’t make our kids much smarter than they are naturally, but we can do a hugely better job of teaching them stuff. If you get away from the worst schools in the big cities, I think the central problem with the public schools is not poor teachers, but the curriculum teachers are given to teach, especially in elementary and middle school.

- Charles Murray

Murray goes on to tout the Core Knowledge sequence, as he did in his last book, Real Education.

I'd say the second biggest obstacle is teacher preparation and training.  Teachers largely do not have the necessary skills to effectively teach the most effective curricula.  They have difficulty teaching from a scripted curriculum in the absence of a considerable amount of training.  Needless to say, they aren't getting this training in Ed school.

Some Clarity on NCLB

Below is a lengthy quote from Judge Sutton in the recently decided School District of the City of Pontiac, et al. v. Secretary of the United States Dep't of Educ (back-story). The opinions in the case, which deadlocked, are long and mostly concern boring legal issues of statutory construction and other procedural issues which no one, apart from lawyers, cares much about.  The excerpted quote, however, is a clearly written analysis of NCLB and the basic bargain it made with the states: federal funds in exchange for achieving progress, along with substantial flexibility in achieving and defining that progress.  Read the whole thing; it is worth your time. (The quote starts at about page 48.  I've redacted some non-relevant sections and bolded the important bits.):

[T]he No Child Left Behind Act clearly requires the States (and school districts) to comply with its requirements, whether doing so requires the expenditure of state and local funds or not. A contrary interpretation is implausible and fails to account for, and effectively eviscerates, numerous components of the Act.

The basic bargain underlying the Act works like this. On the federal side, Congress offers to allocate substantial funds to the States on an annual basis—nearly $14 billion in 2008 for Title I, Part A, a 60% increase in relevant federal funding since 2001—exercising relatively little oversight over how the funds are spent. On the State side, the States agree to test all of their students on a variety of subjects and to hold themselves and their schools responsible for making adequate yearly progress in the test scores of all students. In broad brush strokes, the Act thus allocates substantial federal funds to the States and school districts and gives them substantial flexibility in deciding how and where to spend the money on various educational “inputs,” but in return the schools must achieve progress in meeting certain educational “outputs” as measured by the Act’s testing benchmarks. As the Supreme Court recently explained:

NCLB marked a dramatic shift in federal educational policy. It reflects Congress’ judgment that the best way to raise the level of education nationwide is by granting state and local officials flexibility to develop and implement educational programs that address local needs, while holding them accountable for the results. NCLB implements this approach by requiring States receiving federal funds to define performance standards and to make regular assessments of progress toward the attainment of those standards. 20 U.S.C. § 6311(b)(2). NCLB conditions the continued receipt of funds on demonstrations of  “adequate yearly progress.” Ibid.

Horne v. Flores, __ U. S. __, 129 S. Ct. 2579, 2601 (2009). The school districts’ position—that they can accept the federal dollars, spend them largely as they wish, yet exempt themselves from the Act’s requirements if compliance would require any local money—undoes this bargain by nullifying some provisions of the Act and undermining several others.
Accountability. Accountability is the centerpiece of the Act, and a plausible interpretation of the legislation cannot ignore that reality. Instead of focusing on how much money school districts spend on each child or “dictating funding levels,” the Act “focuses on the demonstrated progress of students through accountability reforms.” Id. at 2603. The Act begins with a “Statement of Purpose” that drives home Congress’s interest in establishing accountable public schools: “ensuring . . . high-quality academic assessments [and] accountability systems”; “holding schools, local education agencies, and States accountable for improving the academic achievement of all students”; “improving and strengthening accountability”; and “providing . . . greater responsibility for student performance.” 20 U.S.C. §§ 6301(1), (4), (6), (7).

Flexibility. The school districts’ interpretation is inconsistent not only with the Act’s accountability requirements but also with the flexibility the Act gives States and school districts in return for increased responsibility for student achievement. As the Act’s Statement of Purpose makes clear, that is the central tradeoff of the Act: “providing greater decisionmaking authority and flexibility to schools and teachers in exchange for greater responsibility for student performance.” id. § 6301(7) (emphasis added); see also Horne, 129 S. Ct. at 2601 (the Act “reflects Congress’ judgment that the best way to raise the level of education nationwide is by granting state and local officials flexibility to develop and implement educational programs that address local needs, while holding them accountable for the results”). Unlike most spending programs, this one comes with few strings telling the States how they should comply with its conditions. Under the Act, States develop their own curricula and standards, 20 U.S.C. § 6311(b)(1), their own tests to assess whether students are meeting those standards, id. § 6311(b)(3), and their own definitions of progress under those standards, id. § 6311(b)(2)(B), so long as the progress culminates in near-universal proficiency by 2014, id. § 6311(b)(2)(F).

This flexibility extends to spending as well. As the school districts rightly acknowledge, the Act “provide[s] school districts with unprecedented new flexibility in their allocation of Title I funds.” Final Reply Br. of Pontiac Sch. Dist. at 3 (internal quotation marks omitted). Some federal funds, to be sure, must be spent in certain ways. See, e.g., 20 U.S.C. § 6303 (reserving some Title I, Part A funds for school improvement); id. § 6317(c)(1) (same); id. § 6318(a)(3)(A) (reserving some funds for parental involvement programs); id. § 6319(1) (reserving some funds for professional development). And the Act strictly confines the use of Title I funds to geographic areas with heavy concentrations of low-income students. See id. § 6313(a). But within these areas and with respect to these priority students, the Act gives States and school districts substantial flexibility in choosing how to spend the money. For instance: Section 6314 gives school districts wide discretion to consolidate funds from various sources and to focus them on certain schools in whatever ways will improve student performance there; § 6313(b) gives school districts discretion to transfer funds between schools within certain guidelines; and § 7305b allows States and school districts to transfer up to 50% of the funds allotted to other education programs to supplement their funds under Title I, Part A.

The substantial flexibility the Act gives recipients over federal funds is surpassed by the near-complete flexibility they retain over their own funds. The only limitation is that participating States cannot reduce their own spending and offset it with federal funding but must use the Act’s federal dollars to supplement, not supplant, their own. 20 U.S.C. §§ 6321, 7901. Beyond that basic requirement—a prohibition on fiscal cheating, really—the States can use their dollars however they see fit, whether for teachers or for computers or for facilities or for whatever else they think will help their students the most.

The express and unprecedented flexibility the Act gives to the States in prioritizing the spending of federal dollars—especially in Title I, Part A—cannot coexist with an interpretation of the statute that allows school districts to exempt themselves from the accountability side of the bargain whenever their spending choices do not generate the requisite achievement. Were the school districts correct, a State could use this flexibility to focus its federal and local resources almost exclusively on improving, say, teacher quality—a legitimate goal no doubt, but one that would allow the State to sidestep the Act’s mandatory assessment requirements by contending that it lacked the funds to administer them or to make progress under them. Sch. Dist. of City of Pontiac v. Sec’y of United States Dep’t of Educ., 512 F.3d 252, 284 (6th Cir. 2008) (McKeague, J. dissenting). That is not what Congress had in mind. It gave the States a clear and consequential choice: between taking the bitter (accountability) with the sweet (unprecedented flexibility in spending federal and state dollars) or leaving the money on the table.

Costs of Compliance. Not surprisingly, in view of the expansive flexibility that the Act gives States in spending federal and local funds, the Act says nothing about the bill of particulars at the heart of the school districts’ complaint: the costs of complying with the Act’s requirements. How could it be otherwise? The Act’s spending flexibility necessarily makes it impossible to calculate or even define the costs of complying with the Act’s requirements.

The primary formula for allocating Title I, Part A grant money does not say a word about costs of compliance. See 20 U.S.C. §§ 6313(c), 6333(a), 6334(a), 6335(a)–(c), 6337. While the Act asks States to submit plans to the Secretary, id. § 6311, and asks school districts to submit plans to the States, id. § 6312, it does not require either entity to estimate the cost of compliance. Nor, in fulfilling their various reporting responsibilities under the Act, must the States or school districts estimate the costs of compliance. See, e.g., id. §§ 6311(h), 6316(a)(1)(C). If, as the Supreme Court recently explained, the Act “expressly refrains from dictating funding levels,” Horne, 129 S. Ct. at 2603, why would Congress exempt failing school districts from the accountability requirements based on inadequate “funding levels”? The school districts have no answer.

But even if Congress wished to make costs of compliance a legitimate excuse for, say, inadequate yearly progress, how would it do so? Once Congress decided to measure accountability by educational outputs (gauged by test scores), as opposed to educational inputs (gauged by dollars), it made objective measurements of compliance costs virtually impossible. Any effort to measure these costs surely would vary from school to school, if not from student to student, and they surely would vary from year to year. The phrase “costs of compliance” has no discernible meaning in this context, as the Act leaves it to the States, no matter how little or how much funding Congress provides, to make discretionary cost choices about how to make meaningful achievement-related progress.

Take a cost estimate for adding an extra hour to the school day, for lengthening the school year or for hiring more math or reading teachers—all plausible ways to improve a school’s achievement scores. Each innovation has an estimable cost, to be sure. But that does not establish that the estimated expenditure would lead to the requisite progress. And if it did not, then what? Perhaps extending the school day by one more hour, extending the school year by one more week or hiring one more math or reading teacher would do the trick. But maybe not. What works for one school district might not work for another. What, indeed, works for one classroom might not work for the classroom next door, given the correlation between great teachers and great teaching—and the occasional operation of that principle in reverse. Even more discrete costs like developing and administering tests cannot be accounted for in advance given the considerable flexibility States have under the Act in implementing those requirements. Within certain general limits, a State may develop whatever curricular standards and tests it wants. 20 U.S.C. § 6311(b). The State may use pre-existing standards that meet the Act’s requirements, id. § 6311(b)(1)(F), or it may create new ones.

In their complaint, to use one example, the school districts say that Brandon Town School District “estimates that . . . it needed to spend $390,000 more than it received in NCLB Title I funding to ensure that the school makes [adequate yearly progress].” Compl. ¶ 65. The school district may be right, and we have no license to say that it is not at this Rule 12(b)(6) stage of the case. The issue, however, is not whether the school districts can fairly say that compliance with “adequate yearly progress” requires more federal dollars than the Secretary has allocated to them. It is whether a State could tenably think that the Act excuses non-compliance whenever a school district maintains that it has insufficient resources to make the required progress. Surely every school district could do more with more money. And if that is the case, every failing school district could do more with more federal money—and maybe enough to make adequate yearly progress. It is hard to imagine when—or, for that matter, why—a failing school would ever concede that it was getting sufficient federal funds to make such progress.

“Reflecting a growing consensus in education research that increased funding alone does not improve student achievement,” the Act moves from a dollars-and-cents approach to education policy to a results-based approach that allows local schools to use substantial additional federal dollars as they see fit in tackling local educational challenges in return for meeting improvement benchmarks. Horne, 129 S. Ct. at 2603 & n.17. The Act, in short, rejects a money-over-all approach to education policy, making it implausible that the heartland accountability measures of the law could be excused whenever schools, exercising their flexibility over how to spend federal and local dollars, decided they cost too much.

* * * * *

Depending on whom you ask, the No Child Left Behind Act might be described in many ways: bold, ground-breaking, noble, naïve, oppressive, all of the above and more. But one thing it is not is ambiguous, at least when it comes to the central tradeoff presented to the States: accepting flexibility to spend significant federal funds in return for (largely) unforgiving responsibility to make progress in using them. The theme appears in one way or another in virtually every one of the Statements of Purpose of the Act, and it comes across loud and clear in the remaining 674 pages of legislation.

That said, I have considerable sympathy for the school districts, many of which may well be unable to satisfy the Act’s requirements in the absence of more funding and thus may face the risk of receiving still less funding in the future. Yet two Presidents of different parties have embraced the objectives of the Act and committed themselves to making it work. So have a remarkably diverse group of legislators. If adjustments should be made, there is good reason to think they will be. But, for now, it is hard to say that the judiciary will advance matters by taking the teeth out of the hallmark features of the Act. It is the political branches, not the judiciary, that must make any changes, because the Act’s requirements are clear, making them enforceable upon participating States and their school districts.

October 25, 2009

Today's Quote

Why the belief that SES is causal is so deep and wide is perplexing and astounding. The only explanation I can come up with is that it lets publishers, professors and other "authorities", who ARE causally responsible, off the hook.
- Dick Schutz

A Good Example of Why Education Professors Shouldn't Blog

Education Optimist, Sara Goldrick-Rab, must not get embarrassed very easily, judging by this post criticizing a mostly dopey Kristof column on education and poverty:

Social science researchers across the nation are scratching their heads. Where in the world did Kristof get this one? For decades, solid analyses have demonstrated that while aspects of schooling can be important in improving student outcomes and alleviating the effects of poverty, the effects of factors schools cannot and do not control are much greater (for a place to start, read Doug Downey's work). Kristof emphasizes teachers and improving teacher quality by taking on the teachers' unions because he reads the data to mean that "research has underscored that what matters most in education - more than class size or spending or anything - is access to good teachers." Simply put, wrong. Access to good teachers is the most important factor affecting student achievement that is under schools' control (or as many put it, the most important school-level factor). What matters most in educational outcomes is the poverty felt by students' families. And to my knowledge, no study has ever rigorously compared the effectiveness of interventions based on cash transfers, housing subsidies, and teacher quality improvement-- what's needed to reach the kind of conclusion with which Kristof drives his argument. At the same time, a simple glance at the relative effects of programs like Moving to Opportunity, New Hope, etc which target poverty itself rather than how adults interact with children from poverty (the aim of improving teacher quality), should convince anyone than his target is misplaced.
(emphasis added)

That is one densely packed paragraph of bad reasoning.  One more poorly reasoned sentence and it might have collapsed upon itself into an educational black hole, if you will.  Fortunately, the following hasty call for censorship, another Goldrick-Rab staple, was placed in the next paragraph.
Experts who think daily (Ed -- how about the ones that only think fortnightly?) about how to end poverty could, and undoubtedly will, inform the next steps taken by Democrats. Dems should listen to them, and not to Kristof.
First of all, there are few if any solid analyses of the effects of poverty on educational outcomes.  I don't think there is any scientifically sound research upon which anyone could draw the causal conclusion that alleviating poverty has educationally significant effects on student outcomes.  But why don't we take a quick glance at the two pseudo-research studies Goldrick-Rab cites:
The Moving to Opportunity (MTO) for Fair Housing Demonstration Program interim report found:
MTO had no detectable effects on the math and reading achievement of children.

OK.  Let's move on to the New Hope Project:

One of the most striking findings from the earlier evaluation reports was New Hope’s positive effects on children’s academic achievement at the two-year mark, in the form of increased teacher-rated academic skills, and at the five-year mark, in the form of higher standardized reading test scores (these tests were not administered at Year 2) and higher parent-reported grades in reading. However, these effects did not persist to Year 8, at least for the full child sample, although there were continued small effects on reading test performance for boys. No effects on math test performance were found. Overall, there was a tendency for impacts to be greater for boys than for girls. (emphasis added)
(See p. 29 and Table 5)

Do you remember the scene in the movie My Cousin Vinny where Vinny is looking through all the bad pictures his girlfriend has taken to help him win the case he's defending on behalf of his cousin and his cousin's friend? 

Okay, you're helping. We'll use your pictures. Ah! These *are* gonna be - you know, I'm sorry, these are going to be a help. I should have looked at these pictures before. I like this, uh, this is our first hotel room, right? That'll intimidate Trotter. Here's one of me from behind. And I didn't think I could feel worse than I did a couple of seconds ago. Thank you. Ah, here's a good one of the tire marks. Could we get any farther away? Where'd you shoot this, from up in a tree? What's this over here? It's dog shit. Dog shit! That's great! Dog shit, what a clue! Why didn't I think of that? Here's one of me reading. Terrific. I should've asked you a long time ago for these pictures. Holy shit, you got it, honey! You did it! The case cracker, me in the shower! Ha ha! I love this! That's it!

The MTO and New Hope studies are the case crackers of poverty interventions on education outcomes.

Both studies show what most studies typically show for poverty interventions on education outcomes:  small or undetectable effects that tend not to persist past adolescence.

And, here we have Goldrick-Rab citing them as conclusive proof of just the opposite.  Simply amazing. The woman has no shame.  Where's Bracey's rotten apple award when you need it most?

Good Night, Sweet Hack

Education Gadfly, Gerald Bracey, died this week.  He had a sharp analytic ability.  It is sad that he only applied it to refute some of the shoddy evidence, like international testing comparisons, against the things he believed in.  For all the other shoddy evidence supporting the things he believed in, like poverty/school outcome research and whole language research, he consistently failed to use his sharp analytic abilities.  This curious dichotomy and his penchant for cherry-picking evidence permitted him to be an apologist for America's public school system.  It takes quite a bit of cognitive dissonance to be one of those, and Jerry was a full-throated one.  He deserved but never received one of his own bad apple awards.

October 22, 2009

Actually, That's Not What the "Research" Shows Either

In the third post on her new blog, The Educated Reporter, Linda Perlstein makes an (all-too-common) error that reporters should not be making, especially reporters claiming to be educated.

Perlstein starts off well by criticizing the inane trope President Obama repeated about "teacher effectiveness" in a recent speech.

In his education speech to the Hispanic Chamber of Commerce in March, President Obama said, “From the moment students enter a school, the most important factor in their success is not the color of their skin or the income of their parents. It’s the person standing at the front of the classroom.”

To put it bluntly: “He’s wrong.”

Indeed.  First of all, the research on this issue isn't really research in the conventional sense that there are properly conducted controlled studies on point.  There aren't.  The "research" consists merely of correlational studies, which do not rise to the level of real scientific research.  At best, these studies might suggest profitable future avenues to pursue for conducting real scientific research.

Furthermore, these correlational studies are on teacher effectiveness, not "the person standing at the front of the classroom."  There are a lot of variables tied up in the term "teacher effectiveness": the teacher, the pedagogy, the curriculum, the classroom environment, and the like.  Only some of these variables are under the control of the person standing at the front of the classroom, that is, the teacher.  Moreover, the correlational studies are, in any event, incapable of teasing out which of these variables is responsible for the correlation with student outcomes.

Nonetheless, this trope gets trotted out all the time in the dopier quarters of the edusphere.  And, Perlstein is right to jump on it.

Perlstein, however, steps in it when she tries to state what the research actually shows:

Of the various factors inside school, teacher quality has had more effect on student scores than any other that has been measured. (emphasis in original)

To put it bluntly: “She’s wrong.”

First, to the extent that the studies are correlational in nature, they are incapable of showing that a variable, in this case teacher effectiveness, had "more effect" on anything, including student scores.  The studies only show that "teacher effectiveness" (however the study attempted to define the variable) is correlated with student scores by some small amount.  Correlation is not causation.
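To make the point concrete, here is a toy simulation (the numbers and the "school resources" variable are purely hypothetical, invented for illustration and not drawn from any study discussed here) in which a hidden third factor drives both measured teacher effectiveness and student scores. The two end up strongly correlated even though, by construction, neither one causes the other:

```python
import random

random.seed(1)

# Hypothetical confounder: an unmeasured "school resources" factor that
# independently boosts both measured teacher effectiveness and student scores.
n = 1000
resources = [random.gauss(0, 1) for _ in range(n)]
teacher_eff = [r + random.gauss(0, 0.5) for r in resources]
scores = [r + random.gauss(0, 0.5) for r in resources]

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Strong positive correlation, despite a complete absence of any causal
# link between teacher effectiveness and scores in this toy model.
print(corr(teacher_eff, scores))
```

A correlational study run on this data would report that teacher effectiveness "predicts" scores, yet it has no effect on them at all; only the hidden resources variable does.  That is exactly why such studies cannot tell you which variable had more effect on anything.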

Second, "teacher effectiveness" is not the most important factor inside school.  See here and here.  I don't know how this particular trope got started, but it is amazing how often it gets uncritically trotted out by education reporters and bloggers.  Look at Perlstein's conclusion:

But for now, just remember: When you read that teachers are the most important school factor, you can’t drop the “school” and pass it on.

Whether or not you drop the "school" caveat, you should not be passing it on, because it's not accurate.

Any educated reporter commenting on education should know this.

Welcome to the edusphere, Linda.

October 14, 2009


Your spouse asks you, "Will you wash the dishes tonight?"

You respond, "Sure."

Knowing you're a slacker, your spouse asks, "Promise?"

You respond, "Yes."

Is your affirmative response an example of insurance or its equivalent?  Why or why not?

You certainly need to know something to determine an answer and provide a correct explanation.  I don't care how finely honed your reasoning skills are, you aren't reasoning your way to an answer unless you understand that something. (Although I'm happy to entertain a counterexample, Stephen.) That something is content knowledge.

One thing you need is receptive language to understand all the words I used in the example.  You also need enough expressive language to articulate an answer.  That language makes up part of the content knowledge needed for a successful answer.  But let's take language out of the equation and assume that humans have evolved direct mind-reading abilities and are now able to directly access the thought processes of others.  We are now free of the burdens of language.  Language will therefore form no further part of this example. (Though, sadly, in the real world I must continue to use language to explain things to you; hopefully you won't be confused.)

Your knowledge of "insurance" (the concept, not the word) probably comes from your experiences with and observations of examples (and possibly non-examples) of insurance--auto insurance, life insurance, health insurance, home and property insurance, disability insurance, insurance in the game of blackjack, an "insurance" run in baseball, and the like.  Those are all imperfect examples of insurance, but you may have been able to tease out the defining features of insurance.  Then you could use those defining features to determine if the above example is an example of insurance.

Your thought process is basically pattern recognition.  Do you recognize the common pattern inherent in all the examples of insurance you've experienced or observed in your life?  Does this new form of insurance fit the pattern?

This is a difficult problem because the defining features of insurance, assuming you can even identify them, are rather nebulous and complicated themselves.  In fact, the defining features of insurance are all higher-order concepts, just like insurance is.  Let me give you a hint:  Insurance has four main defining features.  Now, you have four patterns to wrestle with.  Yikes.

The four defining features of insurance are concept knowledge.  Notice how I haven't used the word "definition" so far.  Knowing the definition of insurance, or of its four defining features, isn't going to help much.  So go ahead and close down that tab you're using to google the word insurance.  And for those of you whom I didn't catch in time, did the definition you located help much?  Probably not.  You're probably already googling one of the words in the definition that you also aren't quite sure of.  So stop right there.

I don't need google to tell you the definition of insurance.  But, mine is a sad story and I will make a brief digression in the hope that it serves as a cautionary tale for you.

The year was 1979.  Jimmy Carter was president (shudder).  Margaret Thatcher had just been elected prime minister.  Rocky II was in the theaters.  A little gadget called the Walkman had just entered the market.  I was in seventh grade and had just learned the definition of insurance.

And by learned I mean I was forced to memorize the definition.  My seventh grade teacher (whose name, ironically, I can't remember) made us memorize the definition of only one word.  That word was insurance.

Why insurance?  I have no idea.  We didn't do anything with the definition afterward.  Perhaps we were being punished for something we had done.  To this day I don't know.  But here's one thing that has stuck with me ever since:  the definition of the word insurance.

Insurance is a contract which guarantees against risk or loss.

It's seared into my brain much like Senator John Kerry's trip to Cambodia.
But, I still didn't understand what insurance was.  You can probably figure out why.

In fact, I can give you four reasons why I didn't understand what insurance was: 1. contract, 2. guarantee, 3. risk, and 4. loss.  To understand the concept of insurance I needed to understand the concept of contract, the concept of guarantee, the concept of risk, and the concept of loss.  These are all higher-order concepts.  Insurance itself is a higher-order concept -- a higher-order concept whose four defining features are all higher-order concepts.  Yikes.  Oh wait, I already said that.

And as you premature googlers probably found out, even looking up the definitions for the four defining features probably makes matters worse.

So, what's the problem? The problem is that knowing all those definitions does not constitute concept knowledge.  Which is not to say that these definitions aren't useful to read or learn, at least initially.  But concept knowledge requires something more or something else.  Think about it.

And we'll get to that something else in the next post.  So take off the disco albums and platform shoes because we're returning to 2009.

(And, Dick, I promise we're almost at the dinosaurs.)