Heavy Lifting - thoughts and web finds by an economist
Wednesday, August 16, 2006
The 2006 ACT results have been released and the news is supposedly good. While more students took the ACT than in the past, the average composite score increased 0.2 points over 2005. One wonders if this is a statistically significant result and whether it is an economically significant result. Moreover, how can this result be used by those who want to justify even more spending in K-12, mainly in secondary education? It is difficult to know without the standard deviation of the test scores, so I moved to a second-best approach.
I gathered the data on average ACT score by state from 1994 through 2006. The data included the average score and the percentage of students who took the test. Two states during the sample period began requiring all students to take the ACT (Colorado and Illinois). I created a dummy variable for those states that required the ACT and regressed average score against REQUIRED, the percentage of students that took the test, and yearly dummy variables. I used a fixed effects estimator to control for state-specific unmeasured heterogeneity (of which there is a lot!).
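The within estimator used here can be sketched as follows. Since the actual state panel lives in the linked Stata file, this uses synthetic data with made-up coefficients, demeans by state, and omits the year dummies for brevity:

```python
# Sketch of a fixed-effects (within) regression on synthetic data standing
# in for the state ACT panel; coefficients are chosen to mimic the
# magnitudes discussed in the text, not estimated from real data.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_years = 50, 13                  # 1994-2006
state = np.repeat(np.arange(n_states), n_years)

required = (rng.random(n_states * n_years) < 0.1).astype(float)
pct_tested = rng.uniform(10, 100, n_states * n_years)
state_effect = rng.normal(0, 1, n_states)[state]   # unmeasured heterogeneity
score = (21 - 0.8 * required - 0.012 * pct_tested + state_effect
         + rng.normal(0, 0.05, n_states * n_years))

# Within transformation: demeaning by state sweeps out the fixed effects.
def demean(x, groups):
    means = np.bincount(groups, weights=x) / np.bincount(groups)
    return x - means[groups]

X = np.column_stack([demean(required, state), demean(pct_tested, state)])
y = demean(score, state)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)   # approximately [-0.8, -0.012] by construction
```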
Here's what we get:
How to read these results? Those states that required their students to take the ACT averaged 0.82 points lower in average score. This is not surprising as the population taking the test is considerably larger, forcing many students who would not have taken the test (presumably because they would not have scored as well, or were otherwise less inclined - perhaps because they weren't planning on attending college) to take the test.
Thus the true impact of, say, Colorado requiring the ACT is more accurately calculated by looking at both the impact of the REQUIRED variable (-0.827) and the impact of the higher percentage of students taking the test.
In 2003 the ACT became required in Colorado. In 2002, 99% of students took the ACT, but in 2001 only 62% took the test (indeed, over the sample years before 2001, an average of 63% of Colorado students took the test). The requirement therefore added approximately 38 percentage points to the share taking the test. Thus, the true impact of requiring the ACT was -0.827 + 38×(-0.012) = -1.283 points, a decline of nearly 1.3 points in the average score. Of course, we still don't know anything about the standard deviation, but it is anticipated that this would also increase. Thus, it is difficult to say anything about the coefficient of variation (the standard deviation divided by the mean).
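The back-of-the-envelope combination of the two coefficients works out as:

```python
# Combined effect of Colorado's ACT requirement, using the coefficients
# quoted in the text.
b_required = -0.827       # direct effect of the REQUIRED dummy
b_pct_tested = -0.012     # effect of one more percentage point tested
delta_pct = 38            # approximate jump in percent tested, per the text

total = b_required + delta_pct * b_pct_tested
print(round(total, 3))    # -1.283
```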
On to the time trends, which are represented by the _Iyear_XXXX variables. These coefficients indicate how the average score in the United States, after controlling for the percentage taking the test, whether a state required the ACT, and unmeasured state differences, differed from the average score in 1994.
For instance, in 1997, the average score was 0.075 points higher than in 1994 but the difference was not "statistically significant." In other words, we can't say with reasonable confidence, say at the 95% level, that the parameter 0.075 is actually different from zero. On the other hand, in 2000 the average score was 0.19 points higher than in 1994 and this value is statistically different from zero (notice the P>|t| - the p-value - is less than 0.05).
Indeed, the average score in 2006 was approximately 0.50 points higher than it was in 1994. As average scores approach the perfect score of 36, the yearly gains cannot keep growing; the annual increments will eventually have to shrink as scores near the ceiling. Nevertheless, from the 2005 difference from 1994 (0.381) and the 2006 difference from 1994 (0.497), the annual change from 2005 to 2006 can be calculated as 0.497 - 0.381 ≈ 0.115 points, which is statistically different from zero (p-value of 0.019).
The ACT report claims an unconditional increase of 0.2 points, whereas my conditional results, admittedly not conditional on much, suggest a 0.115 point change. However, testing whether 0.115 is statistically different from 0.2 yields a p-value of 0.087. This means that at the 95% confidence level we cannot claim that 0.115 differs from 0.2, but at the 91% confidence level we can. I'll let the reader decide which way to go.
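The 2005-versus-2006 comparison is a test on a linear combination of two year-dummy estimates. The variances and covariance of those estimates are not reported above, so the values below are hypothetical, chosen only to roughly reproduce the reported p-values:

```python
# Testing (b_2006 - b_2005) against 0 and against 0.2.  The variance and
# covariance terms are NOT reported in the post; the values here are
# illustrative placeholders.
from math import sqrt, erf

b_2005, b_2006 = 0.381, 0.497
var_2005, var_2006, cov_b = 0.0020, 0.0020, 0.0008   # hypothetical

diff = b_2006 - b_2005                 # ~0.116 with rounded coefficients
se = sqrt(var_2005 + var_2006 - 2 * cov_b)

def two_sided_p(z):
    # normal approximation: 2 * (1 - Phi(|z|))
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(two_sided_p(diff / se))          # H0: difference = 0
print(two_sided_p((diff - 0.2) / se))  # H0: difference = 0.2
```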
Notwithstanding some interesting time trends, the omitted variables, such as per-pupil spending, number of teachers, teacher-student ratio, etc., are numerous and can cause considerable problems. The fixed effects estimator accounts for omitted factors that are constant within a state over time, but including more explanatory variables is generally better.
I gathered per-pupil current dollar spending for the years available (1994-2004) and converted the spending into 2004 dollars using the December CPI. Including real per-pupil spending in the original model yields the following results:
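The CPI conversion is just a rescaling. The CPI levels below are illustrative placeholders, not the actual December series used:

```python
# Converting nominal per-pupil spending to 2004 dollars with the CPI.
# The CPI levels and the nominal figure are hypothetical stand-ins.
cpi = {1994: 149.7, 2004: 190.3}     # placeholder December CPI levels
nominal_1994 = 5000.0                # hypothetical nominal spending in 1994

real_2004 = nominal_1994 * cpi[2004] / cpi[1994]
print(round(real_2004, 2))
```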
What do we see here? The parameter estimates on REQUIRED and PERTESTED change a little, suggesting that including real spending is useful. Indeed, more spending does correlate with higher average scores. The parameter estimate suggests that each additional (real) dollar spent per pupil yields an increase in the state's average ACT score of 0.000025 points - in other words, an extra $1,000 per pupil corresponds to about 0.025 points.
The measure of spending used here does not distinguish the type of spending - that is, whether it goes to technology, teachers, administrators, or junkets to Hawaii - but let's be real, this is a blog entry, not a writeup for the American Economic Review.
There are likely all sorts of endogeneity problems with the specification, that is funding might be related to test scores, and vice versa, but we will address this in turn (see below).
As it appears there is a positive relationship between spending and test scores, many will be quick to suggest that we spend more and continue to increase test scores. Yet there is likely some concavity in the relationship between spending and test scores, mainly because there is a natural ceiling on the test score (36). By including the square (quadratic) of per-pupil spending we can test for curvature and estimate the amount of spending that would maximize average ACT scores.
Including the quadratic, we obtain a parameter estimate on per-pupil spending of 0.0000956 and a parameter estimate on the quadratic of -1.33E-09. Using these two parameters, it is possible to estimate the amount of spending, conditional on the other variables in the model, that would maximize the average ACT score.
Sans the fancy calculus, this value is 0.0000956/(2×1.33E-09) ≈ 35,953 (2004 dollars) per student. This is clearly not feasible in the current economic and political environment, and even if it were, such spending would likely carry too high an opportunity cost. This suggests that maximizing the average test score is not likely to be a successful goal (although that won't stop people from trying).
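The vertex calculation, using the rounded coefficients quoted above (which give a slightly different value than the 35,953 figure, presumably computed from unrounded estimates):

```python
# For score = ... + b1*spend + b2*spend^2 with b2 < 0, the maximizing
# spending level is the vertex of the parabola, -b1 / (2*b2).
b1 = 0.0000956     # rounded coefficient on per-pupil spending
b2 = -1.33e-09     # rounded coefficient on its square

spend_star = -b1 / (2 * b2)
print(round(spend_star))   # ~35,940 with rounded inputs
```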
Just to be complete, it is possible to estimate what the average ACT score would be if 60% of students take the test voluntarily and spending is $36,000 per pupil:
21.10298 - 60×0.0219 + 36000×0.0000956 - 36000²×1.33E-09 ≈ 21.82
In other words, the maximized average (given our results here) would be 21.82 points, with a 95% confidence interval of [21.30, 22.35]. This suggests that additional spending would have little economic impact on average ACT scores (though test scores are only one dimension; additional spending might influence other outcomes more dramatically).
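The fitted quadratic can be wrapped in a small function to confirm the concavity. Note that with the rounded coefficients shown here the prediction at $36,000 comes out a bit below 21.82; the post's figure presumably reflects the full-precision estimates:

```python
# Fitted relationship between spending and the average ACT score, holding
# the percent tested at 60 and using the rounded coefficients from the text.
def predicted_score(spend, pct_tested=60):
    return (21.10298 - 0.0219 * pct_tested
            + 0.0000956 * spend - 1.33e-09 * spend ** 2)

# Concavity: the score near the vertex beats spending levels on either side.
assert predicted_score(35940) > predicted_score(30000)
assert predicted_score(35940) > predicted_score(42000)
print(round(predicted_score(36000), 2))   # ~21.51 with rounded inputs
```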
There are concerns about the endogeneity of spending as it is likely related to test scores. Using fancy econometrics, I estimated a dynamic panel data model based on the Arellano-Bond specification. These results yield:
Arellano-Bond dynamic panel-data estimation, one-step difference GMM results

I am trying the patience of the reader, I understand, but bear with me for just a little longer if you have made it this far. These results suggest that there might be considerable bias in the estimated impact of real spending on average ACT scores: the return to the marginal dollar rises from 0.00006 to 0.0004 points (roughly 560%!).
However, the dynamic panel data estimates suggest that average ACT scores might be maximized at approximately $30,000 (2004 dollars) per pupil. Although the bias is large, the ultimate conclusion - that it is unlikely we can spend enough to maximize average scores - is not overturned.
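The logic behind the dynamic-panel correction can be illustrated with the simplest member of that family, an Anderson-Hsiao style instrumental-variables estimator on synthetic data. The post's actual estimator is one-step difference GMM, which uses all available lags as instruments; this sketch uses just one:

```python
# First-differencing removes the state effects; the differenced lag is
# then instrumented with a deeper lag in levels.  Synthetic data; a
# sketch of the logic, not the estimator actually reported in the post.
import numpy as np

rng = np.random.default_rng(1)
n, T = 200, 12
alpha = rng.normal(0, 1, n)            # state fixed effects
rho = 0.5                              # true persistence of scores
y = np.zeros((n, T))
y[:, 0] = alpha + rng.normal(0, 1, n)
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + alpha + rng.normal(0, 1, n)

dy = np.diff(y, axis=1)                # differencing removes alpha
lhs = dy[:, 1:].ravel()                # dependent variable: Δy_t
rhs = dy[:, :-1].ravel()               # Δy_{t-1}, endogenous regressor
z = y[:, :-2].ravel()                  # instrument: the level y_{t-2}

rho_hat = (z @ lhs) / (z @ rhs)        # simple just-identified IV estimate
print(rho_hat)
```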
Back to the original question: is the increase in ACT scores economically significant? My answer is that it is not tremendously significant on its own. However, the long-term upward trend in scores might be good news and suggests that spending on education is not completely wasted.
[STATA data file]