Online Data Archive

ACHIEVEMENT EFFECTS OF THE MILWAUKEE VOUCHER PROGRAM

John F. Witte

 

Please note: DPLS converted numerous MS Word documents as they were received to produce this HTML document. Footnotes are accessible via the Footnotes link. Some tables had to be scanned and are available as .GIF images. One table, Appendix Table C, is not available at this time.

 

ABSTRACT

 

This paper describes achievement test results from 1991 to 1994 for the Milwaukee Parental Choice Program (MPCP). Until recently the MPCP was the only education voucher program in the United States. The subject of the paper is how students enrolled in private schools with public vouchers performed on standardized tests in comparison to relevant control groups. Choice students, enrolled in private schools with vouchers, are initially compared to two control groups in this study: (1) a random, non-choosing sample of Milwaukee Public School (MPS) students; and (2) a sample of non-selected Choice applicants who were randomly rejected from Choice schools when particular schools were oversubscribed (henceforth Rejects).

The Choice vs. MPS comparison indicates absolutely no differences in math and a weak advantage in reading for MPS students. The latter effect becomes statistically insignificant once we correct for missing test data.

The Choice vs. Rejects comparison finds no differences in reading. For math, however, Choice students do better than Rejects, especially in the third and fourth years. In one of the models, the differences between the groups in the third and fourth years are estimated at approximately half of a standard deviation in a single year. If this were accurate, it would clearly constitute a miraculous result for inner-city education.

I argue that the Reject comparison is invalid on several grounds. First, 52% of the Rejects never returned to MPS, and hence dropped out of the "experiment." The non-returning Reject students were from higher income, more educated families, thus leaving the few students in the Reject comparison group (who went back to MPS) as an initially underachieving group, which we would anticipate would have less potential for achievement gains in the future. Figure 1 demonstrates these differences for the "1990 cohort," which becomes the telltale fourth-year group. The Choice and MPS students are similar across the four years, while the Rejects begin considerably lower and decline further over time.

Second, the difference between Choice and the remaining Rejects can be explained by a few students (5 of 27) who scored the lowest possible math score on the Iowa Tests of Basic Skills. Those scores were likely attained by simply not doing the test. When the lowest possible scores are eliminated for both Choice and Rejects, the fourth-year math effect becomes statistically insignificant.

Finally, to test how comparable the Reject students are to other students in MPS, I compared Rejects to the random sample of non-choosing MPS students who also qualified for the Choice program. The MPS students not only achieved at higher levels than the Rejects, but the differences were greater than those in the Reject-Choice comparison. And the Rejects were attending school and being tested in MPS. This raises the question, who exactly are the Rejects? Or, more precisely, to whom would we generalize these results even if they were valid? Certainly not to the MPS low-income students who were eligible for the program but did not choose to participate; that group far outperformed the Rejects.

Thus the Reject-Choice result seems to be totally conditioned on the aberrant nature of the Rejects who remained in the "experiment." I argue the comparison is completely specious. Rather, the Choice and MPS student comparison remains the most appropriate comparison. That there is no essential difference between these groups is the most valid conclusion.


ACHIEVEMENT EFFECTS OF THE MILWAUKEE VOUCHER PROGRAM


John F. Witte

 

Department of Political Science

Robert La Follette Institute

University of Wisconsin-Madison

Madison, WI 53706

608-263-2029

email: witte@lafollette.wisc.edu


Revised 2/7/97


Paper presented at the 1997 American Economic Association Annual Meeting, The New Orleans Hilton Hotel, January 4-6, 1997.


ACHIEVEMENT EFFECTS
OF THE MILWAUKEE VOUCHER PROGRAM*

John F. Witte

 

I Introduction

This paper describes some key results of the only education voucher program in the United States - at least the only one for which we currently have any systematic information and data. The Milwaukee Parental Choice Program (MPCP) began in 1990. The paper reports results from the first four years of the program. Beginning in September 1996, a very similar program began in Cleveland, Ohio. The main difference between the programs is that the Cleveland program includes, and mostly enrolls, students in parochial schools, which to date have not been included in the Milwaukee program. Before focusing explicitly on the MPCP, it is useful to place both the program and the general idea of educational vouchers in historical and institutional context.

The General Theory. The two eternal issues of American education policy are: (1) how to improve achievement, and (2) who will achieve at higher rates. The achievement issue was the focus of Milton Friedman’s original theory and policy proposal (1955, 1962). Friedman argued that the neighborhood effects of quasi-monopolistic public education would lead to inefficiencies on both the consumption and production sides. Consumers would be limited to local schools, which might not be the best match or option; production would be characterized by classic monopoly overpricing and inefficiency. The answer was to provide all students with vouchers equivalent to the cost of their education, which could be used to purchase education at any school. The predicted results: more efficient production of education and a commensurate increase in student learning. The equity issue was not directly addressed.

The general theory was devoid of any specification of exactly how schools would produce superior results. If competition alone is the impetus for improvement, then the mechanisms need not be spelled out. Market iteration will sort out schools for consumers based on superior production. Curriculum, organization, pedagogy, and governance structures of the superior schools will survive, and presumably become the models for other schools.

Friedman’s original proposal contained few specifics, which subsequent scholars and policy experts have argued are crucial in defining a viable voucher program and, more important, defining who might benefit from such a program (Murnane, 1986; Levin, 1987; 1990). The design of a program will affect who benefits. A limited and targeted voucher program, such as the MPCP, may have the advantage of providing benefits for students most in need of help, while those same students might be harmed by a broad-based, unlimited voucher system. Those who foresee a broad market system, with the wonders of competition spread ubiquitously across the student population, argue either that most students will benefit, or at least that benefits will more closely match abilities. Those more pessimistic argue that existing inequities in our education system will grow because vouchers will allow for further stratification by socioeconomic status, race, and/or ability. Which way vouchers lead may depend on the existing private-school market.

Private Education in America. From the mid-nineteenth century, primary and secondary education in the United States has been primarily a public function. Private schools have enrolled between 12% and 15% of all students throughout most of this century. Private education expanded considerably following the Third Plenary Council of Catholic bishops, held in Baltimore in 1884. The persistent dominance of Protestants in the common (and public) school movement led the Catholic bishops to require their followers to build their own school system and to attend exclusively Catholic schools. Catholic schools have been the mainstay of private education ever since. The Catholic dominance of private education peaked in 1965, then accounting for 90% of the religious private-school market and over 75% of total private-school enrollment. Since 1965, there has been a steady decline in Catholic enrollment, to slightly below 50% today. However, religious private schools continue to enroll approximately 84% of all private-school students. The Catholic decline of the last thirty years has been made up by the rapid growth of Baptist, Evangelical, and Pentecostal Christian schools. With the exception of schools serving the wealthy and a few independent private schools serving specialized groups or inner-city students, American private education has been, and continues to be, religiously based (Witte, 1996a).

This brief synopsis of private education is relevant because observers of the public-private school debate in America need to analyze both consumer and supplier assumptions in the educational "market." History suggests that neither the consumption nor production of education flow from price, quality, or profit considerations. Consumption and production in the public sector follow choices of where families live. Geographic location also dictates most of what happens in the private sector, with religion explaining most of the rest of the variance. Where one lives of course is a major factor of choice and is undoubtedly the primary competitive factor in education. Thus the exodus from central cities (and their schools) is the most important educational choice phenomenon of our time.

The premise of targeted, inner-city voucher programs is to extend the choices represented by geographic mobility, or the purchase of a private-school haven, to those who cannot afford either. One cannot assume that such a policy will avoid entanglement with religion. And one also cannot assume that the benefits or harms of a limited program will extend to a wide-open voucher program. The latter issue has haunted me and others thinking about these types of programs. We have two concerns: one political, one analytical. Politically, the issue is whether programs like the MPCP are stalking horses for full-fledged voucher programs. Analytically, the issue is whether the results of the Milwaukee experiment might generalize beyond the quite severe limitations of the program. The political issue is not a part of this paper (see Witte, 1996a); the analytical issue is addressed in the conclusion.

 

II The Milwaukee Voucher Program

The Initial Program. The Milwaukee Parental Choice Program, enacted in spring 1990, allowed students living in Milwaukee who met specific criteria to attend private nonsectarian schools located in the city. For each Choice student, in lieu of tuition and fees, schools receive a payment from public funds equivalent to the Milwaukee Public School (MPS) per-member state aid (estimated to be $4,200 in 1996-97). Students must come from families with incomes not exceeding 1.75 times the national poverty line. New Choice students initially could not have been in private schools in the prior year or in public schools in districts other than MPS. The total number of Choice students in any year was limited to 1% of the MPS membership in the first four years, but was increased to 1.5% beginning with the 1994-95 school year.

Schools initially had to limit Choice students to 49% of their total enrollment. The legislature increased that to 65% beginning in 1994-95. Schools must admit Choice students without discrimination based on race, ethnicity, or prior school performance (as specified in s. 118.13, Wisconsin Statutes). Both the statute and administrative rules specify that pupils must be "accepted on a random basis." This has been interpreted to mean that if a school was oversubscribed in a grade, random selection was required in that grade. In addition, if one child from a family already attended the school, a sibling was exempt from random selection even when random selection was required in the sibling's grade.

The New Program. The legislation was amended as part of the biennial state budget in June 1995. The principal changes were: (1) to allow religious schools to enter the program; (2) to make students in kindergarten through grade three who were already attending private schools eligible for the program; (3) to increase the number of students allowed in the program over three years to a maximum of 15,000 (from approximately 1,500 allowed prior to 1995); (4) to allow 100% of the students in a school to be Choice students; and (5) to eliminate all funding for data collection and evaluations (the Legislative Audit Bureau is required simply to file a report by the year 2000). Thus, unless the legislation changes, data of the type collected for this paper and previous reports will not be available for the report to be submitted in the year 2000. The evidence reported in this paper is based on the initial program, with the slight modifications to student caps enacted in 1993.

Legal Challenges and the Current Status of the Program. The original program was challenged immediately upon enactment as violating the Wisconsin Constitution. The circuit court denied those challenges in August 1990 and also exempted the private schools from complying with the Wisconsin All Handicapped Children Act, which means the private schools need not admit any disabled students. The circuit court ruling was overturned by the appeals court in November 1991, but in a 4-to-3 decision the Wisconsin Supreme Court upheld the constitutionality of the statute in March 1992.

The legislative changes enacted in 1995 were also quickly challenged in court as violating the First Amendment to the U.S. Constitution as well as several provisions of the Wisconsin Constitution. The governor asked that the case be remanded immediately to the Wisconsin Supreme Court to expedite the process. Both sides agreed, and the court heard the case in the spring of 1996. The court split 3-3, and the case was remanded to Dane County Circuit Court. Oral arguments were heard in August 1996. The judge was asked to rule on an injunction halting the program until all issues could be resolved. He ruled instead that, while the case went through appeal, all aspects of the new program would remain in place, except that parochial schools would not be allowed to participate. A final hearing in the circuit court was held December 23, 1996, with a ruling expected in early January 1997. Whatever the ruling, it will be appealed. During the appeals process (usually three to four years), the program remains limited to secular private schools, and there will be no further evaluation of the program.

Enrollment in the Choice Program. Enrollment statistics for the Choice program are provided in Table 1. Enrollment in the Choice program increased steadily but slowly, never reaching the maximum number allowed by the law. September enrollments were 341, 521, 620, 742, and 830 from 1990-91 through 1994-95. The number of participating schools was 7 in 1990-91, 6 in 1991-92, 11 in 1992-93, and 12 in the last two years. The number of applications also increased, again with the largest increase in 1992-93. In 1993-94 and 1994-95, however, applications leveled off at a little over 1,000 per year. Applications exceeded the number of available seats (as determined by the private schools) by 171, 143, 307, 238, and 64 from 1990-91 through 1994-95. Some of these students eventually filled seats of students who were accepted but did not actually enroll. The number of seats available exceeded the number of students enrolled because of a mismatch between applicant grades and the seats available by grade. It is difficult to determine how many more applications would have been made if more schools had participated and more seats had been available. In 1992-93, when the number of participating schools increased from 6 to 11, applications rose by 45%. From 1993 to 1995, however, seats available increased by 22% and 21%, but applications increased only 5% from 1992-93 to 1993-94 and declined in 1994-95. Thus it is hard to argue that, for most of the 95,000 students in MPS, the MPCP was seen as the road to educational salvation.

Enrollment in the Choice schools was far from even. Although by 1994 twelve schools were in the program, three schools (two with almost all African American students and one with over 90% Hispanic students) accounted for over 90% of the students who comprise this analysis. All three had histories extending back at least thirty years. All were former Catholic schools that had been converted into "community schools" when the Milwaukee diocese gave them up in the late 1960s.

The other schools in the program were either limited-enrollment Montessori programs (mostly pre-school) or alternative middle- or high-school programs for at-risk students. Other than the three primary schools, which had traditionally served inner-city students, schools in the program were "cautious" in the slots they allocated to Choice students.

 

III Research Issues, Designs and Data

Research Issues. The MPCP does not provide an adequate test of the overall claims for educational voucher systems. Those claims rest on a fully autonomous educational system in which consumers have more or less unconstrained choices and suppliers are numerous and able to respond freely to market demands. The MPCP was too small and too constrained to provide even the roughest approximation of a test of that theory.

However, there are two other, nontrivial policy issues on which the program can provide useful information. The first issue is, who will choose? Which inner-city families will take advantage of the increased opportunities and apply for the private-school options? The second issue is, what are the results in terms of educational outcomes?

The answer to the first question is definitive, although the interpretation is open to debate. Annual reports and analyses have produced a consistent pattern of who applied to the MPCP, and why (Witte et al., 1994, 1995; Witte and Thorn, 1996). The profile of families both applying and enrolling in Choice schools is consistent from year to year. They were very poor (under $12,000 annual income; 60% on AFDC); mostly single, female-headed families (75%); and most likely to be Black (74%) or Hispanic (19%). They were also very dissatisfied with the prior public schools their children attended. In addition, their children were not doing well in those schools (based on both prior test data and behavioral indicators). On the other hand, the parents of the Choice applicants had considerably more education than the average MPS parent and, on all measures of parental involvement, were more involved in their prior public schools. They also had higher educational expectations for their children and viewed education as more important than the average MPS parent (Witte and Thorn, 1996).

The interpretation of these results is not simple. For those who value opportunity and believe that at least some inner-city families are trapped in schools that are not beneficial for their children, the MPCP provides an option not available in any other way. Unlike the majority of American families, they do not have the option of moving to another (suburban) school district or purchasing "better" private-school education. On the other hand, these families are headed by more highly educated parents, who value and are engaged in their child’s education. Thus, Albert Hirschman might consider them the ideal "voice" parent (Hirschman, 1970).

The second question (what are the educational outcomes of the MPCP?) is much more complex and controversial. The issues involve definitions and measurement of outcomes, time frames, and causal modeling of factors influencing achievement. Most researchers begin a discussion of student educational outcomes with a call for broad measures (content and norm-referenced tests, attainment, behavior, and attitudes and educational preparation for future education). Some of the wiser scholars are also concerned with system effects - such as the health and funding of educational institutions. In the vast majority of studies, however, outcomes are reduced to test scores on multiple-choice, standardized tests. This paper follows the latter tradition, but past and future work is fortunately not so limited.

The specific question which is the subject of the remainder of this paper is how did students enrolled in private schools with public vouchers perform on standardized tests (Iowa Tests of Basic Skills) in comparison to relevant control groups?

Research Design. Choice students, enrolled in private schools with vouchers, are initially compared to two control groups in this study: (1) a random, non-choosing sample of Milwaukee Public School (MPS) students; and (2) a sample of non-selected Choice applicants who were "randomly" rejected from Choice schools when particular schools were oversubscribed (henceforth Rejects).

Theoretically, each of the groups provides useful comparisons. The MPS sample gives us the needed comparative information on who applies. It is also a critical reference group for the achievement issue if we want to generalize to the population of potential applicants. The Rejects offer a potentially useful comparison on achievement because they may provide a natural experiment in which unmeasured selection bias is eliminated. Studies of public-private school achievement have been plagued by the fear (for which there is modest empirical evidence) that models of selection into private or select schools cannot account for unmeasured factors affecting both selection and achievement (Witte, 1992; 1996b; Gamoran, 1996). The Rejects, in theory, eliminate this factor in achievement estimation models.

The design presented relies exclusively on "value-added" approaches to estimating achievement gains. Value-added models are based on the assumption that educational systems (classrooms, schools, etc.) should be judged on the marginal educational product they produce. In practice this means either using some form of change-score measure or modeling achievement with the inclusion of prior achievement as an independent variable. The latter is used in this study. For simplicity, and to better explain results to wider audiences, we rely in this analysis on year-to-year achievement estimates.
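In regression form, the value-added specification used here amounts to regressing the post-test on the prior year's reading and math scores plus a treatment indicator. The sketch below, in Python with synthetic data and hypothetical column names (pre_read, pre_math, treat, post_math), is offered only to fix ideas; it is not the study's code.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "pre_read": rng.normal(40, 15, n).clip(1, 99),
        "pre_math": rng.normal(40, 15, n).clip(1, 99),
        "treat": rng.integers(0, 2, n),   # 1 = Choice, 0 = control
    })
    # Synthetic post-test: prior achievement persists, with noise and no
    # true treatment effect built in.
    df["post_math"] = (10 + 0.5 * df["pre_math"] + 0.2 * df["pre_read"]
                       + rng.normal(0, 10, n)).clip(1, 99)

    # Value-added model: post-test on both prior tests plus treatment.
    fit = smf.ols("post_math ~ pre_read + pre_math + treat", data=df).fit()
    print(fit.params["treat"], fit.bse["treat"])

Under this setup the coefficient on treat estimates the marginal effect of a year in the treatment group, net of prior achievement.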

Data. The study on which this paper is based employed a number of methodological approaches. Surveys were mailed in the fall of each year from 1990 to 1995 to all parents who applied for enrollment in one of the Choice schools. Similar surveys were sent in May and June of 1991 to a random sample of 5,474 parents of students in the Milwaukee Public Schools. Among other purposes, the surveys were intended to assess parent knowledge of and evaluation of the Choice program, educational experiences in prior public schools, the extent of parental involvement in prior MPS schools, and the importance of education and the expectations parents hold for their children. We also obtained detailed demographic information on family members. A follow-up survey of Choice parents assessing attitudes relating to their year in private schools was mailed in June of each year. Survey response rates are reported in Appendix Table D.

Survey data were augmented with data from the MPS Student Record Data Base (SRDB). This included demographic data, enrollment and school data, and linkages to test files. Students in all three of the relevant samples may have had MPS data. Choice and Reject students may have had prior information in MPS; Rejects could also have had information in MPS files following their rejection from the program (assuming they returned to MPS, which many did not). Other than survey information, MPS student information came solely from the SRDB and MPS test files.

Test data are from the Iowa Tests of Basic Skills (ITBS) for students from kindergarten through the eighth grade. No high school students are included in the analysis because no traditional high schools participated in the MPCP; this is another limitation of the study. The two test statistics we report are reading and "comprehensive math" scores. We rely on Normal Curve Equivalents (NCEs), which are normalized National Percentile Rankings. The scores range from 1 to 99, with a mean of 50.
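For readers unfamiliar with the NCE metric: the standard definition maps the national percentile rank through the inverse normal distribution, yielding an equal-interval scale with mean 50 and a standard deviation of roughly 21.06, on which percentiles 1, 50, and 99 coincide with NCEs of 1, 50, and 99. A small sketch of that standard conversion (not code from the study):

    from scipy.stats import norm

    def percentile_to_nce(npr: float) -> float:
        """Convert a national percentile rank (1-99) to a Normal Curve Equivalent."""
        z = norm.ppf(npr / 100.0)             # z-score matching the percentile
        return min(max(50.0 + 21.06 * z, 1.0), 99.0)

    for p in (1, 25, 50, 75, 99):
        print(p, round(percentile_to_nce(p), 1))   # 1.0, 35.8, 50.0, 64.2, 99.0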

In addition, detailed case studies were completed in April 1991 in the four private schools that enrolled the majority of the Choice students. An additional study was completed in 1992, and six more case studies in the spring of 1993. Case studies of the K-8 schools involved approximately 30 person-days in the schools, including 56 hours of classroom observation and interviews with nearly all of the teachers and administrators in the schools. Smaller schools required less time. Researchers also attended and observed parent and community group meetings, and Board of Director meetings for several schools. Results of these case studies were included in the December 1994 report.

Finally, beginning in the fall of 1992 and continuing through the fall of 1994, brief mail and phone surveys were completed with as many parents as we could find who chose not to have their children continue in the program. These brief surveys identified why students were leaving the Choice schools and where they were subsequently attending school.

In accordance with normal research protocols and agreement with the private schools, to maintain student confidentiality, reported results are aggregated and schools are not individually identified.

Analysis. The data presented below follow several consistent, but redundant, patterns. I begin by presenting data from what I consider the main analysis, comparing Choice students with the MPS control group. That is followed by an analysis of Choice compared to Rejects. The issues are: (1) What is the mean treatment effect of being in the Choice schools? and (2) What are the "trend" effects of being in the program or treatment over time? I model these effects quite simply. All estimates include prior tests for both reading and math from the year before the test in question. Additional control variables are not presented in outcome tables. However, to indicate their effects, Appendix Table B provides a full model for the Choice-MPS comparison.

The first models presented include only the mean treatment effect, with no trend variables. The second and third models provide trend effects in two ways: (1) as a linear trend of being in the treatment group, while controlling for the time-in-program for all subjects; (2) as a series of indicator variables measuring the years in treatment, with the time-in-program variable also included.
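A minimal sketch of these three specifications, using hypothetical variable names that mirror the paper's labels (Treat, Treatxyr, Yrsinprg, Treatyr1-Treatyr4) and synthetic data, may make the distinctions concrete; it is illustrative only.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 800
    df = pd.DataFrame({
        "pre": rng.normal(40, 15, n),
        "yrsinprg": rng.integers(1, 5, n),   # time in program, all students
        "treat": rng.integers(0, 2, n),      # 1 = Choice
    })
    df["treat_year"] = df["treat"] * df["yrsinprg"]  # 0 for MPS, 1-4 for Choice
    df["post"] = 0.7 * df["pre"] + rng.normal(0, 10, n)

    # (1) mean treatment effect only
    m1 = smf.ols("post ~ pre + treat", data=df).fit()
    # (2) mean effect plus a linear years-in-treatment trend, controlling
    #     for every student's time in the program
    m2 = smf.ols("post ~ pre + treat + treat:yrsinprg + yrsinprg", data=df).fit()
    # (3) discrete indicators for each year in treatment
    m3 = smf.ols("post ~ pre + C(treat_year) + yrsinprg", data=df).fit()
    print(m1.params["treat"], m3.params.filter(like="treat_year"))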

The three different ways of analyzing "treatments" are compounded by the different levels of data presented. We have SRDB data on almost all of the people in the program. However, less than 50% responded to our surveys, which provided much richer data on demographics, behavior, and attitudes. Thus the analysis which follows is split between estimates based solely on SRDB data and estimates based on the Full Variable set, which includes both SRDB data and survey data. Note the resulting differences in sample sizes.

Finally, several methods of controlling for selection bias - both into and out of the program - were attempted. Several failed for reasons described below, and those which were used had surprisingly little effect.

 

IV ACHIEVEMENT OUTCOMES

The CHOICE vs MPS Samples. Most of the published reports on the MPCP to date have focused on the comparison between the students who enrolled in private schools under the Choice program and the random sample of non-choosers in MPS. The latter were selected in spring 1991. For purposes of this study, MPS students begin with their test and SRDB records in 1990. Choice students entered the program as they desired. The analysis below is thus based on program years from 1990 on for MPS students and from the year of application for the Choice students.

The results of comparing Choice and MPS students are presented in Tables 2 and 3. Each table portrays different methods of presenting treatment and trend results. One of the lingering questions in this debate is, what is the appropriate control group of MPS students? Because the MPCP limits participation to families with incomes at or below 175% of the poverty line, a comparable income control group in MPS would be students who qualify for free lunch (135% of the poverty line) or reduced-price lunch (185% of the poverty line). Using only free-lunch-qualified MPS students is a better match on current qualification but, depending on the population to which we wish to generalize, not necessarily a better comparison group. Using the broader group, one could also include a control variable for free-lunch qualification. We have run the results both ways, and there is very little difference. Thus the results presented for the remainder of the paper include only free-lunch-qualified students in the MPS sample. Since all but 3% of the MPS students who qualify for free lunch qualify for full free lunch, that group of students would almost universally qualify for the MPCP. A table replicating the results of Table 2, including all MPS students and a dummy variable for free lunch, is included as Appendix Table B.

Table 2 indicates the "treatment" effects (of students being in a Choice school) for two levels of data and for three different models of treatment effects. The top panel only includes SRDB variables as control variables. They include: race dummies, gender, student grade, and prior reading and math tests (one year prior). The bottom panel includes a richer set of variables based on survey data (note the subsequent decline in sample size). The three models include: (1) a mean treatment effect across all four years (Treat) with no trend variables included; (2) a mean treatment effect with additional linear treatment effects (Treatxyr), plus a variable to control for the time in program of all students (Yrsinprg); and (3) the treatment trend broken into a series of discrete dummy variables representing the student’s year in Choice (Treatyr1, Treatyr2, etc.). Means and standard deviations for all variables are included in Appendix Table A.

Because we have "stacked" the data, which means that a single student may be included in more than one year, there is a question of whether the observations are truly independent. The lack of independence affects the estimates of the standard errors both in terms of autocorrelation and heteroskedasticity. To avoid this potential problem, robust estimates of standard errors are included in all relevant tables (Huber, 1967; White, 1980). In all cases the adjustments to the OLS standard errors are extremely small.
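A sketch of that variance adjustment, again with synthetic stacked data and hypothetical names: the paper reports Huber-White (HC) adjustments; the clustered variant in the last line is a further option shown for comparison, not something the paper claims to use.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n_students, n_years = 300, 3
    df = pd.DataFrame({
        "student": np.repeat(np.arange(n_students), n_years),  # stacked panel
        "pre": rng.normal(40, 15, n_students * n_years),
        "treat": np.repeat(rng.integers(0, 2, n_students), n_years),
    })
    df["post"] = 0.7 * df["pre"] + rng.normal(0, 10, len(df))

    model = smf.ols("post ~ pre + treat", data=df)
    plain = model.fit()                    # classical OLS standard errors
    robust = model.fit(cov_type="HC1")     # Huber-White robust standard errors
    clustered = model.fit(cov_type="cluster",
                          cov_kwds={"groups": df["student"]})
    print(plain.bse["treat"], robust.bse["treat"], clustered.bse["treat"])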

The most obvious conclusion from the upper panel of Table 2 is that none of the coefficients varies greatly from zero, and none even approaches conventional levels of significance in testing whether the effects deviate from zero. The quality of the models is better captured in Appendix Table B, which includes the coefficients for the remaining independent variables and an indicator variable for free-lunch qualification. Prior tests are always highly significant predictors of post-tests, and race and gender act as expected: the former pulls scores down for minority students, especially Blacks, and girls tend to be better readers, but not better in math. Grade usually has a negative coefficient, signifying that inner-city students decline relative to national norms over time; that decline is picked up by, and correlated with, years in the program. A comparison of the treatment effects in Table 2 and Appendix Table B also indicates that it is more or less irrelevant whether we include all MPS students (with a control variable for free lunch) or just the free-lunch-qualified students in the analysis.

The bottom half of Table 2 provides an anomaly to this "no difference" finding. The coefficient on reading, given the full set of control variables, is negative in all three models. The mean treatment effect is -1.75, with a t-value of -2.5. In the trend models (column 2) it remains significant, although it would be offset by the positive years-in-treatment coefficient (which is not significant). Finally, it is quite clear in column 3 that the negative impact on Choice students occurs in the first two years. The reading results for Choice students improve after 3 and 4 years in the schools, although their estimated scores are no better than those of MPS students.

Why the difference between the SRDB and the Full Variable sample? The answer may be that Choice parents, while more disgruntled, were also more educated and had higher expectations for their children. Once these are controlled for, the expected test results of their children are higher than with the SRDB variables alone. This is offset by the almost universal positive effect of family income, which is lower in the Choice households even after we exclude non-free lunch students from MPS.

Because there is enormous potential for selection bias both coming into this program and leaving it (defined in this study as not being tested in a given year), we have conducted several tests on all the relevant samples. Selection into the program was difficult to model because the usual instrumental variables did not work in this context. The two most often used and cited instruments for school selection are school distance (Kane and Rouse, 1993, for college enrollment) and religion (Hoxby, 1996). This program, being nonsectarian, nullified the latter, and the court-ordered busing program in Milwaukee undermined the former. Inclusion of various models with inverse Mills ratios in the first-stage selection equation had no effect once the other control variables were included. In part this may be due to the requirements of value-added assessment: prior tests, in combination with other control variables, may simply offset any potential Choice selection effects.

I was more successful in modeling selection "out of the program," defined in any year as not having a post-test. However, the results of the corrections were modest. The absence of test data for Choice and MPS students occurred in a number of ways. First, for Choice students there was high and sustained attrition from the Choice schools. On average, 30% of the students left each year (Witte et al., 1996; Wis. Legislative Audit Bureau, 1995). This included the bankruptcy and closing of one school during this study and two more private schools which closed in the 1995-96 academic year. In addition, some students in private schools were not tested, either because they were in kindergarten or first grade (some first graders were tested) or because they missed the test day.

An earlier study, published in "The Fourth Year Report," analyzed attrition from the Choice sample over the four years of this study. Both descriptive data (Witte et al., 1994, Table 18) and a multivariate logit model (Witte et al., 1994, Table D1) confirmed that students leaving Choice schools were in higher grades, were doing less well in math, and had parents who expressed much higher levels of dissatisfaction with the Choice schools than the parents of students remaining in the schools. These factors could clearly affect results over time, with the "better" Choice students likely to stay the course into the highly touted third and fourth years. A mitigating factor is that, by statutory requirement, Choice students had to be tested every year; with the grade exceptions noted above, there were probably few students who went untested.

Attrition from the MPS sample was due either to moving out of the system or to not being tested in a given year. The latter was much more prevalent because Milwaukee tested all students only in the 2nd, 5th, and 7th grades. However, in compliance with Chapter 1 requirements, students qualifying for Chapter 1 aid were required to be tested every year. In addition, schools with a large majority of Chapter 1 students often tested the whole school. Because Chapter 1 is means-tested, poor, minority students were more likely to be included in MPS testing. Combined with the upward SES bias of the students leaving the MPS system, missing tests could introduce a major set of biases into the Choice-MPS comparison. The dual effects of retaining the better students in Choice, while "overtesting" and retaining the weaker students in MPS, suggest that the combined biases would work to the disadvantage of MPS students.

One test of the general problem of missing data is provided by using two-stage Heckman correction models to model the missing data and then re-estimate the OLS regressions (Heckman, 1979). That procedure seemed to work well, with first-stage results (available from the author) indicating that more treatment-group tests were available than MPS tests, and that Black and Hispanic students from lower-income, less educated families were more likely to be tested.
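The following is a minimal sketch of the textbook two-stage recipe (Heckman, 1979), with made-up variables standing in for the study's actual selection equation: a probit models whether a post-test is observed, an inverse Mills ratio is computed from the fitted index, and the ratio enters the second-stage OLS.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from scipy.stats import norm

    rng = np.random.default_rng(3)
    n = 1000
    df = pd.DataFrame({
        "pre": rng.normal(40, 15, n),
        "treat": rng.integers(0, 2, n),
        "low_income": rng.integers(0, 2, n),
    })
    # Testing is made more likely for treated students (annual testing in
    # Choice schools) and for low-income students (Chapter 1 requirements).
    latent = -0.2 + 0.8 * df["treat"] + 0.6 * df["low_income"] + rng.normal(0, 1, n)
    df["tested"] = (latent > 0).astype(int)
    df["post"] = np.where(df["tested"] == 1,
                          0.7 * df["pre"] + rng.normal(0, 10, n), np.nan)

    # Stage 1: probit for having a post-test; inverse Mills ratio from its index.
    X1 = sm.add_constant(df[["treat", "low_income"]])
    probit = sm.Probit(df["tested"], X1).fit(disp=0)
    xb = X1.to_numpy() @ probit.params.to_numpy()
    df["mills"] = norm.pdf(xb) / norm.cdf(xb)

    # Stage 2: OLS on the tested subsample, with the Mills ratio as a regressor.
    tested = df[df["tested"] == 1]
    X2 = sm.add_constant(tested[["pre", "treat", "mills"]])
    print(sm.OLS(tested["post"], X2).fit().params)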

However, the effects on the re-estimated models, depicted in Table 3, are modest. The one effect of note is that the reading results which favored MPS students in the Full Variables model are no longer significant (lower panel, columns 1-3, Tables 2 and 3). Thus the adverse selection out of Choice may be offset by the fact that almost all students in Choice were tested every year.

In summary, the Choice vs MPS comparison indicates absolutely no differences in math and a weak advantage in reading for MPS students. The latter effect becomes statistically insignificant once we correct for missing test data.

The CHOICE vs REJECTS Samples. As noted in the research design section above, the Rejects hold out great potential as a control group in that theoretically they provide a natural control on selection bias. That is, they should possess the same propensity for unmeasured characteristics affecting educational outcomes as the Choice students. The randomization thus creates a potential natural experiment which we had hoped to exploit from the beginning of this project.

We have not previously explored this comparison in detail because from the very beginning we were aware of several problems with "the experiment." The problems are numerous enough that a list will be more concise.

    1. Random selection was used only for students in certain grades in particular schools where there was oversubscription. For example, one African American school usually admitted everyone, while the other Black school always had waiting lists.
    2. There was a "sibling" rule which meant that if an applicant had a sibling already in the school, the applicant was exempt from the lottery. No data exist on who was admitted under the rule.
    3. After the first year, students were allowed to enter from waiting lists after the beginning of the year. No formal rules existed as to how those lists were maintained.
    4. There was no oversight of the random selection process, or accounting in any form of students rejected because of disabilities.
    5. The actual number of Rejects was small, especially in the first two years.
    6. A significant proportion (52%) of rejected students disappeared for programmatic purposes, meaning there was no subsequent test information on them. Thus, if this is to be compared to a medical experiment, over half never got the placebo. As will be shown below, those who exited were hardly a random sample of all Rejects.

 

Despite the limitations, this comparison clearly requires attention, especially given the enormous public relations campaign surrounding a paper released in August 1996 touting the enormous "successes" of the MPCP (Greene et al., 1996). Based solely on comparisons with the Reject sample, that paper proclaimed undeniable advantages for Choice students who remained in the program for three and, especially, four years. The lead author, Paul Peterson, repeatedly reiterated one of the paper’s conclusions: "If similar success could be achieved for all minority students nationwide, it could close the gap separating white and minority test scores by somewhere between one-third and one-half." (Greene et al., 1996, p. 4) Peterson later changed the closing phrase in most news stories to "one-half or more." This generalization was based on a fourth-year math score estimated for Choice versus Reject students. Peterson’s paper was released two days before he appeared in Wisconsin, in Dane County Circuit Court, to testify that the MPCP was, after all, succeeding in improving the achievement levels of Choice students.

My approach to the analysis differs from the Peterson/Greene approach, but the results presented in Table 4 have some similarities. As in the MPS-Choice comparison, reading seems to be a wash in all analyses. None of the treatment or trend coefficients in either the SRDB or Full Variable analyses approaches a level of probability that would allow concluding that they differ from zero.

Math is somewhat different. For the reduced variable set, with a larger sample size, the straight treatment effect (column 4) is significant and indicates that Choice students score on average 2 NCEs higher than Reject students. When treatment and program trends are taken into consideration (column 5), nothing remains close to significant. When year dummies are used to represent treatment effects, only the third-year effect is significant. The coefficients for years 2 to 4, however, are all positive and over 2.0 NCE’s.

When the full set of variables is used as controls, including family income, mother’s education, employment, marital status, and expectations for the child’s future education, the treatment coefficients increase considerably. The straight treatment effect (column 4, lower panel) more than doubles; the year-treatment trend variable (column 5), which denotes the linear gain per year for Choice students, is estimated at a remarkable 3.25 with a standard error of 1.6; and the year-treatment dummy variables (column 6) go straight up, from 4.4 after two years to 8.5 and 10.9 in the third and fourth years. This is the finding that has been held up as the savior of minority education in America, and it is reproduced here using a methodology different from that of other research. Thus there is something in the math results for the Choice-Reject comparison for those students who stay four years, and maybe three.

Why the Discrepancy Between Comparison Groups? I have few doubts about the MPS-Choice results. Although there is a slight advantage noted for reading under the Full Variable model, it tends to disappear when a selection correction is employed. In other reports, the results have also been shown to be very consistent from year to year (Witte et al., 1994). Also, the ranges of the coefficients do not signal that something is simply outside the expected range of change. That is not the case with the math result for the Choice-Reject comparison. Ten points is over one-half of a standard deviation increase in one year. Obviously, gaps would be closed very fast if such advances were sustained. However, that degree of change is also between five and ten times larger than any similarly reported achievement gains based on public and private school comparisons in national databases (Witte, 1992, 1996). Common sense suggests that for large inner-city populations the educational problems are deep and sticky, and that one needs to be suspicious of these types of gains. And I believe that is the case here as well.

I question the Choice-Reject math results based on three problems. First, we need to return to the selection problems outlined above. There are several troubling aspects. The small sample size is an obvious issue, particularly when we are focusing primarily on single-year effects. In my data in year four there were 85 four-year Choice students and only 27 Rejects (who have pre- and post-tests in 1994) remaining from the "1990 cohort." We return below to the problems and effects of small samples.

A second issue is the initial loss of Reject students from the control group. Were the remaining students random Rejects, or were those who left the experiment systematically different? And, if different, were the differences likely to affect future achievement?

Third, the selection processes were not monitored. For example, the Choice schools were not required, by court ruling, to admit disabled students. It was never clear at what point they exercised this option, and no reporting on students with disabilities was required. Given that these schools often worked with slower children, and because many applicants were very young, it is not clear how many students were "pre-selected" out based on disability status. However, the schools did fight (and win) in court to retain the prerogative to exclude disabled students. And the attrition from Choice, described above, was clearly not random. As we further dissect the Reject group in the future, we will be exploring their disability status.

Finally, there is the question of what the results would mean even if they did hold up. The issue is, who are the Reject students? and therefore, to whom can the "Choice triumph" in math be compared?

I look at the selection and sample size problems in two ways. First, I focus on the 52% of the Reject students who walked away from the experiment. Because most Rejects were very young, there is little prior test or SRDB information on them. However, all applicants were sent surveys, so information exists both on those who later returned to MPS and on those who did not. A logistic regression estimating the characteristics of Reject students who "remained" in the experiment (meaning they returned to MPS and had a subsequent test record) is portrayed in Table 5.
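The following sketch shows the general form such a logit takes, with synthetic data whose direction of effects mimics the pattern just described; the predictors are placeholders for the survey measures (income, grade, parental education and involvement), not the model actually estimated in Table 5.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(4)
    n = 200
    df = pd.DataFrame({
        "income": rng.normal(12, 4, n),     # family income in $1,000s
        "grade": rng.integers(0, 9, n),     # student grade at application
        "par_educ": rng.normal(12, 2, n),   # parent's years of schooling
    })
    # Poorer families, higher grades, and less educated parents are made
    # more likely to return to MPS, as in the pattern described above.
    latent = 1.0 - 0.08 * df["income"] + 0.15 * df["grade"] - 0.10 * df["par_educ"]
    df["remained"] = (latent + rng.logistic(0, 1, n) > 0).astype(int)

    fit = smf.logit("remained ~ income + grade + par_educ", data=df).fit(disp=0)
    print(fit.params)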

The results are not trivial. Although, with reduced sample sizes, not all coefficients are significant at the conventional .05 level of probability, the direction of every coefficient indicates that the rejected students who returned to MPS were: (1) poorer; (2) in higher grades; and (3) from families whose parents were likely to be less educated and less involved in their children’s education than the students who disappeared into private schools or other public school districts. This makes sense. Rejects were looking to leave MPS in the first place. If not selected for Choice, and if they had the means (and especially if their children were young), they left for private schools, either on their own or with the help of the PAVE program, or they went to another public school district. Thus the Reject "control group" which remained behind in MPS is hardly a random sample of those who applied and were rejected. In fact, they are very likely an educationally weak representation of that group.

Small samples are a second problem with the Choice-Reject comparison, especially when the results turn on one or two years. In such a situation, the scores of a few students can influence the general results. And, on closer investigation, that was clearly the case in the data presented in Table 4. As one might anticipate, the Rejects were the problem. Of the 27 Reject students included in the SRDB panel of Table 4, 5 students (18.5%) received a score of 1 on the math test. An NCE of 1 is the lowest recorded score on the Iowa Tests of Basic Skills; it often translates into a student simply not filling in the dots on the test form. There were no similar scores of 1 in the Choice schools: the lowest Choice score (of 85 students in the fourth year) was 4. Thus, to test the sensitivity of the models in Table 4, we re-estimated the results after removing the students from both groups who had scores below 5 NCE’s on either reading or math. Comparing the N’s in Tables 4 and 6, this meant we dropped 29 and 43 records for reading and math, respectively, for the SRDB models. For the Full Variable models we dropped 9 reading and 24 math scores. For the fourth-year math with Full Variables (where we get the big kick in Table 4), it meant dropping 5 Reject and 2 Choice students.
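The sensitivity check itself is a simple filter: drop every record below the cutoff on either test from both groups, then re-estimate. A sketch with hypothetical column names:

    import pandas as pd

    def drop_floor_scores(df: pd.DataFrame, floor: float = 5.0) -> pd.DataFrame:
        """Drop records scoring below `floor` NCEs on either reading or math."""
        keep = (df["read"] >= floor) & (df["math"] >= floor)
        return df[keep]

    # Tiny illustration: the NCE-of-1 record (a likely blank test form) is removed.
    scores = pd.DataFrame({"read": [30, 1, 45], "math": [28, 1, 50]})
    print(drop_floor_scores(scores))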

The results are quite extraordinary. First, the reading estimates are unaffected - still no signs of life there. For math, both the SRDB and the Full Variable panels are considerably affected. For the larger SRDB sample, the average treatment effect in Table 4 (column 4) was 2.0 points and was significant at conventional levels. With the lowest scoring students removed from both samples, it drops to .9 NCE’s and is insignificant. Similarly, the significant trend effect in the SRDB model (for Treatxyr) drops from 3.7 to 1.6 and is not even close to significant. The same shifts occur in the Full Variable model. The overall treatment effect is knocked out, and the only trend effect that remains is the third-year effect. The coefficient representing the big fourth-year finish is reduced by 40% and is no longer significant by conventional standards. The latter is accomplished by eliminating 7 students who scored the lowest possible score on the test. Given that such a score is often achieved by students who only put their name on the test, the lesson from all this - and the private schools may already understand it - is that we should make sure all students fill in the dots.

A final way of looking at the Rejects as a treatment group is to compare them to the MPS random sample. If the main difference between Choice and Rejects noted above was the superiority of Choice students, and one dismisses our comparisons between Choice and MPS as bogus, one would conclude that the Rejects got more or less the same value-added education as the average MPS student. One would hypothesize that the Reject-MPS comparison should be a wash. That is the logical extension of the claims made by Choice advocates following the Greene/Peterson study.

However, as revealed in Table 7, that is certainly not the case. In this table, with Rejects as the treatment group, negative coefficients mean the MPS students did better than the Rejects. Again, for the critical math scores, there are many large, significant, negative coefficients. A comparison with Table 4 is instructive. Perhaps the most remarkable results are for the SRDB models in the top panel. The general treatment effect is worse for Rejects when compared to MPS than when Rejects were compared to Choice (column 4). In addition, a first-year effect emerges, and the fourth-year effect, which was not significant in Table 4, is over 8 points in Table 7. With the full model, all of these coefficients at least double, and the fourth-year effect of being a random, non-choosing MPS student is now an amazing 17.38-point math advantage over our poor Reject group. Obviously, the same type of low-scoring-student bias which affected the Choice-Reject comparison also affects these results.

A re-estimation, again excluding students with scores below 5 NCE’s, is included as Appendix Table C. As expected, many of the differences become statistically insignificant. However, the fourth-year math effect with Full Variables (Table C, bottom panel, column 6) remains "robust" at 13.3 NCE’s and is significant at the .001 level.

Thus fourth-year Rejects do very poorly in comparison to their own MPS classmates. And they do even worse in that comparison than when compared to the Choice students. What is happening to the fourth-year students (the "1990 cohort") can be seen in Figure 1. The figure depicts the original 1990 students in each group as they are tested across the years. As is apparent, the differences between groups are a function of the poor performance of the Rejects - not a dramatic improvement in either the Choice or MPS cohorts. The Rejects, as might be predicted from the initial selection out of the Reject category by those who did not return to MPS, start off worse than both Choice and MPS students, and their later scores decline. It should be remembered that the N’s for the 1990 cohort of Rejects are 35, 37, 21, and 27 for the four test years from 1991 to 1994. Thus the "math phenomenon" is conditioned on the Rejects, not on anything happening in either the Choice schools or the larger MPS system.

 

V CONCLUSIONS

The empirical results of this study cannot be summarized simply. In terms of the narrow issue of the effects of the MPCP on standardized test results, there is little consistency between samples and subjects. For reading, the comparison of the Choice and MPS samples provides modest evidence that the public school students do marginally better. In comparing Choice to Reject students, the reading results are absolutely flat under any specification. For the Choice-MPS comparison there are no general treatment or trend effects under the SRDB models, but there are when more refined survey variables are added; however, corrections for selection out of the samples render these differences statistically insignificant.

For math, the story shifts. For the Choice-MPS comparison there is not a hint of difference in this study. But for the Choice-Reject comparison, I find a math effect favoring Choice schools, an effect which accelerates if students remain long enough in the Choice schools.

I challenged that result, however, based on a number of questions. First, there was attrition from both the treatment (Choice) and comparison (Reject) groups. The worst students seemed to leave the Choice schools, leaving those in the third and fourth years as a select group. But more important is the attrition from the Rejects, which occurred almost immediately following rejection from the program for 52% of the rejected students. Our data suggest that the departing Rejects were more likely to have characteristics associated with higher-achieving students than those who returned to MPS and remained in the control group. I also found that simply eliminating those students who scored at the very bottom in either group makes almost all of the math differences favoring Choice disappear. This suggests a very fragile evidentiary basis for concluding that the Choice students do better than the Rejects, quite apart from the selection problems. Finally, when we compared the Rejects to the random MPS sample, the Rejects did worse than they did against the Choice students. There is little evidence of increased improvement in either the Choice or MPS samples, but there is evidence of very poor performance from the beginning for the Rejected students. The simple conclusion is that the Reject sample is a small and very poor comparison group. They clearly are not a random sample of Rejects, or of Choice, or of MPS students.

My first conclusion is to challenge the proclamation that the MPCP could be the vehicle for reducing the gap between white and minority students by a very large amount. Even the best evidence (the 10-point miracle) is so fragile that such pronouncements are frivolous (throwing out seven students reduces the effect to a statistically insignificant 6 points). Second, the myriad selection problems with both groups indicate the evidence may well be totally spurious. Third, even if the evidence held up, we have little idea to whom it would apply. Clearly the Choice "miracle" would not carry over even to the low-income students in MPS. They do as well as the Choice students in math, perhaps slightly better in reading, and do even better than the Choice students when compared to the Rejects on the hyped math test. We also know that the Reject control group which remained in the study was an even poorer subset of an already very poor population of students. All of these factors suggest the need for caution and modesty in generalizing these results.

After reanalyzing these data from scratch, and adding a number of analyses, my conclusions remain the same as those published earlier. The MPCP provided an opportunity for an alternative education for families who were not satisfied with the public schools and whose children were not excelling in those schools. Their subsequent satisfaction increased. Further, the subsidies to the private schools allowed several schools to survive and later flourish (Witte et al., 1995). For me this is a positive result, given the range of alternatives facing some inner-city students. It is enough to support a limited voucher program, and that is what I have done over the last five years.

On the other hand, I reiterate that there is very little, if any, relevant evidence to bring to bear on the prospect of a full-scale, unlimited voucher program. Nothing from Milwaukee should be carried over to that type of program. And, given the current structure of private school use, there is no indication that such programs would aid the poor at all. The more likely outcome is that they would simply subsidize existing private schools, which are mostly religious and serve primarily upper-middle-class families.

 

 


 

 

TABLE 1. PARTICIPATION AND ATTRITION FROM THE CHOICE PROGRAM: 1990-95

                                                1990-91  1991-92  1992-93  1993-94  1994-95
Number of students allowed in the Choice
  Program (capped at 1% of MPS enrollment;
  1.5% in 1994-95)                                  931      946      950      968     1450
Number of private non-sectarian schools
  in Milwaukee                                       22       22       23       23       23
Number of schools participating                       7        6       11       12       12
Number of applications                              577      689      998     1049     1046
Number of available seats                           406      546      691      811      982
Number of students participating:
  September count                                   341      521      620      742      830
  January count                                     259      526      586      701       --
  June count                                        249      477      570      671       --
Graduating students                                   8       28       32       42       45
Number of returning Choice students                  NA      178      323      405      491
Attrition rate(16)                                  .46      .35      .31      .27      .28
Attrition rate without alternative schools      .44(17)     .32      .28      .23      .24

 


 

 

 

TABLE 2. REGRESSION RESULTS, 1991-1994, CHOICE (Treatment) and MPS.(18)

SRDB ONLY
                          READING                             MATH
                (1)        (2)        (3)        (4)        (5)        (6)
Treatment      -.554      -.465        --       -.173      -.465        --
               (.467)    (1.154)                (.515)    (1.154)
Treat*Yrs        --       -.074        --         --       -.074        --
                          (.475)                           (.475)
YrsInPrg.        --       -.353      -.354        --       -.353      -.334
                          (.196)     (.197)                (.196)     (.222)
TreatYR1         --         --       -.357        --         --      -1.169
                                     (.892)                           (.933)
TreatYR2         --         --       -.980        --         --       -.294
                                     (.687)                           (.807)
TreatYR3         --         --       -.258        --         --        .959
                                     (.944)                           (.929)
TreatYR4         --         --      -1.045        --         --       -.425
                                    (1.228)                          (1.572)
R2              .391       .392       .392       .436       .437       .437
MSE           12.481     12.479     12.481     13.681     13.680     13.681

Reading: N=4019   Math: N=3967

FULL VARIABLE SET
                          READING                             MATH
                (1)        (2)        (3)        (4)        (5)        (6)
Treatment    -1.745*    -3.772*        --       -.071     -2.055        --
               (.714)    (1.630)                (.813)    (1.939)
Treat*Yrs        --        .870        --         --        .859        --
                          (.661)                           (.780)
YrsInPrg.        --       -.610      -.611        --      -.5466      -.548
                          (.376)     (.377)                (.455)     (.455)
TreatYR1         --         --      -2.365*       --         --      -1.073
                                    (1.244)                          (1.473)
TreatYR2         --         --     -2.715**       --         --       -.601
                                    (1.003)                          (1.120)
TreatYR3         --         --       -.982        --         --        .891
                                    (1.309)                          (1.320)
TreatYR4         --         --        .213        --         --       1.082
                                    (1.617)                          (2.049)
R2              .414       .414       .415       .462       .462       .463
MSE           12.063     12.060     12.065     13.720     13.719     13.729

Reading: N=1320   Math: N=1311

 

 


 

TABLE 3. HECKMAN RESULTS, 1991-1994, CHOICE (Treatment) and MPS.(19)

SRDB ONLY
                          READING                             MATH
                (1)        (2)        (3)        (4)        (5)        (6)
Treatment    -1.061*      -.945        --        .446     -1.141        --
               (.510)    (1.247)                (.565)    (1.377)
Treat*Yrs        --       -.088        --         --        .697        --
                          (.514)                           (.567)
YrsInPrg.        --       -.400      -.398        --       -.355      -.357
                          (.215)     (.215)                (.237)     (.237)
TreatYR1         --         --       -.908        --         --       -.534
                                     (.922)                          (1.018)
TreatYR2         --         --      -1.415        --         --        .128
                                     (.778)                           (.857)
TreatYR3         --         --       -.773        --         --       1.626
                                     (.919)                          (1.012)
TreatYR4         --         --      -1.678        --         --        .637
                                    (1.463)                          (1.621)
Chi-Sq.        198.6     200.96     203.09      184.2     186.39     188.24
Lambda        -8.509     -8.505     -8.513      8.094      8.108      8.100

Reading: N=4116   Math: N=4132

FULL VARIABLE SET
                          READING                             MATH
                (1)        (2)        (3)        (4)        (5)        (6)
Treatment     -1.343     -3.151        --       -.325     -2.495        --
               (.876)    (1.814)               (1.190)    (2.127)
Treat*Yrs        --        .745        --         --        .920        --
                          (.692)                           (.794)
YrsInPrg.        --       -.591      -.590        --       -.559      -.562
                          (.394)     (.394)                (.453)     (.453)
TreatYR1         --         --      -1.938        --         --      -1.450
                                    (1.373)                          (1.658)
TreatYR2         --         --      -2.179        --         --       -.902
                                    (1.160)                          (1.468)
TreatYR3         --         --      -1.020        --         --        .695
                                    (1.297)                          (1.629)
TreatYR4         --         --        .575        --         --        .754
                                    (1.919)                          (2.310)
Chi-Sq.       646.52     648.32     650.34     588.11     590.17     592.20
Lambda         2.022      1.767      1.767     -1.076     -1.286     -1.319

Reading: N=1566   Math: N=1570
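The correction reported above is the standard Heckman (1979) two-step: a probit for whether a post-test score is observed at all, followed by the achievement regression with the inverse Mills ratio ("Lambda") added as a regressor. A minimal hand-rolled sketch, again with synthetic data and placeholder variable names rather than the selection equation actually used:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from scipy.stats import norm

    # Synthetic stand-in; all column names are hypothetical.
    rng = np.random.default_rng(1)
    n = 800
    df = pd.DataFrame({
        "prior_nce": rng.normal(40, 16, n),
        "treat": rng.integers(0, 2, n),
        "yrs_in_prg": rng.integers(1, 5, n),
    })
    df["tested"] = (rng.random(n) < 0.7).astype(int)   # 1 = post-test observed
    df["post_nce"] = np.where(df["tested"] == 1, rng.normal(40, 18, n), np.nan)

    # Step 1: probit for selection into being tested, fit on the full sample.
    probit = smf.probit("tested ~ treat + prior_nce + yrs_in_prg", data=df).fit(disp=0)
    xb = probit.fittedvalues                      # linear index X'gamma
    df["mills"] = norm.pdf(xb) / norm.cdf(xb)     # inverse Mills ratio

    # Step 2: the outcome equation on the tested subsample, with lambda included.
    step2 = smf.ols("post_nce ~ treat + prior_nce + yrs_in_prg + mills",
                    data=df[df["tested"] == 1]).fit()
    print(step2.params["mills"])                  # analogue of the Lambda rows above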

 


 

TABLE 4. REGRESSION RESULTS, 1991-1994, CHOICE (Treatment) and REJECTS.(20)

SRDB ONLY
                          READING                             MATH
                (1)        (2)        (3)        (4)        (5)        (6)
Treatment      -.441       .349        --       1.979*      .349        --
               (.894)    (2.254)               (1.002)    (2.254)
Treat*Yrs        --        .834        --         --        .834        --
                         (1.054)                          (1.054)
YrsInPrg.        --       -.634      -.334        --       -.634      -.610
                          (.947)     (.834)                (.947)     (.948)
TreatYR1         --         --       -.090        --         --        .911
                                    (1.425)                          (1.476)
TreatYR2         --         --       -.688        --         --       2.066
                                    (1.014)                          (1.188)
TreatYR3         --         --        .051        --         --       3.672*
                                    (1.445)                          (1.653)
TreatYR4         --         --      -1.078        --         --       2.052
                                    (2.174)                          (2.695)
R2              .338       .339       .340       .423       .423       .424
MSE           12.541     12.545     12.552     13.442     13.451     13.450

Reading: N=1158   Math: N=1106

FULL VARIABLE SET
                          READING                             MATH
                (1)        (2)        (3)        (4)        (5)        (6)
Treatment       .217     -2.592        --      4.701**    -1.740        --
              (1.497)    (3.253)               (1.519)    (3.706)
Treat*Yrs        --       1.433        --         --       3.256*       --
                         (1.498)                          (1.588)
YrsInPrg.        --      -1.158     -1.158        --      -2.907*    -2.898*
                         (1.378)    (1.379)               (1.470)    (1.473)
TreatYR1         --         --       -.644        --         --       1.658
                                    (2.135)                          (2.464)
TreatYR2         --         --       -.403        --         --       4.455**
                                    (1.663)                          (1.706)
TreatYR3         --         --       1.965        --         --       8.461***
                                    (2.385)                          (2.218)
TreatYR4         --         --       3.536        --         --      10.864**
                                    (3.575)                          (3.633)
R2              .320       .321       .322       .462       .466       .466
MSE           12.432     12.444     12.455     13.307     13.284     13.304

Reading: N=608   Math: N=590

 


 

TABLE 5. LOGISTIC REGRESSION ON REJECTS HAVING ANY POST-APPLICATION TEST, 1991-94*

Variable                                      B      SE B    EXP(B)
LogInc$                                     -1.01     .51      .37
Mother's Education                           -.05     .14      .95
Gender (1 = female)                           .62     .37     1.86
Grade at application                          .60     .09     1.82
African American                              .24     .84     1.27
Hispanic American                            -.31    1.00      .73
Parent Involvement Scale (high = more)       -.11     .13      .89

* Model Chi-sq. = 77.34, p < .000; 81.3% of the cases were correctly assigned; N = 192.
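Table 5 is an ordinary logit. A compact sketch of the same form (synthetic data, hypothetical names, and only a subset of the regressors) shows where the B and EXP(B) columns come from:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in for the 192 rejected applicants; names are hypothetical.
    rng = np.random.default_rng(2)
    n = 192
    rejects = pd.DataFrame({
        "any_post_test": rng.integers(0, 2, n),   # 1 = any post-application test
        "log_income": rng.normal(9.0, 0.9, n),
        "mother_educ": rng.integers(1, 7, n),
        "female": rng.integers(0, 2, n),
        "app_grade": rng.integers(0, 10, n),
    })
    logit = smf.logit("any_post_test ~ log_income + mother_educ + female + app_grade",
                      data=rejects).fit(disp=0)
    print(logit.params)           # the B column
    print(np.exp(logit.params))   # the EXP(B) (odds-ratio) column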

 

 


 

TABLE 6. REGRESSION RESULTS, 1991-1994, CHOICE (Treatment) and REJECTS - EXCLUDING LOWEST SCORING STUDENTS (NCE<5).(21)

SRDB ONLY
                          READING                             MATH
                (1)        (2)        (3)        (4)        (5)        (6)
Treatment     -1.009     -1.009        --        .890       .654        --
               (.853)    (2.037)                (.932)    (2.153)
Treat*Yrs        --        .027        --         --        .111        --
                          (.886)                           (.938)
YrsInPrg.        --       -.252      -.246        --       -.015       .002
                          (.784)     (.784)                (.826)     (.828)
TreatYR1         --         --       -.748        --         --        .317
                                    (1.367)                          (1.436)
TreatYR2         --         --      -1.393        --         --       1.216
                                     (.972)                          (1.079)
TreatYR3         --         --       -.490        --         --       1.579
                                    (1.356)                          (1.394)
TreatYR4         --         --      -1.152        --         --       -.470
                                    (1.981)                          (2.363)
R2              .339       .340       .340       .405       .405       .406
MSE           11.751     11.759     11.765     13.442     12.526     12.525

Reading: N=1129   Math: N=1063

FULL VARIABLE SET
                          READING                             MATH
                (1)        (2)        (3)        (4)        (5)        (6)
Treatment      -.410     -3.423        --       2.668     -1.687        --
              (1.429)    (3.147)               (1.446)    (3.748)
Treat*Yrs        --       1.513        --         --       2.240        --
                         (1.464)                          (1.592)
YrsInPrg.        --       -.976      -.971        --      -2.160     -2.149
                         (1.346)    (1.347)               (1.478)    (1.482)
TreatYR1         --         --      -1.548        --         --        .376
                                    (2.054)                          (2.441)
TreatYR2         --         --       -.960        --         --       2.905
                                    (1.584)                          (1.563)
TreatYR3         --         --       1.471        --         --       5.294*
                                    (2.318)                          (2.069)
TreatYR4         --         --       2.672        --         --       6.635
                                    (3.472)                          (3.545)
R2              .319       .321       .321       .432       .434       .434
MSE           11.807     11.813     11.827     12.441     12.442     12.462

Reading: N=597   Math: N=590

 

 


 

TABLE 7. REGRESSION RESULTS, 1991-1994, REJECTS (Treatment) and MPS.(22)

SRDB ONLY
                          READING                             MATH
                (1)        (2)        (3)        (4)        (5)        (6)
Treatment      -.330       .352        --      -2.254*     -.687        --
               (.801)    (1.920)                (.942)    (2.016)
Treat*Yrs        --       -.417        --         --       -.894        --
                          (.839)                           (.976)
YrsInPrg.        --       -.334      -.334        --       -.285      -.284
                          (.197)     (.197)                (.223)     (.223)
TreatYR1         --         --        .234        --         --      -2.906*
                                    (1.383)                          (1.379)
TreatYR2         --         --       -.627        --         --       -.136
                                    (1.387)                          (1.609)
TreatYR3         --         --      -2.230        --         --      -2.546
                                    (1.710)                          (2.094)
TreatYR4         --         --       1.124        --         --      -8.153*
                                    (2.478)                          (3.368)
R2              .409       .409       .410       .452       .453       .454
MSE           12.586     12.584     12.585     13.768     13.766     13.759

Reading: N=3425   Math: N=3331

FULL VARIABLE SET
                          READING                             MATH
                (1)        (2)        (3)        (4)        (5)        (6)
Treatment     -1.686     -1.809        --     -4.884**      .249        --
              (1.354)    (2.870)               (1.543)    (3.291)
Treat*Yrs        --       -.061        --         --      -2.696*       --
                         (1.238)                          (1.419)
YrsInPrg.        --       -.647      -.647        --       -.491      -.492
                          (.380)     (.380)                (.462)     (.463)
TreatYR1         --         --      -2.262        --         --      -4.868*
                                    (2.280)                          (2.432)
TreatYR2         --         --      -1.624        --         --      -2.391
                                    (2.463)                          (2.361)
TreatYR3         --         --       -.989        --         --      -2.890
                                    (3.168)                          (3.424)
TreatYR4         --         --      -3.832        --         --    -17.380***
                                    (2.822)                          (3.547)
R2              .446       .447       .448       .508       .511       .514
MSE           12.201     12.195     12.206     13.959     13.929     13.893

Reading: N=920   Math: N=899

 

 


 

Figure 1. 1990 Cohorts – Math NCEs (available as a separate image)

 

 


 

APPENDIX TABLE A. VARIABLE MEANS AND STANDARD DEVIATIONS.(23)

                   Choice-MPS             Choice-Reject            Reject-MPS
                SRDB     Full Vars.     SRDB     Full Vars.     SRDB     Full Vars.
Mncepo        41.478     43.087       40.318     41.320       41.305     42.659
             (18.204)   (18.610)     (17.638)   (17.973)     (18.580)   (19.759)
Rncepo        38.461     40.664       37.519     38.613       38.447     41.060
             (15.976)   (15.675)     (15.371)   (14.942)       (.082)   (16.279)
Mnce          41.023     42.633       39.851     40.809       40.95      42.827
             (18.191)   (18.878)     (18.202)   (18.768)     (18.176)   (19.053)
Rnce          38.672     40.735       38.208     39.607       38.485     40.888
             (16.032)   (16.193)     (16.161)   (16.081)     (16.095)   (16.078)
Treat           .220       .384         .788       .854         .071       .102
               (.414)     (.487)       (.409)     (.353)       (.256)     (.303)
Treatxyr        .475       .841        1.704      1.868         .134       .205
              (1.001)    (1.216)      (1.224)    (1.170)       (.256)     (.686)
Treatyr1        .063       .105         .226       .234         .030       .039
               (.243)     (.307)       (.418)     (.424)       (.171)     (.194)
Treatyr2        .079       .141         .283       .314         .024       .037
               (.270)     (.348)       (.451)     (.464)       (.152)     (.188)
Treatyr3        .056       .099         .202       .220         .011       .014
               (.230)     (.299)       (.401)     (.415)       (.102)     (.119)
Treatyr4        .021       .039         .077       .086         .006       .012
               (.145)     (.193)       (.266)     (.281)       (.079)     (.110)
Yrsinprg       2.350      2.340        2.108      2.154        2.367      2.392
              (1.053)    (1.043)       (.960)     (.957)      (1.074)    (1.087)
Raceaa          .727       .719         .776       .803         .720       .673
               (.446)     (.450)       (.417)    (2.014)       (.449)     (.469)
Racehisp        .130       .130         .193       .153         .114       .130
               (.336)     (.336)       (.395)     (.360)       (.318)     (.337)
Raceoth         .007       .006         .000       .000         .008       .009
               (.081)     (.078)       (.000)     (.000)       (.088)     (.094)
Gender          .548       .564         .556       .580         .535       .553
               (.498)     (.496)       (.497)     (.494)       (.499)     (.497)
Grade          4.640      4.620        4.330      4.295        4.668      4.719
              (2.005)    (2.005)      (2.025)    (2.014)      (2.011)    (1.999)
Loginc$          --       9.037          --       9.060          --       8.976
                          (.865)                  (.818)                  (.920)
Edumom           --       3.762          --       4.095          --       3.516
                         (1.369)                 (1.216)                 (1.395)
Married          --        .291          --        .256          --        .304
                          (.454)                  (.437)                  (.460)
Eduexpc          --       3.927          --       4.095          --       3.828
                         (1.037)                 (1.216)                 (1.170)

 

 


 

APPENDIX TABLE B. REGRESSION RESULTS, 1991-1994, CHOICE (Treatment) and MPS -- FREE LUNCH STUDENTS INCLUDED.(24)

                          READING                             MATH
                (1)        (2)        (3)        (4)        (5)        (6)
Mnce           .170***    .170***    .169***    .530***    .530***    .530***
               (.020)     (.020)     (.020)     (.025)     (.025)     (.025)
Rnce           .444***    .445***    .446***    .163***    .165***    .165***
               (.025)     (.025)     (.024)     (.026)     (.027)     (.027)
Treat        -1.732**   -3.294*        --        .0823     -1.744       --
               (.703)    (1.566)                (.795)    (1.871)
Treatxyr         --        .677        --         --        .789        --
                          (.633)                           (.749)
Treatyr1         --         --      -2.065        --         --       -.861
                                    (1.219)                          (1.447)
Treatyr2         --         --     -2.626**       --         --       -.398
                                     (.994)                          (1.106)
Treatyr3         --         --      -1.110        --         --       1.001
                                    (1.303)                          (1.309)
Treatyr4         --         --       -.023        --         --       1.067
                                    (1.579)                          (1.995)
Yrsinprg         --       -.406      -.406        --       -.531      -.533
                          (.326)     (.326)                (.395)     (.396)
Free Lunch    -2.60*     -2.173*    -2.172*     -.343      -.240      -.236
              (1.065)    (1.059)    (1.060)    (1.207)    (1.206)    (1.207)
Raceaa       -3.873***  -3.928***  -3.932***  -5.633***  -5.695***  -5.697***
               (.980)     (.981)     (.982)    (1.057)    (1.053)    (1.054)
Racehisp      -1.148     -1.118     -1.090     -2.384     -2.337     -2.342
              (1.149)    (1.148)    (1.149)    (1.282)    (1.283)    (1.283)
Raceoth        1.150      1.203      1.208      1.533      1.613      1.611
              (2.623)    (2.558)    (2.559)    (4.130)    (4.073)    (4.076)
Gender        2.500***   2.483***   2.485***    1.209      1.184      1.185
               (.627)     (.627)     (.627)     (.680)     (.680)     (.680)
Grade          -.113      -.087      -.090    -1.177***  -1.141***  -1.136***
               (.156)     (.159)     (.159)     (.174)     (.176)     (.177)
Loginc$        .957*      .927*      .926*     1.141**    1.098**    1.097**
               (.432)     (.430)     (.430)     (.450)     (.449)     (.449)
Edumom         .489*      .489*      .494*      .145       .143       .144
               (.238)     (.238)     (.238)     (.282)     (.282)     (.282)
Married        -.111      -.125      -.130       .086       .069       .069
               (.779)     (.778)     (.779)     (.830)     (.829)     (.829)
Eduexpc        .184       .190       .196       .506       .511       .509
               (.327)     (.327)     (.328)     (.336)     (.336)     (.337)
Constant      9.175*    10.210*    10.200*    10.374*    11.758**   11.743**
              (4.450)    (4.463)    (4.464)    (4.572)    (4.617)    (4.621)
R2             .500       .501       .501       .536       .537       .537
MSE          11.943     11.944     11.948     13.480     13.480     13.488
(N)          (1542)     (1542)     (1542)     (1531)     (1531)     (1531)

 

 

 


 

Appendix C. – Not available at this time

 

Appendix D. – Survey Sample Sizes and Response Rates

 

Appendix E. – Models to Estimate "Total Math" from "Problem Solving"

 

Footnotes

 

 


 

REFERENCES

 

Beales, J. R. and Wahl, M. 1995. "Private Vouchers in Milwaukee: The PAVE Program." In T. Moe (ed.), Private Vouchers. Stanford: Hoover Institution Press: 41-73.

Friedman, M. 1955. "The Role of Government in Education." In R. A. Solo (ed.), Economics and the Public Interest. New Brunswick, NJ: Rutgers University Press.

Gamoran, A. 1996. "Student Achievement in Public Magnet, Public Comprehensive, and Private High Schools." Educational Evaluation and Policy Analysis 18 (Spring): 1-18.

Greene, J. P., Peterson, P., and Du, J. 1996. "The Effectiveness of School Choice in Milwaukee: A Secondary Analysis of Data from the Program's Evaluation." Paper given at the American Political Science Association Annual Meeting, San Francisco, CA, August 29-September 1.

Heckman, J. 1979. "Sample Selection Bias as a Specification Error." Econometrica 47: 153-161.

Hirschman, A. O. 1970. Exit, Voice, and Loyalty. Cambridge: Harvard University Press.

Hoxby, C. 1996. "Evidence on Private School Vouchers: Effects on Schools and Students." In H. Ladd (ed.), Holding Schools Accountable. Washington, D.C.: The Brookings Institution.

Huber, P. J. 1967. "The Behavior of Maximum Likelihood Estimates under Non-Standard Conditions." Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability 1: 221-233.

Kane, T. J. and Rouse, C. E. 1993. "Labor Market Returns to Two- and Four-Year College: Is a Credit a Credit and Do Degrees Matter?" Cambridge: National Bureau of Economic Research Working Paper.

Levin, H. 1987. "Education as a Private and Public Good." Journal of Policy Analysis and Management 6 (4): 628-41.

______ 1990. "The Theory of Choice Applied to Education." In W. H. Clune and J. F. Witte (eds.), Choice and Control in American Education, Vol. I: The Theory of Choice and Control in Education (pp. 247-84). New York: The Falmer Press.

Murnane, R. 1986. "Comparisons of Private and Public Schools: The Critical Role of Regulations." In D. Levy (ed.), Private Education: Studies in Choice and Public Policy. New York: Oxford University Press.

White, H. 1980. "A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity." Econometrica 48: 817-830.

Wisconsin Legislative Audit Bureau. 1995. "An Evaluation of the Milwaukee Parental Choice Program." Madison, WI.

Witte, J. F. 1992. "Public Subsidies for Private Schools: What We Know and How to Proceed." Educational Policy 6 (June): 206-27.

Witte, J., Thorn, C., Pritchard, K., and Claibourn, M. 1994. "Fourth Year Report: Milwaukee Parental Choice Program." Report to the Wisconsin State Legislature.

Witte, J., Thorn, C., and Sterr, T. 1995. "Fifth Year Report: Milwaukee Parental Choice Program." Report to the Wisconsin State Legislature.

Witte, J. 1996a. "Politics, Markets, or Money? The Political-Economy of School Choice." Paper given at the American Political Science Association Annual Meeting, San Francisco, CA, August 29-September 1.

Witte, J. 1996b. "School Choice and Student Performance." In H. Ladd (ed.), Holding Schools Accountable. Washington, D.C.: The Brookings Institution: 149-176.