Mark Bennett
Professor Tien
ECON 300 02
17 December 2011

Examining Potential Preferences Within High School Advanced Placement Testing

I. Introduction

The college admissions process has become an increasingly competitive and strategic event for high school seniors around the country in the last few decades. With acceptance rates at top universities dropping each year, high school students have become more and more concerned with finding ways to demonstrate their intellect and college readiness in a manner that sets them apart from other applicants. One such method of standing out from the crowd, now adopted by most high schools around the nation, is the Advanced Placement program. Operated by the non-profit College Board since 1955, the Advanced Placement (AP) program offers college-level courses and exams that enable students to earn college credit and advanced placement, stand out in the admission process, and learn from some of the most skilled, dedicated, and inspiring teachers in the world (College Board, 2011). The AP program is designed to engage students in courses that are faster-paced and more demanding than other high school courses, providing an academic environment that acts as a preview of a college classroom. The program offers 34 different courses, each with a curriculum that is standardized across the country. At the end of each school year, the College Board offers AP exams in all 34 courses for those who wish to demonstrate their understanding of the material covered throughout the year. The tests are not required for students who have completed AP courses, and
similarly, a student may opt to take an AP test without having taken the corresponding AP course. Completed tests are sent to a panel of AP teachers, who grade them on a scale of one to five, a five signifying that a student is "extremely well qualified" and a one meaning "no recommendation" (College Board, 2011). Scores of three or higher are generally considered passing.

Originally, the AP program was intended for only the top tier of students: an opportunity for those performing at the highest academic level to continue their pursuits in a variety of disciplines in a more challenging and stimulating environment. However, as students have grown increasingly worried about how to fill their college resumes, demand for AP courses has increased and the program has expanded tremendously. Today, students hoping to gain admission to highly selective colleges are all but required to take AP courses in high school if given the chance, since a failure to do so would suggest a lack of dedication and interest in one's schoolwork.

Because the AP program, and AP testing in particular, has become such a widely adopted indicator of student ability, many have raised questions as to whether AP tests inadvertently give preference to certain types of students, or to students from certain types of schools. The College Board administers several other tests designed for high school students, the most prominent of which is the SAT. The SAT has traditionally been the most widely used standardized indicator of student ability among United States colleges, despite the fact that one of the early developers of the SAT openly admitted that the test was highly skewed in favor of cultured and affluent children (Sacks, 2007). Thus, it seems there is a reasonable possibility that AP testing might give inadvertent advantages to certain types of students.

It is the purpose of this paper to explore the relationship between AP test pass rates and macro-level AP test-taker characteristics as a means to determine the key factors that
correlate with results on the AP exams, and whether those factors suggest the presence of inherent preferences within the AP program. I use data from the state of Texas over a period of ten years with an ordinary least squares (OLS) multiple regression model in order to determine which characteristics display a significant correlation with changes in AP test pass rates over time. In considering which state-level characteristics might be significantly related to AP test performance, it is first helpful to explore what others have viewed as important indicators of test performance.

II. Literature Review

A great deal of research has been conducted in an attempt to illuminate inequalities within the AP testing model, much of it focused on determining which types of students might have an advantage in scoring highly on AP tests. However, some claim that there are state-level preferences in place long before the tests are handed out, even before students enter the classroom at the beginning of the year. One such preference has to do with student access to AP courses and AP testing. In their paper on the connection between AP experience and college success, Kristin Klopfenstein and M. Kathleen Thomas (2009) note that the current use of AP testing as a measuring stick for high school aptitude and college readiness was not the intended function of the program, and is a very modern phenomenon. The original design of the specialized, accelerated program was aimed at select groups of students, and as such there was never meant to be equal, widespread access to it. Because of this, even as the program has grown rapidly in the past twenty years, there continue to be high percentages of students who do not have access to AP courses and exams. The rapid expansion of the program in recent years is a positive sign for students who
are still waiting for access to AP courses, but among the population of students who are already AP test-takers, this growth can logically be seen as the source of another disadvantage. After being available in only 21% of Texas high schools in 1994, AP courses grew to be offered in 53% of Texas high schools by 2000 (Klopfenstein, 2004). If we assume (most likely falsely, but still meaningfully for this exercise) that this was straight-line growth, then each year between 1994 and 2000 roughly 5.3% of Texas high schools were offering AP courses for the first time. This typically means that students within that 5.3% of schools were taking AP courses for the first time, and that most AP teachers at those schools were teaching the courses for the first time. The inexperience of both the students and the teachers can logically be seen as a disadvantage brought on by the macro-level expansion of the AP program.

Another characteristic hypothesized to correlate with AP test pass rates is gender. In 1993, John Mazzeo, Alicia P. Schmitt, and Carole A. Bleistein published a paper outlining various studies and experiments relating gender to AP test performance. In examining the differences between male and female test scores across a variety of AP subjects, the authors note a consistent trend: the gap between female and male scores on constructed-response sections of the tests is much greater than the gap on multiple-choice sections. Put another way, females perform better relative to males on constructed-response sections than they do on multiple-choice sections. This gender-based difference in performance suggests that, depending on the construction of an individual AP test, gender could confer either an advantage or a disadvantage in achieving a passing score.

Possibly the most widely hypothesized characteristic believed to be associated with preference in AP testing is race, and the relationship between race and class. Minority students
throughout the country underperform on AP exams compared to non-minority students (Klopfenstein, 2001). Of course, this underperformance does not account for the many minority students who were unable to gain access to AP courses and exams due to inequalities in the educational system, but since this paper is focused on test scores, I will concentrate on the population of minority students who were able to take AP tests. "Black and Hispanic students are three times more likely to be low income than white students," writes Klopfenstein (2001). This means that minority students are more likely to have parents who are not highly educated, and are more likely to attend schools that are not very wealthy. Thus, minority students have few resources upon which to draw in terms of parental support, since their parents often have not had an educational experience comparable to AP classes and exams (Klopfenstein, 2001). Similarly, poorer schools often have fewer resources and less experienced teachers, and since schools that serve low-income minority communities tend to have lower AP enrollment, less emphasis is put on AP courses there than at other high schools (Solórzano and Ornelas, 2002). Past studies thus give evidence that a correlation should exist between race and AP test pass rates.

III. Methodology

For the purposes of this paper, I have collected data from the state of Texas via the Texas Education Agency, which publishes summarized AP examination results each year based on data provided by the College Board. Data not pertaining specifically to the AP program has been collected from the Financial Allocation Study for Texas (FAST) website. For data pertaining to individual characteristics of examinees, the entire population of AP test-takers in Texas high schools has been considered. Other variables are based on data at the level of individual schools, districts, or the state as a whole, and these distinctions should be evident from
the way in which each variable is defined. The sampled data comes from ten consecutive years of AP test results, from the 1999-2000 academic year through the 2008-2009 academic year. Any state-level data used in the model that is collected per calendar year has been matched to the academic year that begins in that calendar year (i.e., data representing the 2000 calendar year has been matched with data from the 2000-2001 academic year).

I have constructed a multiple regression model for the AP test pass rates of 11th and 12th grade high school students. While there was ample data corresponding to the test scores of 9th and 10th grade students as well, I believed that including this data would add unnecessary variability to the model. Since students enter high school from a wide variety of primary-school experiences, I imagine the gap in performance between students who come from more challenging schools and those who come from less demanding environments tends to be much greater in the first two years of high school. It seemed logical that by the time students reach their third and fourth years, they have been exposed to the same academic environment for at least two years, and thus the disparities caused by students' previous academic histories might be less prevalent. In addition, 9th and 10th grade AP students account for only a small percentage of all examinees. As such, all student characteristic data in the model represents 11th and 12th grade students only.

The dependent variable in my model is the AP test pass rate, defined as the number of passing scores (three and above) achieved by 11th and 12th grade students on AP exams in a given year divided by the total number of AP exams taken by 11th and 12th grade students in that year. The following OLS regression model was used for the AP exam pass rate (PASS):

[1]  PASS_t = β0 + β1 MIN_t + β2 FEM_t + β3 APIP_t + β4 NOC_t + β5 PES_t + β6 TEXP_t + β7 DIST_t + β8 WEALTH_t + ε_t

where β0 is a constant, ε_t is the error term, and the subscript t signifies that each set of data was collected in the same year t.
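To make the estimation concrete, here is a minimal sketch of how model [1] could be fit in Python with statsmodels. The file name and column names are hypothetical stand-ins for the TEA and FAST data described above, not the names used in the original analysis.

```python
# Sketch of estimating model [1]. Assumes a hypothetical CSV with one row per
# academic year (10 rows) and one column per variable defined in this section.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("texas_ap_annual.csv")

# PASS is in percent: 100 * (scores of 3+) / (total exams), grades 11-12 only.
y = df["PASS"]
X = sm.add_constant(df[["MIN", "FEM", "APIP", "NOC", "PES", "TEXP", "DIST", "WEALTH"]])

model1 = sm.OLS(y, X).fit()
print(model1.summary())  # coefficients, t-statistics, R-squared, F-statistic
```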

To address the questions that Klopfenstein (2001) and Solórzano and Ornelas (2002) raised regarding the relationship between race and AP testing, the first variable included in the model is a measure of the participation of minority students in the AP program in Texas. MIN is defined as the percent of AP examinees who were black, Hispanic, or Native American. If there are in fact preferences in AP testing that adversely affect minority students, we would expect to see a negative correlation between exam pass rates and the percentage of minority examinees, meaning we would expect β1 to be negative.

The variable FEM is equal to the percent of female AP examinees in a given year. While it seemed unlikely that the proportion of female test-takers would change significantly over the course of a ten-year period, I felt that if there was any validity to the claim that females consistently outperform males on certain sections of AP tests, then some of the variability in the pass rate data might be explained by gender differences. Therefore, we would expect β2 to be positive.

Some Texas high schools that offer AP courses also offer incentive programs to encourage students and teachers to perform at their best within the AP program. One such program, appropriately named the Advanced Placement Incentive Program (APIP), has been implemented in certain Texas high schools since 1996. The program offers cash incentives to students for each AP exam on which they receive a passing score, and also offers teachers monetary bonuses tied to the number of their students who pass the exams. While the APIP is not nearly prevalent enough among Texas high schools to have a drastic effect on statewide AP exam pass rates, research conducted by Professor C. Kirabo Jackson of Northwestern University suggests that one of the results of the APIP has
been a shift in the educational culture of many Texas high schools (Jackson, 2008). As such, I hypothesized that as the APIP expands to more high schools throughout the state, this cultural shift might spread to non-APIP schools as well. Thus, I have included the variable APIP, which represents the number of APIP schools in Texas in a given year. If my hypothesis is correct, we would expect to see a positive value for β3.

The variable NOC is defined as the percent of AP examinations taken by students who had not first completed the corresponding AP course in a Texas high school. Since students who attended a year-long course designed to teach the material covered on a particular AP test would presumably have an advantage over those who did not, it seems logical that a higher share of examinees who have not taken the AP course would lead to a lower exam pass rate. This implies that we would expect β4 to be negative.

The variables PES and WEALTH are both intended to account for monetary resources that might play a part in the success of AP programs in Texas high schools. PES is the amount of annual public education spending in Texas, collected from the FAST website (Financial Allocation Study for Texas, 2011). WEALTH is the annual median property wealth of the school districts that offer AP courses and exams. Significantly positive coefficients for these two variables would signify that the affluence of a school district and public education spending are both positively correlated with AP test pass rates.

The variable TEXP is the average teacher experience (in years) within AP schools in Texas. It seems a logical hypothesis that more experienced teachers would produce higher AP pass rates, so we expect this coefficient to be positive.

Finally, DIST represents the percent of all school districts in Texas that offer AP courses and exams. One of the presumptions I made during my research was that schools that have been offering AP courses for many years
would have an advantage over schools in their first or second year of offering AP courses, since their students would be more used to the structure of AP courses and their teachers would be more familiar with the curriculum. Therefore, I wanted to include a variable that attempted to account for this disparity. My assumption is that if DIST stays constant or changes only slightly, it will not have much effect on the model. However, a big jump in the number of districts offering AP courses in one year represents many schools offering AP courses for the first time, and thus may cause a slight dip in the AP exam pass rate relative to what would be expected if all other variables were held constant. As such, I predict that should a significant coefficient exist for this variable, it would be negative.

All of the data for the above variables was collected from the annual Advanced Placement and International Baccalaureate Examination Results in Texas reports hosted on the Texas Education Agency website, unless otherwise noted. Some of the data was taken directly from the figures in these reports, while other data was acquired through simple calculations from the raw data presented.

One drawback of the model I have presented is that there is no student-specific variable to control for class or economic status. Ideally, I would have included a variable ECON, defined as the percentage of AP examinees who are economically disadvantaged, or even one variable for the percentage of lower-class examinees and another for the percentage of middle-class examinees; however, the data needed for such a variable was only available for the 2009-2010 academic year. In addition, an admitted flaw in the construction of the DIST variable is that the total number of school districts in Texas changes each year. I am not sure whether this is due to redistricting, expansion, or schools closing down, but there were only two back-to-back years within my ten-year sample period during which the total number of school districts (AP and non-AP) was the same. Thus, the percentage of districts offering AP courses is based on a slightly fluctuating total.
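As a small illustration of this construction, the sketch below derives the annual DIST series from hypothetical raw counts, letting the denominator fluctuate year to year. The column names are invented for illustration and are not the TEA's actual field names.

```python
# Sketch: DIST as a percent of a fluctuating total. Reuses the hypothetical
# annual DataFrame `df` from the earlier sketch, here assumed to also carry
# raw counts of AP-offering districts and of all districts (AP and non-AP).
df["DIST"] = 100 * df["ap_districts"] / df["total_districts"]
```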

IV. Results

The regression results from model [1] can be found in Figure 1 below. According to the adjusted R-squared value, the model explains roughly 99.8% of the variation in the AP exam pass rate data. In addition, the F-statistic for the model is quite high (approximately 488.7), signifying that the model is useful in terms of global utility as well. However, if we perform a hypothesis test for each variable individually, with the null hypothesis being that the variable's coefficient is equal to zero, we can reject the null hypothesis with 95% confidence only for the DIST variable. The coefficient for DIST tells us that for every one percent increase in the share of Texas school districts offering AP courses and exams, there is approximately a 1.90% decrease in the AP test pass rate.

While most of the variables in model [1] are not statistically significant at the 95% confidence level, many of the p-values are relatively close to being significant. Because of this, I decided to experiment with a second model of the following form:

[2]  PASS_t = β0 + β1 MIN_t + β2 FEM_t + β3 APIP_t + β4 PES_t + β5 TEXP_t + β6 DIST_t + β7 WEALTH_t + ε_t

Model [2] leaves out the variable for the percentage of exams taken by students who had not taken the corresponding AP course (NOC), since it did not appear to significantly affect AP test pass rates (p-value of .6391). As Figure 2 shows, the regression results from model [2] are much improved: the adjusted R-squared value has slightly increased, and the F-statistic has grown by approximately 306.1, both indicators that model [2] is an even better fit for the AP exam pass rate data. All variables included in model [2] are significant at the 95% confidence level.
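Both the re-estimation and the individual significance tests are easy to reproduce in the same framework. The sketch below refits the model without NOC and then checks, by hand, the t-test for DIST reported in Figure 1. Note that with 10 observations and 9 estimated parameters, model [1] has only one residual degree of freedom, which is why even very large t-statistics come with sizeable p-values. The coefficient and standard error are taken from Figure 1, while `df` and `model1` come from the earlier hypothetical sketch.

```python
import statsmodels.api as sm
from scipy import stats

# Model [2]: the same regression with NOC dropped.
X2 = sm.add_constant(df[["MIN", "FEM", "APIP", "PES", "TEXP", "DIST", "WEALTH"]])
model2 = sm.OLS(df["PASS"], X2).fit()
print(model1.rsquared_adj, model2.rsquared_adj)  # adjusted R^2 rises
print(model1.fvalue, model2.fvalue)              # F-statistic rises by ~306

# By-hand check of the DIST t-test from Figure 1.
# 10 observations - 9 parameters = 1 residual degree of freedom.
coef, se = -1.898057, 0.087296
t_stat = coef / se                            # ~ -21.74, as reported
p_value = 2 * stats.t.sf(abs(t_stat), df=1)   # ~ 0.029 < 0.05, so reject beta = 0
print(t_stat, p_value)
```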

Notice that the coefficient for the variable DIST, the only statistically significant variable from model [1], has changed only slightly and has become more significant (lower p-value).

The results of the model [2] regression give us very interesting information regarding AP exam pass rates, not least with respect to the first variable, MIN. According to the coefficient, for every one percent increase in the proportion of minority examinees to total examinees, there is approximately a 1.73% increase in the AP exam pass rate. This is odd not only because it disagrees with the findings of Klopfenstein (2001) and Solórzano and Ornelas (2002), but also because it seemingly contradicts the trend of the raw data. Graph 1 below shows a scatter plot, derived from the raw data, of the percentage of examinees who are black, Hispanic, or Native American (MIN) against the AP exam pass rate (PASS). There appears to be a clearly negative relationship between the two variables, leading me to believe that the positive coefficient for MIN in Figure 2 is the result of Type I error. Driven by this belief, I conducted a third regression as a means to isolate the MIN variable, using the following simple regression model:

[3]  PASS_t = β0 + β1 MIN_t + ε_t

As predicted, the regression results in Figure 3 show a statistically significant negative coefficient for the MIN variable. The R-squared value and F-statistic are both considerably smaller than in Figure 2, but this is to be expected given the simplicity of model [3].
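The sign reversal on MIN can be reproduced directly in the sketch framework above. A flip like this between the simple and multiple regression coefficients is also a classic symptom of the other regressors being correlated with MIN, which is worth keeping in mind alongside the Type I error interpretation. Here `df` and `model2` again come from the earlier hypothetical sketches, and the comments reference the values reported in the figures.

```python
import statsmodels.api as sm

# Model [3]: MIN alone, versus MIN inside the full model [2].
simple = sm.OLS(df["PASS"], sm.add_constant(df[["MIN"]])).fit()
print(simple.params["MIN"])   # negative (Figure 3 reports ~ -0.76)
print(model2.params["MIN"])   # positive (Figure 2 reports ~ +1.73)
```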

If we continue interpreting the coefficient values from the model [2] regression in Figure 2, we see that there is a positive correlation between AP exam pass rates and the percentage of female test-takers. Specifically, each one percent increase in the proportion of female test-takers should lead to a 7.27% increase in the AP exam pass rate. This may seem like too large an increase, but consider the fact that over the ten-year sample period, the female examinee ratio varied by less than .3%.

The coefficient for APIP is negative, indicating that my hypothesis that the expansion of the APIP would positively affect the statewide AP exam pass rate was incorrect. One reason might be that the added incentive to take AP courses leads to inflated enrollment and larger class sizes, which can often work against academic goals. In addition, the APIP might encourage students to take courses for which they are not prepared or qualified, leading to an increase in low AP exam scores.

Not surprisingly, both teacher experience (TEXP) and public education spending (PES) appear to be positively correlated with AP exam pass rates. Teacher experience shows a particularly strong correlation, with a one-year increase in average teacher experience leading to a 7.85% increase in AP exam pass rates. However, the coefficient of the final variable, median district property wealth (WEALTH), is somewhat surprising. The negative coefficient suggests that as property values increase, the AP exam pass rate decreases. Since property values within a school district tend to be fairly good indicators of the quality of the schools in that district, one would assume this correlation would be positive. One explanation could be that I did not control for inflation in the data for median district property value: while the median district property value was rising over the ten-year period in nominal terms, the trend might look different once prices are adjusted for inflation. I should also point out that although the WEALTH coefficient is negative, its value is very close to zero.

V. Conclusion

There is an inherent danger in attempting to use this study as evidence for definitive conclusions regarding AP testing and its relation to the variables discussed above. Even a theoretical model that shows significant coefficients with 100% confidence has limited
usefulness when drawing broad conclusions, due to the very nature of statistics. These models show only correlation. In other words, even though two variables might display an incredibly strong correlation over a long period of time, there is still no guarantee that they share a causal relationship. The OLS regression models give us no information as to whether one variable is changing as a result of the behavior of another. Thus, I approach the task of drawing conclusions from this data with appropriate caution.

The model [2] regression output gives reason to believe that there are significant correlations between Advanced Placement exam pass rates and many state-level characteristics in Texas. The percentage of female test-takers, annual public education spending in Texas, and average years of teacher experience all displayed a positive correlation with AP exam pass rates. Alternatively, the number of APIP schools, the percent of school districts offering AP courses, and the median property wealth of the AP school districts all showed a negative correlation with AP exam pass rates. The percent of examinees who had not taken an AP class before taking the AP exam proved to make no significant contribution to the model, and thus showed no clear relation to AP exam pass rates. Finally, the percentage of minority examinees displayed both positive and negative coefficients in different regressions, although the raw data seemed to suggest that a negative correlation is more accurate.

These variables show significant correlation with the dependent variable, but is there also a causal relationship between them? I think the best we can say is that it is possible. However, there are innumerable other factors that are undoubtedly relevant in examining the causes of changes in AP test pass rates over time, and there are also many flaws in the design of my study. For example, all of my data comes from state-level characteristics. While this approach gives good insight into broad, macro-level trends, some of the intricacies of the relationship between AP performance and other district- or
school-specific data are left out. If I had been able to access school-specific data, I would have designed a study that breaks Texas high schools into groups based on the diversity of the student population, the wealth of the school, and other similar characteristics. This way, I could have controlled for variables such as class, race, and school experience, and likely seen more pronounced results in terms of how those factors directly relate to AP exam pass rates.

Another shortcoming of this study is that it considers data over a period of only ten years. Here again I was constrained by the availability of data, but a short timeline can make significant variables less relevant within the context of the model. Especially within a broad population, factors like an increase in educational spending or greater budget allocation for specialized programs can have a delayed effect, meaning the corresponding results would not be seen until years later, when the next generation of students goes through the AP program. With the timeline limited to ten years, some of these trends could have been lost.

With the ever-increasing importance that Advanced Placement courses hold for high school students who hope to continue their education in college, exploring the factors that can lead to improved test scores is a topic that deserves continued attention. While it is difficult to be certain, it seems as though there are distinct relationships between characteristics like race, wealth, and gender and student performance on AP exams. In order to be certain that our use of the Advanced Placement program as a means to evaluate student ability and achievement does not inherently give an advantage to certain types of students for non-academic reasons, it is important to ensure that these relationships are only correlations, rather than causations. Otherwise, the AP program is not an accurate measure of academic merit, but rather an indicator of individual characteristics that should have little to do with college
admissions.

Figure 1
Dependent Variable: PASS
Method: Least Squares; Sample: 1 10; Included observations: 10

Variable     Coefficient    Std. Error    t-Statistic    Prob.
C            -383.2724      45.38656      -8.444623      0.0750
MIN          1.782065       0.273534      6.514970       0.0970
FEM          7.278588       0.711722      10.22672       0.0621
APIP         -0.137405      0.027872      -4.929856      0.1274
NOC          -0.040061      0.062923      -0.636666      0.6391
PES          1.229079       0.191826      6.407265       0.0986
TEXP         8.012796       0.926442      8.649002       0.0733
DIST         -1.898057      0.087296      -21.74266      0.0293
WEALTH       -0.000254      3.34E-05      -7.624239      0.0830

R-squared: 0.999744          Adjusted R-squared: 0.997699
S.E. of regression: 0.139994    Sum squared resid: 0.019598
Log likelihood: 16.98510        F-statistic: 488.6992    Prob(F-statistic): 0.034972
Mean dependent var: 48.53000    S.D. dependent var: 2.918162
Akaike info criterion: -1.597020    Schwarz criterion: -1.324694
Hannan-Quinn criter.: -1.895762     Durbin-Watson stat: 2.200615

Figure 2
Dependent Variable: PASS
Method: Least Squares; Sample: 1 10; Included observations: 10

Variable     Coefficient    Std. Error    t-Statistic    Prob.
C            -381.0971      37.93757      -10.04537      0.0098
MIN          1.732661       0.219871      7.880343       0.0157
FEM          7.270121       0.596501      12.18794       0.0067
APIP         -0.133220      0.022705      -5.867487      0.0278
PES          1.221937       0.160524      7.612184       0.0168
TEXP         7.845652       0.744761      10.53445       0.0089
DIST         -1.882537      0.070266      -26.79161      0.0014
WEALTH       -0.000249      2.72E-05      -9.174258      0.0117

R-squared: 0.999641          Adjusted R-squared: 0.998383
S.E. of regression: 0.117351    Sum squared resid: 0.027542
Log likelihood: 15.28369        F-statistic: 794.7598    Prob(F-statistic): 0.001257
Mean dependent var: 48.53000    S.D. dependent var: 2.918162
Akaike info criterion: -1.456738    Schwarz criterion: -1.214670
Hannan-Quinn criter.: -1.722286     Durbin-Watson stat: 2.490532

Graph 1

[Scatter plot: percentage of minority examinees (MIN, vertical axis, 44-56) against AP exam pass rate (PASS, horizontal axis, 44-54), showing a clearly negative relationship between the two variables.]

Figure 3
Dependent Variable: PASS
Method: Least Squares; Sample: 1 10; Included observations: 10

Variable     Coefficient    Std. Error    t-Statistic    Prob.
C            86.49719       7.493878      11.54238       0.0000
MIN          -0.762494      0.150195      -5.076695      0.0010

R-squared: 0.763123          Adjusted R-squared: 0.733514
S.E. of regression: 1.506423    Sum squared resid: 18.15447
Log likelihood: -17.17104       F-statistic: 25.77284    Prob(F-statistic): 0.000957
Mean dependent var: 48.53000    S.D. dependent var: 2.918162
Akaike info criterion: 3.834209     Schwarz criterion: 3.894726
Hannan-Quinn criter.: 3.767822      Durbin-Watson stat: 1.994293

Works Cited

Advanced Placement Strategies, Inc. "Incentive Programs: Creating Partnerships for Academic Excellence." 2008-2011. <http://www.apstrategies.org/IncentivePrograms.aspx>.

College Board. "Choose AP." 2011. <www.collegeboard.com/student/testing/ap/about.html>.

College Board. "College Board Tests." 2011. <www.collegeboard.com/testing>.

Combs, Susan. "Public Education Spending in Texas." Financial Allocation Study for Texas. <fastexas.org/study/exec/spending.php>.

Gamoran, Adam. "The Stratification of High School Learning Opportunities." Sociology of Education, vol. 60, no. 3. American Sociological Association, July 1987.

Gándara, Patricia C., Gary Orfield, and Catherine L. Horn. Expanding Opportunity in Higher Education: Leveraging Promise. SUNY Series: Frontiers in Education. SUNY Press, 2006.

Jackson, C. Kirabo. "Cash for Test Scores: The Impact of the Texas Advanced Placement Incentive Program." Education Next, Fall 2008.

Klopfenstein, Kristin. "Advanced Placement: Do Minorities Have Equal Opportunity?" Economics of Education Review, vol. 23, no. 2. Fort Worth, TX, May 22, 2001.

Klopfenstein, Kristin. "The Advanced Placement Expansion of the 1990s: How Did Traditionally Underserved Students Fare?" Education Policy Analysis Archives, vol. 12, no. 68, December 12, 2004.

Klopfenstein, Kristin, and M. Kathleen Thomas. "The Link Between Advanced Placement Experience and Early College Success." Southern Economic Journal, vol. 75, no. 3. Southern Economic Association, January 2009.

Mazzeo, John, Alicia P. Schmitt, and Carole A. Bleistein. "Sex-Related Performance Differences on Constructed-Response and Multiple-Choice Sections of Advanced Placement Examinations." College Board Report. College Entrance Examination Board, New York, 1993.

Solórzano, Daniel G., and Arminda Ornelas. "A Critical Race Analysis of Advanced Placement Classes: A Case of Educational Inequality." Journal of Latinos and Education, vol. 1, no. 4, 2002.

Sacks, Peter. Tearing Down the Gates: Confronting the Class Divide in American Education. University of California Press, 2007.

Texas Education Agency. "Advanced Placement (AP) and International Baccalaureate (IB) Reports, 1999-2000 through 2009-2010." <http://www.tea.state.tx.us/acctres/ac_ib_index.html>.
