
The Atlanta Journal Constitution has run a series of interesting news and opinion pieces in the last week centering on a recent release of the National Council on Teacher Quality (NCTQ). This approximately 100-page document (which can be found at http://www.nctq.org/dmsStage/Teacher_Prep_Review_2013_Report ) summarizes the findings of a multi-year study of over 1000 institutions, which collectively prepare over 99 percent of the nation's traditionally trained new teachers. The institutions were evaluated (depending upon the program) on up to 18 different criteria, including student selection, content preparation, professional skills, and outcomes/post-graduate monitoring.

The main objective of the NCTQ report, which I found rather interesting in its novelty, was to evaluate education programs the way we would expect to see consumer products ranked. At times it seemed like NCTQ was doing this for future teachers, informing them which schools would best prepare them for their future careers; at other times it seemed like NCTQ was targeting parents and school districts. Their explanation: "There have been many attempts over the years to address weaknesses in teacher preparation.... None actively sought to engage the power of the marketplace as the engine for change. Without pressure from the consumer, there was no pressure on institutions to conduct themselves differently, if for no other reason than to remain viable. It's time for a different tactic." (pg. 57)

Not only was there a more traditional 4-star-max system, but warning signs were given to programs that could not earn even a single star. The NCTQ explains how these warning signs should be considered: "Consumer alerts: of the 1200 elementary and secondary programs for which we are able to provide an overall rating, about one in seven earns less than one star. The universal 'warning' symbol is used to alert consumers and school districts to their low rating in our evaluation. A program's low rating does not suggest that many of its graduates don't go on to become capable teachers.
What the low rating does suggest is that the program isn't adding sufficient value, so that someone who wants to become a teacher would be better off investing time and tuition dollars elsewhere." (pg. 14)

The thing that completely shocked me about the overall report, however, was the huge discrepancy between secondary and elementary education programs. Here are some interesting "facts": Despite evaluating 1200 programs, only FOUR received the highest ranking of four stars. None of those four was an elementary education program; they were split evenly between undergraduate secondary education and graduate secondary education. 105 programs earned 3 or more stars. Exactly 21 of those (20%) were in elementary education; of those 21, only 2 were graduate programs.

In my great state of Georgia, here are the results for ALL the evaluated elementary education programs:

Albany State (undergrad)--warning sign
Armstrong Atlantic (undergrad)--warning sign
Augusta State (undergrad)--1 star
Brenau (undergrad)--1 star
Columbus State (undergrad)--warning sign
Dalton State (undergrad)--1 star
Gainesville State (undergrad)--1 star
Georgia College and State University (undergrad)--2 stars
Georgia Southern (undergrad)--1 star
Georgia State (undergrad)--1 star
Gordon State (undergrad)--1 star
Macon State (undergrad)--1 star
Mercer (graduate)--1 star
North Georgia (undergrad)--1 star
Piedmont College (undergrad)--1 star
University of Georgia (undergrad)--2 stars
University of Georgia (graduate)--1.5 stars
University of West Georgia (undergrad)--warning sign
Valdosta State (undergrad)--1.5 stars
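To make the numbers above concrete, here is a quick tally of Georgia's elementary ratings as listed (grouping the sub-one-star "warning sign" programs together), along with the national 21-of-105 proportion. The grouping into a "warning" bucket is my own bookkeeping, not NCTQ's:

```python
from collections import Counter

# Ratings for Georgia's 19 evaluated elementary education programs,
# transcribed from the list above; "warning" marks programs that
# earned less than one star (NCTQ's consumer-alert symbol).
ratings = [
    "warning", "warning", "1", "1", "warning", "1", "1", "2",
    "1", "1", "1", "1", "1", "1", "1", "2", "1.5", "warning", "1.5",
]

tally = Counter(ratings)
print(dict(tally))  # 4 warnings, 11 one-star, 2 at 1.5 stars, 2 two-star

# Nationally: 21 of the 105 programs earning 3+ stars were elementary
print(f"{21 / 105:.0%}")  # -> 20%
```

So of the nineteen Georgia elementary programs, fifteen earned one star or less, and none earned even three.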

[To be fair, the state of Georgia had 22 secondary programs evaluated, the lowest of which scored 1.5 stars--thus tying or beating ALL BUT TWO elementary programs.]

Now, I am a mathematician. Unlike many mathematicians, I have taken multiple education courses, and I have even taught a content course for elementary education majors; however, I do not pretend to know the ins and outs of education programs. But, on the surface, this seems absolutely mind-blowing to me. One would expect that--regardless of concentration in secondary or elementary--education majors should all have to take pedagogy, curriculum development, child psychology, and methodology courses. We would all expect education majors to take multiple courses on the actual content they will be teaching. We would all expect education majors to spend at least one semester as a teacher's aide, or otherwise in the classroom, before graduation. So why is there such a HUGE discrepancy (not just within the state of Georgia) between the quality of elementary and secondary education programs? What are we doing to prepare the secondary education majors that we are NOT doing for the elementary education majors?

Returning to the notion of colleges of education providing a service to consumers, will this kind of mentality work? Suppose a future teacher cannot afford to attend one of the higher-rated programs. Suppose a future teacher is not geographically close to a higher-ranked program. Is this ranking system really going to force the conveniently located, lower-ranked schools to change? Then let's consider the highly selective, competitive, well-preparing, highly ranked schools. If they get overwhelmed by an increased number of applicants, will that not change their current structure? Will they let in more students, or will they have to turn away more students than they ever could have imagined?

Before opening the floor to others' comments and opinions, I'll end with a sad but fitting quote from the report: "Unfortunately, the fact that there are so few institutions that do well in the first edition of the Teacher Prep Review suggests that consumers will have their work cut out for them.
It is not just conceivable, but likely, that many aspiring teachers and school districts will not be able to locate a highly-rated program anywhere near them." (pg. 57)
