C. 1. Narrative. Standard 2: Assessment System and Unit Evaluation

Assessment System

The Unit assessment system is guided by and articulated in the Unit Conceptual Framework. We believe in purposeful, systematic, and ongoing evaluation of candidate, faculty, program, and unit-wide performance. Our assessment system takes a methodical, deliberate approach to planning, implementing, and evaluating assessment strategies and data, with the ultimate goal of ensuring that all candidates who enter our programs exit with the knowledge, skills, and dispositions necessary to be successful in improving student learning. The Unit and the professional community are committed “to continuing improvement based on analysis of evidence of candidate and program accomplishment relative to professional standards and guidelines” (Conceptual Framework).

The Unit assessment system is directed by the NCATE Leadership Team consisting of the Unit Head, the Assistant Dean of the College of Education and Behavioral Sciences (also the NCATE Coordinator), the Director of the School for Teacher Education, the Director of the School of Special Education, and the Assessment Co-Coordinator. The Office of Budget and Institutional Analysis (OBIA) is another important partner in the Unit-wide assessment system and provides access to institutional data and analysis support. An important feature of the Banner Information Management System (known at UNC as Ursa) is the Insight reporting portal that allows all faculty members and program coordinators to gain access to current data.

The Unit Assessment Handbook makes the Unit Assessment System readily available to all Unit faculty and administrators. The collection, analysis, and evaluation of data from various program transition points are managed at three organizational levels: 1) candidate-level assessment; 2) program-level assessment; and 3) Unit-level assessment. The Handbook outlines the assessment levels, the sources of data, who collects and analyzes the data, and when data are collected and returned to programs. It prescribes procedures to ensure that key assessments of candidate performance and evaluations of Unit operations are fair, accurate, consistent, and free of bias.

Data Collection, Analysis, and Evaluation

Every academic program involved in the preparation of teachers and other school professionals has identified key assessments for Initial and Advanced transition points across its programs. These assessments evaluate candidates’ knowledge, skills, and professional dispositions at five transition points: admission, entry to clinical practice, exit from clinical practice, program completion, and after program completion.

  • At the candidate assessment level, program coordinators are primarily responsible for managing the use of the various candidate assessments and for ensuring that all scores are compiled and stored, either electronically or on paper. The data are then summarized and analyzed by coordinators or in the CEBS Dean’s Office. Individuals responsible for entering and analyzing data follow a schedule that requires academic-year data analysis to be completed and returned to program directors and coordinators annually by August 15th.
  • At the program assessment level, four annual steps use the assessment data to inform program quality: 1) review annual assessment profiles and admissions data; 2) evaluate data profiles using the assessment data worksheet; 3) enter the data evaluation and program strengths and limitations into TracDat, creating an ongoing record of how each program uses data to inform continuous program quality; and 4) complete the annual university program review report. Samples of these four items for selected programs are available at Annual Assessment Management Reports.
  • At the Unit assessment level, the NCATE Coordinator organizes, analyzes, and reports Unit-wide candidate, faculty, and program data, which are shared systematically with Unit administrators, faculty, candidates, and stakeholders. The CEBS website and the CEBS Assessment System and Unit Evaluation Portfolio contain multiple university, state, and nationally required reports and information for internal and external constituents.

Admissions data cut across all three levels of assessment. In Initial programs, the Checkpoint courses provide a detailed picture of candidates meeting admission standards. Candidates take EDFE 110 at initial admission, EDFE 120 at full admission (Postbac students combine both courses in EDFE 125), and EDFE 130 as their application for student teaching. To pass any of these courses, a candidate must meet certain GPA requirements (including major GPA), obtain the approval of the program coordinator (who reviews transcripts), pass a criminal background check, and demonstrate understanding of teacher education policies. The data below show that the programs are selective and maintain strict adherence to the established requirements.

08-09 AY

Course     N Enrolled   N Passed   % Passed   Mean GPA   Mean GPA (Passed)   Mean GPA (Not Passed)
EDFE 110   726          601        82.78%     3.13       3.22                2.72
EDFE 120   529          460        86.96%     3.37       3.42                3.08
EDFE 125   111          102        91.89%     3.59       3.61                3.23
EDFE 130   574          550        95.82%     3.47       3.48                3.08
Total      1,940        1,713      88.30%     3.32       3.38                2.88

09-10 AY

Course     N Enrolled   N Passed   % Passed   Mean GPA   Mean GPA (Passed)   Mean GPA (Not Passed)
EDFE 110   898          685        76.28%     3.10       3.22                2.73
EDFE 120   571          480        84.06%     3.32       3.35                3.15
EDFE 125   143          129        90.21%     3.59       3.65                2.88
EDFE 130   525          336        64.00%     3.46       3.51                3.35
Total      2,137        1,630      76.28%     3.28       3.35                3.05
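As an illustrative arithmetic check (a minimal sketch; the course labels and counts are copied from the 09-10 AY table above), the reported pass rates follow directly from the enrollment and passing counts:

```python
# Recompute % Passed = N Passed / N Enrolled for each checkpoint course
# and for the total, using the counts from the 09-10 AY table above.
courses = {
    "EDFE 110": (898, 685),
    "EDFE 120": (571, 480),
    "EDFE 125": (143, 129),
    "EDFE 130": (525, 336),
}

for name, (enrolled, passed) in courses.items():
    print(f"{name}: {passed / enrolled:.2%} passed")

total_enrolled = sum(e for e, _ in courses.values())
total_passed = sum(p for _, p in courses.values())
print(f"Total: {total_passed}/{total_enrolled} = {total_passed / total_enrolled:.2%}")
```

The same recomputation against the 08-09 AY counts reproduces that table’s percentages as well.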

All advanced programs comply with the Graduate School’s rigorous admission policies. Applicants must demonstrate a 3.0 GPA in the last 60 credit hours of coursework or a combined GRE score of 1000 or greater. Additionally, most programs require an application essay or a writing sample, three letters of recommendation, and a valid teaching license. The unit-wide mean GPA on admission for Advanced programs tends to be higher than that for non-teacher-education graduate programs.



[Figure: mean admission GPA, Unit versus non-Unit graduate programs]
The Unit is intentional in addressing informal and formal candidate complaints in a responsive and judicious manner. Candidates are directed to the university Student Code of Conduct and the Student Handbook as resources for understanding the policies on how their complaints are handled. As part of the resolution process, candidates are instructed to first raise their concern with their instructor or advisor. If the matter is not resolved, the candidate appeals to the program coordinator and then to the School Director. If the issue is still not resolved, the candidate may appeal to the Dean of the college. A chronicle of candidate concerns and any letters of formal dismissal are maintained by school directors and college deans.

Candidates provide ongoing feedback on the performance of STE staff and Teacher Education Advisors through program-specific surveys, and they complete teaching evaluations on instructors for every course. These data are reviewed by school directors and deans on an annual or semi-annual basis. Informal gatherings such as the “lemonade and cookies” event and focus groups help gather additional information from candidates.

When appropriate, assessment data are disaggregated by on-campus, off-campus, and distance-learning programs. Some of our programs are offered in a distance-learning format but are not offered on campus. The Post Baccalaureate Elementary licensure program is offered in three locations, and data are disaggregated by location. The Elementary Program (on campus) and the Elementary Program at CUE (Denver, off campus) have different coursework, and data are disaggregated accordingly. The School Psychology Ed.S. Program is offered both on and off campus. A review of the data from these programs shows that candidates in off-campus programs perform at the same level of proficiency as those in on-campus programs.

In addition, as part of a 2005-06 U.S. Department of Education grant, the College of Education and Behavioral Sciences Dean’s Office completed an extensive program review of the elementary initial licensure programs, including the on-campus program, the post-baccalaureate programs (on and off campus), and the Center for Urban Education in Denver. The Unit Head provided additional funding for the evaluation to be continued during the 2006-2007 academic year.

Use of Data for Program Improvement

The next section reports examples of changes made to courses, programs, and the Unit in response to data gathered from the assessment system. See also Section B.2 of this report.

2007 Elementary PTEP Revision (the project is described in B.2): In 2008-09, the program migrated to an Electronic Capstone Project, a new student teaching evaluation form and exit survey, and a new Field Assessment Form (FAF).

Secondary PTEP Revision: Feedback from SPA reviewers on the secondary and K-12 programs submitted in September 2007 has provided opportunities for programs to create student teaching evaluation and exit survey instruments that specifically assess SPA content standards. These new content-specific rubrics are completed by faculty supervisors during candidates’ student teaching experience. The program has also transitioned to an electronic Work Sample Portfolio and designed a new Lesson Observation form.

Evaluation of Faculty Teaching Performance

At the Unit level, the teaching quality of our faculty was analyzed across the last nine semesters through a review of the Instructor Evaluation Survey, which is completed every semester in all on- and off-campus courses. The results documented our faculty’s strong teaching performance, with the means on all sixteen questions ranging from 4.10 to 4.66 on a 5-point Likert scale. An analysis of the survey itself revealed Cronbach’s alpha reliability coefficients of .970 for spring semester 2008 (N = 3,756) and .971 for fall semester 2008 (N = 4,127). Despite the high reliability of the instrument, the Unit Head undertook a revision of the survey, seeking to improve it through a thorough review of the content and face validity of each item. The College Diversity Committee reviewed the instrument and added a question to ensure that diversity was addressed in the survey. The CEBS New Faculty Evaluation Form is being pilot tested in electronic format at the end of Spring Semester 2010 and will be used in all on-campus, off-campus, and online courses.
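The Cronbach’s alpha coefficients cited above summarize the internal consistency of the survey’s items. As a minimal, hypothetical sketch of how such a coefficient is computed (the response matrix below is invented for illustration and is not actual Instructor Evaluation Survey data):

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)
# where k is the number of survey items. Any consistent variance divisor (n or n-1)
# cancels in the ratio, so population variance is used here for simplicity.

def cronbach_alpha(responses):
    """responses: list of respondents, each a list of item scores."""
    k = len(responses[0])  # number of items

    def variance(values):  # population variance
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / len(values)

    item_vars = [variance([r[i] for r in responses]) for i in range(k)]
    total_var = variance([sum(r) for r in responses])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Five hypothetical respondents rating four items on a 5-point Likert scale.
ratings = [
    [5, 4, 5, 5],
    [4, 4, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
]
print(f"alpha = {cronbach_alpha(ratings):.3f}")
```

Coefficients near .97, as reported for the Unit’s survey, indicate very high internal consistency across items.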

Various sources of data indicated classroom management as a perceived weakness of our candidates. The multiple ways of writing lesson plans were also noted as confusing and sometimes contradictory. In response, the Unit faculty have developed classroom management, classroom assessment, and lesson planning guides.