Standard 2: Assessment System and Unit Evaluation
The unit has an assessment system that collects and analyzes data on applicant qualifications, candidate and graduate performance, and unit operations to evaluate and improve the performance of candidates, the unit, and its programs.

Evidence for the Onsite BOE Team to validate during the onsite visit

1) Changes in unit operations (e.g., advising system, admissions, etc.). What are some specific examples of changes that have resulted from research studies conducted using data generated from the unit assessment system?

 

Streamlining Assessments
The most significant change in unit operations has occurred in the revision and streamlining of massive amounts of nearly unmanageable unit-wide data. Only one program was collecting data electronically in 2007; since then, we have made significant strides in using electronic portfolios, numerous electronic surveys and rubrics, and an electronic program review management system. In 2007, programs began implementing iWebfolio, an internet-based electronic portfolio that allows candidates to receive organized electronic feedback on various assessments and allows programs to collect assessment data electronically.

In 2008-09 the initial programs migrated to the Electronic Capstone Project. The new student teaching evaluation form and exit survey were fully implemented in 2010, and a new Field Assessment Form (FAF) replaced obsolete multi-page forms that required cooperating teachers and supervisors to evaluate candidates on over 50 standard elements. In addition, feedback on our programs from 2007 SPA reviewers led to a unit-wide revision of rubrics to include specific SPA content standards. These new content-specific rubrics are completed by faculty supervisors during candidates' student teaching experience.

The Unit continued assisting programs in using TracDat, a university-wide electronic program review management system. The CEBS and Unit Assessment Team continues to review the program assessment plans and annual reports each academic year and provides feedback on the extent to which the plans address established criteria for an effective, comprehensive, and data-informed assessment system.

Course Evaluations of Faculty Teaching
An analysis of faculty evaluation resulted in the revision of the course evaluation form and the implementation of an electronic process so that all on-campus, off-campus, and online courses could be evaluated. The teaching quality of our faculty was initially analyzed across nine semesters using the Instructor Evaluation Survey, which was completed every semester in all on- and off-campus courses. The results documented our faculty's strong teaching performance, with means on all sixteen questions ranging from 4.10 to 4.66 on a 5-point Likert scale. An analysis of the survey itself revealed that the Cronbach's alpha reliability coefficients on the Instructor Evaluation Survey were .970 for spring semester 2008 (N = 3756) and .971 for fall semester 2008 (N = 4127). Despite the high reliability of the instrument, the Unit Head undertook a revision of the survey, seeking to improve it through a thorough review of the content and face validity of each item. This afforded the College Diversity Committee the opportunity to review the instrument, and a question was added to ensure that diversity was addressed in the survey. The CEBS New Faculty Evaluation Form was pilot tested in electronic format in spring 2010 and was used in all on-campus, off-campus, and online courses. The Cronbach's alpha reliability coefficient for the instructor course evaluation survey gathered through the online evaluation kit was .974. The three highest-rated overall course evaluation items are consistent across on-campus and online courses and can be interpreted to indicate that instructors created a classroom environment that was inclusive and respectful of diversity, that instructors were knowledgeable, and that assignments/tests were related to course objectives.

When paper-based evaluation survey data were analyzed, the overall mean for courses offered on campus was 4.40 on a 5-point scale, while the overall mean for courses offered off campus through extended studies was 4.49. When online evaluation kit survey data were analyzed, the overall mean for courses offered on campus was 4.51, while the overall mean for courses offered online was 3.98. The data show that the program-level overall means for on-campus courses, whether collected through paper-based or online surveys, were all above 4.00 (100%), while a majority of the programs offering online courses (5 out of 8, or 62.5%) had overall means below 4.00. The unit-wide analysis of faculty quality will be ongoing; the 2009-2010 Unit Assessment Report provides the complete analysis.
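For illustration only, the Cronbach's alpha coefficients reported above (computed in SPSS for these analyses) could be reproduced from a respondents-by-items matrix of ratings; the short Python sketch below assumes a hypothetical CSV file and column names (q1 through q16) that are not part of the actual assessment system:

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of Likert ratings."""
    items = items.dropna()                              # listwise deletion of incomplete surveys
    k = items.shape[1]                                  # number of survey items
    item_variances = items.var(axis=0, ddof=1).sum()    # sum of individual item variances
    total_variance = items.sum(axis=1).var(ddof=1)      # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical usage: sixteen course-evaluation items rated 1-5 by each respondent.
# responses = pd.read_csv("instructor_evaluation_spring2008.csv")   # hypothetical file name
# print(round(cronbach_alpha(responses[[f"q{i}" for i in range(1, 17)]]), 3))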

Analysis of Unit-Wide Diversity Data
Analysis of our unit-wide data on the diversity of our faculty and candidates was one reason the Unit decided to move to "target" on Standard Four: Diversity. We want to continue to recruit and retain diverse faculty and candidates so that our demographic data are more representative of the diversity in our state and nation. The Teacher Education Faculty (TEF) demographic data show a consistent pattern over the last two academic years: approximately 53% of the faculty members were female and 47% were male; 88% are Caucasian American, and the largest minority ethnic/racial group is Hispanic American at 3%. In comparison with all faculty at the institution, the TEF includes 2% more women and 2% more Caucasian Americans. Candidates experience greater diversity in faculty in the public school setting than on campus: the demographic data for our cooperating teachers show that 78% are Caucasian American and 5% are Hispanic or Latino, and university supervisors are also more diverse than the on-campus faculty, with 79% reporting Caucasian American and 6% Hispanic or Latino.

Efforts to increase the diversity of teacher candidates are evident in the analysis of our Candidate Demographic Data, which show increased graduation rates for Hispanic and Latino initial undergraduates (from 8.8% in 2005-2006 to 12.8% in 2008-2009). We have also seen an increase in American Indian advanced graduates, from 0.7% in 2005-2006 to 2.0% in 2008-2009. Contrary to the national trend, Unit graduates are only slightly less diverse than graduates from the institution as a whole, but the Unit graduates Hispanic candidates at a higher rate over time, with 12.8% in 2008-2009 compared to 9.0% for the institution. However, we are committed to increasing our candidate diversity and have made this one of the goals in our Diversity Initiative within http://www.unco.edu/cebs/ncate/E2Growth.html.
 
What changes have occurred as a result of studies on the fairness, accuracy, and consistency of assessment procedures?

Dispositions Rubric
The analysis report on the development of the new Dispositions Rubric is an excellent example of a change that has occurred as a result of studies on the fairness, accuracy, and consistency of assessment procedures. The original PDQ, a 30-item tool on a 6-point Likert scale, was revised after the 2008 PDQ analysis, which included a factor analysis and Cronbach's alpha reliability coefficients, found limitations in the PDQ. Initial data analysis on the PDQ indicated that teacher candidates rated themselves very high (Always and Frequently) in all three areas of the PDQ. In addition, data from the cooperating teachers did not differentiate among teacher candidates well enough.

Faculty on the unit-wide Dispositions Committee revised the PDQ in 2009-2010 and provided the Unit with a new Dispositions Rubric intended to measure "Engagement," "Effort," "Initiative," and "Fairness and Equity" during on-campus coursework as well as field experiences. Unlike the original PDQ, the new instrument is not based on a Likert scale but provides clear and detailed explanations for the four performance levels: unsatisfactory, developing, proficient, and advanced. In addition, the new section on "Fairness and Equity" was added to align the instrument with our Unit/NCATE beliefs that candidates strive to meet the educational needs of all students in a caring, non-discriminatory, and equitable manner and demonstrate the belief that all students can learn. The study of this instrument is ongoing.

Reading Content Examinations
A Reading Content Examination (RCE), developed by the Reading Program faculty, was implemented as a pre- and post-assessment in selected literacy courses within the elementary and elementary post-baccalaureate programs during fall semester 2009. Initial analysis of the pre-test results showed that at the beginning of their literacy methods courses, candidates across both programs obtained an average score of 58%, indicating that candidates had developed a beginning level of literacy knowledge in courses taken prior to methods. However, candidates who responded at the conclusion of the semester accurately answered 65% of the questions. Among the classes taking the test as a pre-exam, the average percentage correct ranged from 54% for EDRD 410 (Achieving Effective Instruction in Developmental Reading) to 73% for EDEL 540 (Effective Instruction in Elementary School English/Language Arts). Among the three classes taking the exam at the end of the semester, scores ranged from 58% for EDEC 460 (Early Childhood Curriculum I Language Arts and Social Studies) to 69% for EDRD 411 (Elementary Reading Diagnosis and Individualization). The percentage of correct responses was also analyzed for each class, and instructors used this information to recognize specific areas of strength and weakness and modified instruction accordingly.

The reliability of the exam was high, with a Cronbach's alpha of .819. However, the results of a factor analysis suggested that there were over 40 factors explaining 86% of the variance. This result (numerous factors, each explaining a relatively small portion of the variance) raised questions about the validity of the exam. We hypothesized that the problem-solving nature of the questions could be influencing the results: the answers are not easily determined and require comprehension, application, and in-depth levels of analysis, so it is possible that the questions are too difficult to load onto one factor. The faculty continue to analyze the instrument for content validity and greater reliability.
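For illustration only (the factor analysis above was conducted in SPSS), an eigenvalue screen of the item intercorrelation matrix shows how the number of components needed to reach a given share of variance can be counted; the Python sketch below assumes a hypothetical candidates-by-items matrix of 0/1 item scores, not the actual RCE data file:

import numpy as np
import pandas as pd

def components_for_variance(scores: pd.DataFrame, target: float = 0.86) -> int:
    """Count principal components needed to explain `target` share of item variance."""
    corr = np.corrcoef(scores.to_numpy(), rowvar=False)      # item intercorrelation matrix
    eigenvalues = np.linalg.eigvalsh(corr)[::-1]             # eigenvalues, largest first
    explained = np.cumsum(eigenvalues) / eigenvalues.sum()   # cumulative proportion of variance
    return int(np.searchsorted(explained, target)) + 1

# Hypothetical usage with a candidates-by-items matrix of scored (0/1) RCE responses:
# rce = pd.read_csv("rce_item_scores_fall2009.csv")          # hypothetical file name
# print(components_for_variance(rce))   # a large count suggests no single dominant factor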

Based on the findings of the RCE analysis, the faculty decided during summer 2010 that it would be more suitable to implement a more developmentally appropriate examination used in individual courses as a pre-post measure, rather than a single examination comprehensive of the entire program. The Emergent Reading Content Examination (ERCE) was created and implemented in fall 2010. Preliminary analysis of the examination reveals that considerably more time needs to be spent on the development of this reading examination.

Final Student Teaching Evaluation
The new student teaching evaluation form and exit survey, which had been under development for the previous two years, were implemented across 22 programs during the 2009-2010 academic year. Initial analysis of the reliability of the student teaching instrument revealed a Cronbach's alpha reliability coefficient of .923. The instrument is being discussed in faculty meetings during fall 2010 in order to determine the content validity of the survey, and recommendations for the revision of several questions have already been gathered.

Omission of “Not Observed” Rubric Option
During the summer 2010 data analysis process, it was noticed that some student teaching rubrics used in the Elementary Post Bac Program and the Special Education Generalist Programs contained a "Not Observed" option. This option makes analysis of candidate proficiency difficult because it is not possible to document that all candidates met all the individual indicators. Analysis of the questions on which candidates were rated "not observed" revealed that in some situations the questions were inappropriate and would need to be rewritten if the "not observed" option were to be continued. The Special Education Generalist MA program decided to completely change its performance-based checklist to align with the clearly stated CEC standards rather than the Performance-based Standards for Colorado Teachers, because the state standards were stated in vague terms that limited reliability and validity. In both programs, the faculty decided to delete the option altogether.

Reliability and Inter-Rater Reliability of Final Student Teaching Form
The new Final Student Teaching Form was fully implemented in fall 2009. During the summer following the 2009-2010 academic year, reliability analyses using Cronbach's alpha and independent t-tests were run in SPSS, statistical software commonly used in social science research. Secondary PTEP and Elementary PTEP data were analyzed independently of each other, and reliability was obtained for fall and spring semesters separately. Cronbach's alpha, a measure of internal consistency, is considered acceptable in most social sciences at a reliability coefficient of .70 or higher. As illustrated in the Final Student Teaching Form Analysis, the reliability coefficients for Elementary and Secondary PTEP exceeded .70 in both semesters: Secondary PTEP data resulted in reliability coefficients of .962 for fall semester and .963 for spring semester, and Elementary PTEP data resulted in coefficients of .965 for fall semester and .949 for spring semester. The results provide evidence that the Final Form has statistically sound reliability for both Elementary and Secondary PTEP.

An independent t-test was run for Elementary and Secondary PTEP to compare the distributions of the cooperating teacher and university supervisor sample means for each question. Because the number of teacher candidates who completed the form differed between the Elementary and Secondary PTEP groups, a weighted mean was calculated prior to running the analysis to obtain an accurate combined population variance. The normality assumption was evaluated with a normality test, and equality of variances was examined and verified through Levene's test. Further analyses will be conducted as the evaluation of the form evolves.
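For illustration only (the analyses above were performed in SPSS), the same sequence of assumption checks and an independent-samples t-test for a single form question could be sketched in Python as follows; the file and column names are hypothetical and not part of the actual data system:

import pandas as pd
from scipy import stats

def compare_rater_groups(teacher_ratings: pd.Series, supervisor_ratings: pd.Series) -> None:
    """Check assumptions and compare cooperating-teacher vs. supervisor ratings on one item."""
    teacher = teacher_ratings.dropna()
    supervisor = supervisor_ratings.dropna()

    # Normality check for each rater group (Shapiro-Wilk test).
    for label, ratings in [("cooperating teachers", teacher), ("supervisors", supervisor)]:
        stat, p = stats.shapiro(ratings)
        print(f"Shapiro-Wilk, {label}: W = {stat:.3f}, p = {p:.3f}")

    # Equality of variances (Levene's test).
    lev_stat, lev_p = stats.levene(teacher, supervisor)
    print(f"Levene: W = {lev_stat:.3f}, p = {lev_p:.3f}")

    # Independent-samples t-test; pool variances only if Levene's test is non-significant.
    t_stat, t_p = stats.ttest_ind(teacher, supervisor, equal_var=(lev_p > 0.05))
    print(f"t = {t_stat:.3f}, p = {t_p:.3f}")

# Hypothetical usage with one question from the Final Student Teaching Form:
# df = pd.read_csv("final_form_2009_2010.csv")   # hypothetical file; columns: rater_role, q1, ...
# compare_rater_groups(df.loc[df["rater_role"] == "cooperating_teacher", "q1"],
#                      df.loc[df["rater_role"] == "university_supervisor", "q1"])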

Assessments for Early Field Experiences
The elementary, early childhood, and elementary post bac coordinators met during summer 2010 to discuss the omission of program assessment in the early program practicum experiences.  It seemed that in the effort to streamline assessment and data collection, instruments that provided feedback about candidates prior to student teaching had been omitted.  Faculty developed two new instruments: 1) a six-question survey on candidates’ dispositions for cooperating teachers to complete at mid-term, and 2) a final practicum evaluation for cooperating teachers to complete on the candidates at the end of the practicum semester.  In addition, the previously used candidate exit-survey was reinstated so candidates can self-rate their proficiency on the standards and evaluate the practicum experience and their cooperating teachers. The data will be analyzed in January 2011 and returned to the faculty by March 2011 so faculty can make necessary changes for fall 2011.

2) Program changes based on research studies and data. What are some specific program changes that have resulted from data collected from the unit assessment system?  

 

The 2007 Elementary PTEP Revision was guided substantially by a research study and program data. A reading program research study was conducted of ten elementary programs in universities participating in Teachers for a New Era; faculty believed that reviewing state-of-the-art programs at selected schools of education would help inform our own program revisions. When a preliminary revision was developed, over 20 cooperating teachers completed a survey asking for feedback on a significantly changed "block" program. The results of the 2006 and 2007 1st and 2nd year graduate surveys also informed the changes. Math and science courses in the Elementary Interdisciplinary Liberal Arts major, implemented in 2000, were revised based on candidate feedback in ongoing advising surveys administered by the ISET Advising Center between 2005 and 2009.

Feedback from 2008-2009 candidate surveys on their early field experience indicated a need for change. Candidates were tutoring after school in local schools without direct supervision, and faculty members were somewhat disconnected from the type of tutoring our candidates were providing. These challenges resulted in the formation of a partnership committee, consisting of literacy faculty and local administrators, to develop a more meaningful experience for 2009-2010. The committee developed the Reading Achiever Program, which closely connects candidates' coursework with the after-school tutoring experience. In addition, literacy coaches from the district and faculty supervise the tutoring and debrief with the candidates following the tutoring experience.

Ongoing data from the Secondary Professional Teacher Education Program have guided discussions and decisions made by the STEP Coordinating Council for the past two years. Content in the field experience phases has been redesigned to address candidate challenges with using assessment to differentiate instruction, classroom management, and working with English language learners. The council convened faculty who teach the education prerequisite courses and charged them with the responsibility of creating a curriculum map of how the three program themes are embedded in the coursework. Newly organized program phases will be implemented in fall 2011.

The Elementary Post-Bac Program has been under redesign for the last year. The first models were guided by an Eduventures report on post bac programs in the state. Two different models were created during 2009-2010, but faculty members were not satisfied with either design. A committee was convened to work during summer 2010; it completed a more specific study of regional post bac programs and conducted a candidate survey of the summer 2010 cohort in order to get ideas from candidates currently in the program. The committee developed 11 assumptions to guide its decision making, including an understanding that the type of innovations desired by the faculty could only be completed with another year of planning. Therefore, the first-phase design reduces the number of program credits from 48 to 40 and aligns the program with other regional program requirements. The elementary faculty approved the new design on September 13, 2010, and the new program will begin in summer 2011. The committee will continue its work and have the second-phase revision completed by fall 2011 for implementation in summer 2012.

3) Involvement of P-12 educators in the unit assessment system. How were P-12 educators involved in the changes that have occurred in the unit assessment system? How do school partners participate in the ongoing improvement of the educator preparation programs at UNC? How have school partners benefited from changes at UNC?

 

The Professional Education Council (PEC) offers representatives from local school districts an opportunity to join in the ongoing conversations related to the unit assessment system and the ongoing challenges of educator preparation. One ongoing conversation in 2009-2010 involved the use of the Teacher Insight online questionnaire as a screening device for student teaching placement in one local school district. A Teacher Insight Research Study was conducted on the questionnaire and discussed by the PEC; the study was also provided to the school district in an attempt to inform the district's decision making.

The Regional District Task Force provides another opportunity for P-12 educators to be involved in the unit assessment system. In fact, the task force members suggested the Unit conduct focus groups of principals and cooperating teachers in order to determine strengths and needs of our programs. The focus groups were conducted in spring 2009, and the results and the program changes planned for spring 2010 were reported to the task force at the November 2009 meeting (http://www.unco.edu/cebs/ncate/Focus_group_data_schedule_questions.pdf). The Response to Focus Group Analysis Report details the implemented changes.
 
P-12 educators are also involved in the unit assessment system by completing principal, cooperating teacher, and university supervisor surveys every semester or year. These data are analyzed, returned to faculty for review, and used to inform program changes. One of the areas of concern frequently reported in the qualitative analysis of open-ended questions about the programs was a lack of clarity in cooperating teacher and supervisor expectations. This concern was addressed by creating web pages outlining expectations and resources for Field Supervisors, Cooperating Teachers, and Student Teachers.

4) Differences in program and candidate performance data in the off-campus programs compared to programs offered on campus. Are there any differences? If so, what are they?

 

When appropriate, assessment data are disaggregated by on-campus, off-campus, and distance learning programs. Some of our programs are offered in a distance learning format but are not offered on campus. The Post Baccalaureate Elementary licensure program is offered in three locations, and data are disaggregated by location. The Elementary Program (on campus) and the Elementary Program at CUE (Denver, off campus) have different coursework, and data are disaggregated accordingly. The School Psychology Ed.S. program is offered both on and off campus. A review of the data from these programs shows that candidates in off-campus programs are performing at the same level of proficiency as candidates in on-campus programs.

5) Changes that have occurred in the off-campus programs based on data generated by the assessment system. What changes have resulted?

 

As part of a 2005-2006 U.S. Department of Education grant, the College of Education and Behavioral Sciences Dean's Office completed an extensive program review of the elementary initial licensure programs, including the on-campus program, the post-baccalaureate programs (on and off campus), and the Center for Urban Education in Denver. The Unit Head provided additional funding for the evaluation to be continued during the 2006-2007 academic year. The analysis documented that 1st and 2nd year graduates of the off-campus program at the Center for Urban Education rated their perceptions of their teacher proficiency higher than did the on-campus graduates, while the elementary Post Bac graduates rated their proficiency the lowest of the three groups. The results of this study were used, in part, to revise both of the on-campus programs.

6) P-12 partners' access to information about UNC. Where could a school partner obtain information about the quality of UNC candidates' performance?

 

The CEBS website (www.unco.edu/cebs) provides several links to unit-wide, state, and national reports under the “Program Quality” link where data analyses and data-informed unit and program reports are available.

The Associate Dean and NCATE Coordinator compiles data on an annual basis and produces the following reports that are posted on the CEBS website:

Annual Reports on Student Teaching Preparation

Annual Job Fair Recruiter/Employee Reports

Performance Contract Teacher Education Report, a state-required report documenting progress on six standards including increasing diversity in candidates and faculty.

Title II Reports

Annual NCATE Part C Reports

Annual Unit Assessment Reports

 

An electronic portfolio, the CEBS Assessment System and Unit Evaluation Portfolio, implemented in spring 2007, is posted at a public link. The portfolio includes each program's annual program review documents (including the 2007 SPA Reports). http://www.unco.edu/cebs/students/iwebfolio.html