evaluation. The numerical outcome of the scoring process determines a pass/no-pass result for the written comprehensive examination.
(See Scoring Guide below for more information on scoring the written comprehensive examination.)
The Comprehensive Examination Evaluation Rubric is built upon a simple multi-point rating scale. The Thinking (content) part comprises 60 percent of the total score, with the Communication part accounting for the remainder; each part lists the characteristics of the written responses evaluated by the examiners.
The score for a written comprehensive examination is obtained by adding the scores assigned to the seven categories in the Thinking and Communication sections.
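The scoring arithmetic described above can be sketched briefly. This is a rough illustration only: the source specifies seven rated categories and a 60 percent weight for Thinking, but the per-category point scale, the number of categories in each section, and the pass cutoff used below are all assumptions.

```python
# Hypothetical sketch of the comprehensive-exam scoring described above.
# Seven categories split between Thinking (60% of the total) and
# Communication; the 0-10 rating scale and the pass cutoff are assumptions.

def total_score(thinking_scores, communication_scores, thinking_weight=0.60):
    """Weighted total on a 0-100 scale from per-category ratings (each 0-10)."""
    thinking_pct = sum(thinking_scores) / (len(thinking_scores) * 10) * 100
    comm_pct = sum(communication_scores) / (len(communication_scores) * 10) * 100
    return thinking_weight * thinking_pct + (1 - thinking_weight) * comm_pct

def passes(score, cutoff=70.0):
    """Pass/no-pass decision; the cutoff value is an assumption."""
    return score >= cutoff

# Example: four Thinking categories and three Communication categories.
score = total_score([8, 7, 9, 8], [7, 8, 6])
```

The point of the weighting is that a strong Communication score cannot compensate for weak content: Thinking contributes the larger share of the total by design.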
The extent to which questions test different levels of learning will depend on the subject, context, and level of the course or module. For example, a question designed to test the analytical skills of first-year undergraduates may test only knowledge and comprehension in third-year students. An online journal article that addresses some of these issues and offers suggestions for overcoming them may be found at . There are also different interpretations of the terms analysis, synthesis, and evaluation. Bloom's taxonomy can serve as a useful framework but may need to be refined to meet the needs of a specific assessment or course.
The ultimate goal of reading instruction at the secondary level is comprehension—gaining meaning from text. A number of factors contribute to students’ not being able to comprehend text. Comprehension can break down when students have problems with one or more of the following: (a) decoding words, including structural analysis; (b) reading text with adequate speed and accuracy (fluency); (c) understanding the meanings of words; (d) relating content to prior knowledge; (e) applying comprehension strategies; and (f) monitoring understanding (; ; ).
Our educational system expects that secondary students are able to decode fluently and comprehend material with challenging content (). Some struggling secondary readers, however, lack sufficient advanced decoding, fluency, vocabulary, and comprehension skills to master the complex content ().
Using the examples given here, an attempt is made to illustrate a logical level-by-level progression from the assessment of lower-order skills through to higher-order skills. These examples, with the notable exception of synthesis, make use of MCQs in order to keep the explanation as simple as possible and to show how higher-order learning can be assessed in this way. At the analysis level, an example of an assertion-reason question type is also given. A brief description of the skills to be assessed, and of question words often used to test those skills, is also provided.
Although there is still debate over exactly how many and what words are essential for students to learn so as to become skillful readers, there is no question that skillful readers learn words by the thousands. There is also no doubt that without instructional intervention, the vocabulary gap between more and less skillful readers continues to widen over time.
COMPREHENSION: grasping meaning and interpreting in own words
Question Words: interpret, discuss, predict, classify, summarise
Whilst objective testing, by definition, necessitates predetermined answers, this does not mean it is limited to the lower levels of knowledge, comprehension, and application. A predetermined answer can range from a simple, single response to far more complex arrays or combinations of responses that comprise the only appropriate answer to a given question. However, this depends on the quality of the questions and the degree of creativity applied in their design, rather than on maintaining the status quo of traditional assessment methods. Although objective tests can assess a wide range of skills, knowledge, and abilities, designing test questions to assess higher-order skills can be time consuming and requires skill and creativity.
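To illustrate the point about complex predetermined answers, here is a minimal sketch of how a multiple-response objective item might be marked automatically. The option labels and the all-or-nothing scoring rule are hypothetical, not drawn from the source:

```python
# Hypothetical sketch: an objective item whose predetermined answer is a
# combination of options rather than a single response. The option labels
# and the all-or-nothing marking rule are assumptions for illustration.

def mark_item(answer_key, response):
    """All-or-nothing marking: credit only if the response matches the full key."""
    return 1 if set(response) == set(answer_key) else 0

# A multiple-response item: both B and D must be selected for credit.
key = {"B", "D"}
full = mark_item(key, {"B", "D"})   # complete combination earns the mark
partial = mark_item(key, {"B"})     # a partial combination earns nothing
```

Even with a fixed key, such combination items can probe more than recall, since the candidate must judge which responses belong together.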
In 2001, a group of cognitive psychologists, curriculum theorists, instructional researchers, and testing and assessment specialists published a revision of Bloom's Taxonomy with the title . This title shifts attention away from the somewhat static notion of "educational objectives" (in Bloom's original title) toward a more dynamic conception of classification.
"The basic premise of pragmatism is that questions posed by speculative metaphysical propositions can often be answered by determining what the practical consequences of the acceptance of a particular metaphysical proposition are in this life. Practical consequences are taken as the criterion for assessing the relevance of all statements or ideas about truth, norm and hope."
This article reports a synthesis of intervention studies conducted between 1994 and 2004 with older students (Grades 6–12) with reading difficulties. Interventions addressing decoding, fluency, vocabulary, and comprehension were included if they measured the effects on reading comprehension. Twenty-nine studies were located and synthesized. Thirteen studies met criteria for a meta-analysis, yielding an effect size (ES) of 0.89 for the weighted average of the difference in comprehension outcomes between treatment and comparison students. Word-level interventions were associated with ES = 0.34 in comprehension outcomes between treatment and comparison students. Implications for comprehension instruction for older struggling readers are described.
This question requires prior knowledge and understanding of the concept of pragmatism. The paragraph, seen in this light, contains one word that vitiates its validity, and the student is tested on his/her ability to analyse the paragraph to see whether it fits the accepted definition of pragmatism. With this in mind, 2. is correct. Option 1. would degrade the paragraph further, while 3. and 4. would simply substitute acceptable synonyms. Note that this question does not address Level 6 (Evaluation), as the student is not asked to pass a value judgement on the text.