Differences between the AHM and the rule space method
The AHM differs from Tatsuoka's Rule Space Method (RSM) in its assumption of dependencies among the attributes within the cognitive model. In other words, the AHM was derived from the RSM by assuming that some or all skills may be represented in a hierarchical order. Modeling cognitive attributes using the AHM necessitates the specification of a hierarchy outlining the dependencies among the attributes. As such, the attribute hierarchy serves as a cognitive model of task performance designed to represent the interrelated cognitive processes required by examinees to solve test items. This assumption better reflects the characteristics of human cognition, because cognitive processes usually do not work in isolation but function within a network of interrelated competencies and skills. In contrast, the RSM makes no assumptions regarding dependencies among the attributes. This difference has led to the development of both IRT-based and non-IRT-based psychometric procedures for analyzing test item responses using the AHM. The AHM also differs from the RSM with respect to the identification of the cognitive attributes and the logic underlying the diagnostic inferences made from the statistical analysis.
Identification of the cognitive attributes
The RSM uses a post-hoc approach to the identification of the attributes required to successfully solve each item on an existing test. In contrast, the AHM uses an a priori approach to identifying the attributes and specifying their interrelationships in a cognitive model.
Diagnostic inferences from statistical analysis
The RSM uses statistical pattern classification, where examinees' observed response patterns are matched to pre-determined response patterns that each correspond to a particular cognitive or knowledge state. Each state represents a set of correct and incorrect rules used to answer test items. The focus of the RSM is the identification of erroneous rules or misconceptions. The AHM, on the other hand, uses statistical pattern recognition, where examinees' observed response patterns are compared to response patterns that are consistent with the attribute hierarchy. The purpose of statistical pattern recognition is to identify the attribute combinations that the examinee is likely to possess. Hence, the AHM does not identify incorrect rules or misconceptions as the RSM does.
Principled test design
The AHM uses a construct-centered approach to test development and analysis. A construct-centered approach emphasizes the central role of the construct in directing test development activities and analysis. The advantage of this approach is that the inferences made about student performance are firmly grounded in the specified construct. Principled test design encompasses three broad stages: (1) cognitive model development, (2) test development, and (3) psychometric analysis. Cognitive model development comprises the first stage in the test design process. During this stage, the cognitive knowledge, processes, and skills are identified and organized into an attribute hierarchy or cognitive model. This stage also encompasses validation of the cognitive model prior to the test development stage. Test development comprises the second stage. During this stage, items are created to measure each attribute within the cognitive model while also maintaining any dependencies modeled among the attributes. Psychometric analysis comprises the third stage. During this stage, the fit of the cognitive model relative to observed examinee responses is evaluated to ascertain the appropriateness of the model for explaining test performance. Examinee test item responses are then analyzed, and diagnostic skill profiles are created highlighting examinee cognitive strengths and weaknesses.
Cognitive model development
What is a cognitive model?
An AHM analysis must begin with the specification of a cognitive model of task performance. A cognitive model in educational measurement refers to a "simplified description of human problem solving on standardized educational tasks, which helps to characterize the knowledge and skills students at different levels of learning have acquired and to facilitate the explanation and prediction of students' performance". These cognitive skills, each conceptualized as an attribute in the AHM framework, are specified at a small grain size in order to generate specific diagnostic inferences about test performance. Attributes include the different procedures, skills, and/or processes that an examinee must possess to solve a test item. These attributes are then structured using a hierarchy so that the ordering of the cognitive skills is specified. The cognitive model can be represented by various hierarchical structures. There are four general forms of hierarchical structure that can easily be expanded and combined to form increasingly complex networks of hierarchies, where the cognitive complexity corresponds to the nature of the problem-solving task. The four hierarchical forms are: (a) linear, (b) convergent, (c) divergent, and (d) unstructured.
How are cognitive models created and validated?
Theories of task performance can be used to derive cognitive models of task performance in a subject domain. However, the availability of such theories and cognitive models in education is limited. Therefore, other means are used to generate cognitive models. One method is the use of a
Why is the accuracy of the cognitive model important?
An accurate cognitive model is crucial for two reasons. First, a cognitive model provides the interpretative framework for linking test score interpretations to cognitive skills. That is, the test developer is in a better position to make defensible claims about student knowledge, skills, and processes that account for test performance. Second, a cognitive model provides a link between cognitive and learning psychology with instruction. Based on an examinee's observed response pattern, detailed feedback about an examinee's cognitive strengths and weaknesses can be provided through a score report. This diagnostic information can then be used to inform instruction tailored to the examinee, with the goals of improving or remediating specific cognitive skills.
An example of a cognitive model
The following hierarchy is an example of a cognitive model of task performance for the knowledge and skills in the areas of ratio, factoring, function, and substitution (called the Ratios and Algebra hierarchy) [Gierl, M. J., Wang, C., & Zhou, J. (2008). Using the attribute hierarchy method to make diagnostic inferences about examinees' cognitive skills in algebra on the SAT. Journal of Technology, Learning, and Assessment, 6(6). Retrieved 24 October 2008, from http://www.jtla.org]. This hierarchy is divergent and composed of nine attributes, which are described below. If the cognitive model is assumed to be true, then an examinee who has mastered attribute A3 is assumed to have mastered the attributes below it, namely attributes A1 and A2. Conversely, if an examinee has mastered attribute A2, then it is expected that the examinee has mastered attribute A1 but not A3.
Cognitive model representation
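As an informal illustration of how an attribute hierarchy can be encoded numerically (using a hypothetical four-attribute linear hierarchy A1 → A2 → A3 → A4 rather than the actual nine-attribute Ratios and Algebra hierarchy), the direct prerequisite relations can be written as a binary adjacency matrix, from which a reachability matrix of all direct and indirect prerequisites follows:

```python
import numpy as np

# Hypothetical linear hierarchy of four attributes: A1 -> A2 -> A3 -> A4.
# Adjacency matrix: entry (j, k) = 1 if attribute j is a direct
# prerequisite of attribute k.
A = np.array([
    [0, 1, 0, 0],   # A1 -> A2
    [0, 0, 1, 0],   # A2 -> A3
    [0, 0, 0, 1],   # A3 -> A4
    [0, 0, 0, 0],
])

# Reachability matrix: entry (j, k) = 1 if attribute j is a direct or
# indirect prerequisite of attribute k (each attribute reaches itself).
# Computed from Boolean powers of (A + I).
n = A.shape[0]
R = (np.linalg.matrix_power(A + np.eye(n, dtype=int), n) > 0).astype(int)
print(R)
```

For the Ratios and Algebra hierarchy, the same computation would simply use a 9 × 9 adjacency matrix reflecting its divergent branches.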
The Ratio and Algebra attribute hierarchy can also be expressed in matrix form. To begin, the direct relationship among the attributes is specified by a binary adjacency matrix.
Test development
Role of the cognitive model in item development
The cognitive model in the form of an attribute hierarchy has direct implications for item development. Items that measure each attribute must maintain the hierarchical ordering of the attributes as specified by the cognitive model while also measuring increasingly complex cognitive processes. These item types may be in either
Approach to item development
The attributes in the cognitive model are specified at a fine grain size in order to yield a detailed cognitive skill profile of the examinee's test performance. This necessitates the creation of many items to measure each attribute in the hierarchy. For computer-based tests, automated item generation (AIG) is a promising method for generating multiple items "on the fly" that have similar form and psychometric properties using a common template.
Example of items aligned to the attributes in a hierarchy
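A minimal sketch of what template-based item generation can look like: a common stem with numeric slots is filled under simple constraints to produce many parallel items. The template, number ranges, and function name below are hypothetical, not taken from an operational AIG system.

```python
import random

random.seed(1)  # reproducible illustration

def generate_item():
    """Fill a hypothetical arithmetic item template with random values."""
    a = random.randint(2, 9)          # coefficient
    s = random.randint(2, 9)          # intended value of (t + u), the key
    b = random.randint(1, 9)          # constant term
    stem = f"If {a}(t + u) + {b} = {a * s + b}, what is the value of t + u?"
    return stem, s

# Generate three parallel items sharing form and difficulty.
for _ in range(3):
    stem, key = generate_item()
    print(stem, "->", key)
```

Because every generated item exercises the same operations (subtract the constant, then divide by the coefficient), the items plausibly measure the same attribute while varying on the surface.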
Referring back to the pictorial representation of the Ratio and Algebra hierarchy, an item can be constructed to measure the skills described in each of the attributes. For example, attribute A1 includes the most basic arithmetic operation skills, such as addition, subtraction, multiplication, and division of numbers. An item that measures this skill could be the following: examinees are presented with the algebraic expression 4(t + u) + 3 = 19 and asked to solve for (t + u). For this item, examinees need to subtract 3 from 19 and then divide 16 by 4. Attribute A2 represents knowledge about the property of factors. An example of an item that measures this attribute is "If (p + 1)(t – 3) = 0 and ''p'' is positive, what is the value of t?" The examinee must know the property that the value of at least one factor must be zero if the product of multiple factors is zero. Once this property is recognized, the examinee would be able to recognize that because p is positive, (t – 3) must be zero to make the value of the whole expression zero, which finally yields the value of 3 for t. To answer this item correctly, the examinee must have mastered both attributes A1 and A2. Attribute A3 represents not only knowledge of factoring (i.e., attribute A2), but also the skill of applying the rules of factoring. An item measuring this attribute can present two related algebraic expressions: only after the examinee factors the second expression into the product of the first does the calculation of the second expression's value become apparent. To answer this item correctly, the examinee must have mastered attributes A1, A2, and A3.
Psychometric analysis
During this stage, statistical pattern recognition is used to identify the attribute combinations that the examinee is likely to possess, based on the observed examinee responses relative to the expected response patterns derived from the cognitive model.
Evaluating model-data fit
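The expected response patterns referred to here can be derived directly from the attribute requirements of the items. A minimal Python sketch, under the usual AHM assumption that an examinee answers an item correctly exactly when they possess every attribute the item requires; the items, attributes, and hierarchy below are hypothetical:

```python
# q[j] = set of attributes required by item j, for three items on a
# hypothetical linear hierarchy A1 -> A2 -> A3 (attributes indexed 0..2).
q = [{0}, {0, 1}, {0, 1, 2}]

# Attribute patterns consistent with a linear hierarchy: possessing an
# attribute implies possessing all attributes below it.
attribute_patterns = [set(range(k)) for k in range(4)]  # {}, {A1}, {A1,A2}, ...

# Expected response pattern: item j is answered correctly iff the
# examinee's attribute set covers item j's requirements.
for alpha in attribute_patterns:
    expected = [1 if req <= alpha else 0 for req in q]
    print(sorted(alpha), expected)
```

Each consistent attribute pattern thus maps to exactly one expected response pattern, which observed responses are later compared against.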
Prior to any further analysis, it must be established that the specified cognitive model accurately reflects the cognitive attributes used by the examinees. It is expected that there will be discrepancies, or slips, between the observed response patterns generated by a large group of examinees and the expected response patterns. The fit of the cognitive model relative to the observed response patterns obtained from examinees can be evaluated using the Hierarchical Consistency Index (''HCI''). The ''HCI'' evaluates the degree to which the observed response patterns are consistent with the attribute hierarchy. The ''HCI'' for examinee ''i'' is given by:

HCI_i = 1 − (2 Σ_j Σ_{g∈S_j} X_ij (1 − X_ig)) / N_ci

where the outer sum runs over the items ''j'' that examinee ''i'' answered correctly, X_ij is examinee ''i'''s score (1 or 0) on item ''j'', S_j is the set of items requiring a subset of the attributes measured by item ''j'', and N_ci is the total number of comparisons made for the items answered correctly by examinee ''i''. The ''HCI'' ranges from −1 to +1, with values close to +1 indicating good model-data fit.
Why is model-data fit important?
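A short Python sketch of the HCI computation for a single examinee, following the formulation of Cui and Leighton (2009) in which each correctly answered item is compared against the items requiring a subset of its attributes; the items, attribute requirements, and scores below are hypothetical:

```python
def hci(x, q):
    """Hierarchy Consistency Index for one examinee.

    x: list of 0/1 item scores.
    q: list of attribute sets, q[j] = attributes required by item j.
    """
    misses = 0
    comparisons = 0
    for j, xj in enumerate(x):
        if xj != 1:
            continue                       # only correctly answered items
        for g, xg in enumerate(x):
            if g == j:
                continue
            if q[g] <= q[j]:               # item g requires a subset of item j's attributes
                comparisons += 1
                misses += 1 - xg           # a miss: the "easier" item is wrong
    if comparisons == 0:
        return 1.0
    return 1 - 2 * misses / comparisons

# Three items on a hypothetical linear hierarchy A1 -> A2 -> A3.
q = [{0}, {0, 1}, {0, 1, 2}]
print(hci([1, 1, 1], q))  # perfectly consistent pattern -> 1.0
print(hci([0, 0, 1], q))  # hardest item right, easier items wrong -> -1.0
```

The index is bounded between −1 and +1; patterns consistent with the hierarchy score near +1.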
Obtaining good model-data fit provides additional evidence to validate the specified attribute hierarchy, which is required before proceeding to determine an examinee's attribute mastery. If the data do not fit the model, various reasons may account for the large number of discrepancies, including a misspecification of the attributes, an incorrect ordering of attributes within the hierarchy, items not measuring the specified attributes, and/or a model that does not reflect the cognitive processes used by the given sample of examinees. Therefore, the cognitive model should be correctly defined and closely aligned with the observed response patterns in order to provide a substantive framework for making inferences about a specific group of examinees' knowledge and skills.
Estimating attribute probabilities
Once it is established that the model fits the data, the attribute probabilities can be calculated. The use of attribute probabilities is important in the psychometric analyses of the AHM because these probabilities provide examinees with specific information about their attribute-level performance as part of the diagnostic reporting process. To estimate the probability that examinees possess specific attributes, given their observed item response pattern, an artificial neural network can be used.
Brief description of a neural network
The neural network is a type of parallel-processing architecture that transforms any stimulus received by the input units (i.e., stimulus units) into a signal for the output units (i.e., response units) through a series of mid-level hidden units. Each unit in the input layer is connected to each unit in the hidden layer and, in turn, to each unit in the output layer. Generally speaking, a neural network operates as follows. To begin, each cell of the input layer receives a value (0 or 1) corresponding to the response values in the exemplar vector. Each input cell then passes the value it receives to every hidden cell. Each hidden cell forms a linearly weighted sum of its inputs and transforms the sum using an activation function, typically the logistic function.
Specification of the neural network
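The forward pass just described can be sketched in a few lines: a 0/1 response pattern enters the input layer, each hidden unit forms a weighted sum and squashes it with the logistic function, and the output units do the same to yield one value per attribute. The layer sizes and weights below are arbitrary illustrative values, not estimates from an actual AHM analysis.

```python
import numpy as np

def logistic(z):
    """Logistic squashing function, mapping any real value into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_items, n_hidden, n_attributes = 4, 3, 2   # hypothetical sizes

W1 = rng.normal(size=(n_hidden, n_items))       # input -> hidden weights
W2 = rng.normal(size=(n_attributes, n_hidden))  # hidden -> output weights

x = np.array([1, 1, 0, 1])                  # observed 0/1 response pattern
hidden = logistic(W1 @ x)                   # hidden-layer activations
attribute_probs = logistic(W2 @ hidden)     # one value per attribute
print(attribute_probs)
```

In an actual analysis, the weight matrices would be learned from the expected response patterns and their associated attribute patterns, so that the outputs can be read as attribute probabilities.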
Calculation of attribute probabilities begins by presenting the neural network with both the expected examinee response patterns generated in Stage 1 and their associated attribute patterns, which are derived from the cognitive model (i.e., the transpose of the Qr matrix), until the network learns each association. The result is a set of weight matrices that is used to calculate the probability that an examinee has mastered a particular cognitive attribute based on their observed response pattern. An attribute probability close to 1 indicates that the examinee has likely mastered the cognitive attribute, whereas a probability close to 0 indicates that the examinee has likely not mastered it.
Reporting the results
The importance of the reporting process
Score reporting serves a critical function as the interface between the test developer and a diverse audience of test users. A score report must include detailed information, often technical in nature, about the meaning of the results and the interpretations users may make of them. The Standards for Educational and Psychological Testing clearly define the role of test developers in the reporting process. Standard 5.10 states: "When test score information is released to students, parents, legal representatives, teachers, clients, or the media, those responsible for testing programs should provide appropriate interpretations. The interpretations should describe in simple language what the test covers, what the scores mean, and how the scores will be used."
Reporting cognitive diagnostic results using the AHM
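Translating attribute probabilities into the reporting categories used in this section (non-mastery, partial mastery, and mastery, with cutoffs at 0.35 and 0.70) can be sketched as follows; the probability values are hypothetical:

```python
def mastery_level(p):
    """Map an attribute probability to a score-report performance level
    using cutoffs of 0.35 and 0.70."""
    if p <= 0.35:
        return "non-mastery"
    if p <= 0.70:
        return "partial mastery"
    return "mastery"

# Hypothetical attribute probabilities for one examinee.
probs = {"A1": 0.92, "A2": 0.18, "A3": 0.55}
report = {attr: mastery_level(p) for attr, p in probs.items()}
print(report)
```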
A key advantage of the AHM is that it supports individualized diagnostic score reporting using the attribute probability results. The score reports produced by the AHM include not only a total score but also detailed information about which cognitive attributes were measured by the test and the degree to which the examinee has mastered them. This diagnostic information is directly linked to the attribute descriptions, individualized for each student, and easily presented. Hence, these reports provide specific diagnostic feedback that can direct instructional decisions. To demonstrate how the AHM can be used to report test scores and provide diagnostic feedback, a sample report is presented next. In the example to the right, the examinee mastered attributes A1 and A4 to A6. Three performance levels were selected for reporting attribute mastery: non-mastery (attribute probability value between 0.00 and 0.35), partial mastery (between 0.36 and 0.70), and mastery (between 0.71 and 1.00). The results in the score report reveal that the examinee has clearly mastered four attributes: A1 (basic arithmetic operations), A4 (skills required for substituting values into algebraic expressions), A5 (the skill of mapping a graph of a familiar function to its corresponding function), and A6 (abstract properties of functions). The examinee has not mastered the skills associated with the remaining five attributes.
Implications of AHM for cognitive diagnostic assessment
Integration of assessment, instruction, and learning
The rise in popularity of cognitive diagnostic assessments can be traced to two sources: assessment developers and assessment users [Huff, K., & Goodman, D. P. (2007). The demand for cognitive diagnostic assessment. In J. P. Leighton & M. J. Gierl (Eds.), Cognitive diagnostic assessment for education: Theory and applications (pp. 19–60). Cambridge, UK: Cambridge University Press]. Assessment developers see great potential for cognitive diagnostic assessments to inform teaching and learning by changing the way current assessments are designed. Assessment developers also argue that to maximize the educational benefits of assessment, curriculum, instruction, and assessment design should be aligned and integrated. Assessment users, including teachers and other educational stakeholders, are increasingly demanding relevant results from educational assessments. This requires assessments to be aligned with classroom practice in order to be of maximum instructional value. The AHM, as a form of cognitive diagnostic assessment, addresses the path between curriculum and assessment design by identifying the knowledge, skills, and processes actually used by examinees to solve problems in a given domain. These cognitive attributes, organized into a cognitive model, become not only a representation of the construct of interest but also the cognitive test blueprint. Items can then be constructed to systematically measure each attribute combination within the cognitive model. The path between assessment design and instruction is also addressed by providing specific, detailed feedback about an examinee's performance in terms of the cognitive attributes mastered. This cognitive diagnostic feedback is provided to students and teachers in the form of a score report. The skills mastery profile, along with adjunct information such as exemplar test items, can be used by the teacher to focus instructional efforts in areas where the student requires additional assistance.
Assessment results can also provide feedback to the teacher on the effectiveness of instruction in promoting the learning objectives. The AHM is a promising method for cognitive diagnostic assessment. A principled test design approach that integrates cognition into test development can support stronger inferences about how students actually think and solve problems. With this knowledge, students can be provided with additional information to guide their learning, leading to improved performance on future educational assessments and problem-solving tasks.
Suggested reading
Leighton, J. P., & Gierl, M. J. (Eds.). (2007). Cognitive diagnostic assessment for education: Theory and applications. Cambridge, UK: Cambridge University Press.