1 Overview
The Dynamic Learning Maps® (DLM®) Alternate Assessment System assesses student achievement in English language arts (ELA), mathematics, and science for students with the most significant cognitive disabilities in grades 3–8 and high school. Due to differences in the development timeline for science, separate technical manuals were prepared for ELA and mathematics (see Dynamic Learning Maps Consortium, 2022a, 2022b). The purpose of the system is to improve academic experiences and outcomes for students with the most significant cognitive disabilities by setting high and actionable academic expectations and providing appropriate and effective supports to educators. Results from the DLM alternate assessment are intended to support interpretations about what students know and are able to do and to support inferences about student achievement in the given subject. Results provide information that can guide instructional decisions as well as information for use with state accountability programs.
The DLM System is developed and administered by Accessible Teaching, Learning, and Assessment Systems (ATLAS), a research center within the University of Kansas’s Achievement and Assessment Institute. The DLM System is based on the core belief that all students should have access to challenging, grade-level or grade-band content. Online DLM assessments give students with the most significant cognitive disabilities opportunities to demonstrate what they know in ways that traditional paper-and-pencil assessments cannot.
A complete technical manual was created after the first operational administration in 2015–2016. After each annual administration, a technical manual update is provided to summarize updated information. The current technical manual provides updates for the 2021–2022 administration. Only sections with updated information are included in this manual. For a complete description of the DLM assessment system, refer to previous technical manuals, including the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017).
1.1 Current DLM Collaborators for Development and Implementation
The DLM System was initially developed by a consortium of state education agencies (SEAs) beginning in 2010 and expanding over the years, with a focus on ELA and mathematics. The development of a DLM science assessment began with a subset of the participating SEAs in 2014. Due to the differences in development timelines, separate technical manuals are prepared for ELA and mathematics and for science. During the 2021–2022 academic year, DLM assessments were available to students in 21 states: Alaska, Arkansas, Colorado, Delaware, District of Columbia, Illinois, Iowa, Kansas, Maryland, Missouri, New Hampshire, New Jersey, New Mexico, New York, North Dakota, Oklahoma, Pennsylvania, Rhode Island, Utah, West Virginia, and Wisconsin. One SEA partner, Colorado, administered assessments only in ELA and mathematics; another SEA partner, District of Columbia, administered assessments only in science. The DLM Governance Board comprises two representatives from the SEA of each member state. Representatives have expertise in special education and state assessment administration. The DLM Governance Board advises on the administration, maintenance, and enhancement of the DLM System.
In addition to ATLAS and governance board states, other key partners include the Center for Literacy and Disability Studies at the University of North Carolina at Chapel Hill and Agile Technology Solutions at the University of Kansas.
The DLM System is also supported by a Technical Advisory Committee (TAC). DLM TAC members possess decades of expertise, including in large-scale assessments, accessibility for alternate assessments, diagnostic classification modeling, and assessment validation. The DLM TAC provides advice and guidance on technical adequacy of the DLM assessments.
1.2 Student Population
The DLM System serves students with the most significant cognitive disabilities, sometimes referred to as students with extensive support needs, who are eligible to take their state’s alternate assessment based on alternate academic achievement standards. This population is, by nature, diverse in learning style, communication mode, support needs, and demographics.
Students with the most significant cognitive disabilities have a disability or multiple disabilities that significantly impact intellectual functioning and adaptive behavior. When adaptive behaviors are significantly impacted, the individual is unlikely to develop the skills to live independently and function safely in daily life. In other words, significant cognitive disabilities impact students in and out of the classroom and across life domains, not just in academic settings. The DLM System is designed for students with these significant instruction and support needs.
The DLM System provides the opportunity for students with the most significant cognitive disabilities to show what they know, rather than focusing on deficits (Nitsch, 2013). These are students for whom general education assessments, even with accommodations, are not appropriate. These students learn academic content aligned to grade-level content standards, but at reduced depth, breadth, and complexity. The content standards are derived from the Framework for K–12 Science Education: Practices, Crosscutting Concepts, and Core Ideas (Framework, National Research Council, 2012) and the Next Generation Science Standards (NGSS, NGSS Lead States, 2013) and are called Essential Elements (EEs). The EEs are the learning targets for elementary, middle school, and high school grade bands and high school Biology. Chapter 2 of the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017) provides a complete description of the content structures for the DLM assessment, including the EEs.
While all states provide additional interpretation and guidance to their districts, three general participation guidelines are considered for a student to be eligible for the DLM alternate assessment.
- The student has a significant cognitive disability, as evident from a review of the student records that indicates a disability or multiple disabilities that significantly impact intellectual functioning and adaptive behavior.
- The student is primarily being instructed using the DLM EEs as content standards, as evidenced by the goals and instruction listed in the IEP for this student that are linked to the enrolled grade-level or grade-band DLM EEs and address knowledge and skills that are appropriate and challenging for this student.
- The student requires extensive direct individualized instruction and substantial supports to achieve measurable gains in the grade- and age-appropriate curriculum. The student (a) requires extensive, repeated, individualized instruction and support that is not of a temporary or transient nature and (b) uses substantially adapted materials and individualized methods of accessing information in alternative ways to acquire, maintain, generalize, demonstrate, and transfer skills across multiple settings.
The DLM System eligibility criteria also provide guidance on specific considerations that are not acceptable for determining student participation in the alternate assessment:
- a disability category or label
- poor attendance or extended absences
- native language, social, cultural, or economic differences
- expected poor performance on the general education assessment
- receipt of academic or other services
- educational environment or instructional setting
- percent of time receiving special education
- English Language Learner status
- low reading or achievement level
- anticipated disruptive behavior
- impact of student scores on accountability system
- administrator decision
- anticipated emotional duress
- need for accessibility supports (e.g., assistive technology) to participate in assessment
1.3 Assessment
The DLM science assessment is based on EEs for science. The EEs are based on the general education grade-banded content standards but exhibit reduced depth, breadth, and complexity. They link the general education content standards to grade band expectations that are at an appropriate level of rigor and challenge for students with significant cognitive disabilities. The EEs specify the academic content standards and delineate three levels of cognitive complexity: Initial, Precursor, and Target. These levels represent knowledge, skills, and understandings in science that support a progression toward mastery associated with the grade band content standards. Assessment design is based on three key relationships between system elements (see Figure 1.1):
- Content standards (Framework, NGSS) and the DLM science EEs for each grade band
- An EE and its associated linkage levels
- Linkage levels and assessment items
These relationships are further explained in Chapter 3 of the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017).
For all aspects of the DLM System, our overarching goal is to align with the latest research from a full range of accessibility lenses (e.g., universal design of assessment, physical and sensory disabilities, special education) to ensure the assessments are accessible for the widest range of students who will be interacting with the content. To demonstrate the assessed skills, students must be able to interact with the assessment in the means most appropriate for them. Thus, the DLM assessments provide different means of student interaction and ensure those means can be used while maintaining the validity of the inferences from and intended uses of the DLM System. Attention to accessibility begins in the earliest stages of assessment and content development and extends from item writing through assessment administration. We seek both content adherence and accessibility by maximizing the quality of the assessment process while preserving evidence of the targeted cognition. Ensuring accessibility for all students supports the validity of the intended uses. The overarching goal of accessible content is reflected in the Theory of Action for the DLM System, which is described in the following section.
1.4 Theory of Action and Interpretive Argument
The Theory of Action that guided the design of the DLM System for science was similar to the Theory of Action for the ELA and mathematics assessments, which was formulated in 2011, revised in December 2013, and revised again in 2019. It expresses the belief that high expectations for students with the most significant cognitive disabilities, combined with appropriate educational supports and diagnostic tools for educators, result in improved academic experiences and outcomes for students and educators.
The process of articulating the Theory of Action started with identifying critical problems that characterize large-scale assessment of students with the most significant cognitive disabilities so that the DLM System design could alleviate these problems. For example, traditional assessment models treat knowledge as unidimensional and are independent of teaching and learning, yet teaching and learning are multidimensional activities and are central to strong educational systems. Also, traditional assessments focus on standardized methods and do not allow various, non-linear approaches for demonstrating learning even though students learn in various and non-linear ways. In addition, using assessments for accountability pressures educators to use assessments as models for instruction with assessment preparation replacing best-practice instruction. Furthermore, traditional assessment systems often emphasize objectivity and reliability over fairness and validity. Finally, negative, unintended consequences for students must be addressed and eradicated.
The DLM Theory of Action expresses a commitment to provide students with the most significant cognitive disabilities access to an assessment system that is capable of validly and reliably evaluating their achievement. Ultimately, students will make progress toward higher expectations, educators will make instructional decisions based on data, educators will hold higher expectations of students, and state and district education agencies will use results for monitoring and resource allocation.
The DLM Governance Board adopted an argument-based approach to assessment validation. The validation process began in 2013, when we worked with governance board members to define the policy uses of DLM assessment results. We followed this with a three-tiered approach, which included specification of 1) a Theory of Action defining the statements in the validity argument that must be in place to achieve the goals of the system; 2) an interpretive argument defining the propositions that must be evaluated to support each statement in the Theory of Action; and 3) validity studies to evaluate each proposition in the interpretive argument.
After identifying these overall guiding principles and anticipated outcomes, specific elements of the DLM Theory of Action were articulated to inform assessment design and to highlight the associated validity arguments. The Theory of Action includes the assessment’s intended effects (long-term outcomes); statements related to design, delivery, and scoring and reporting; and action mechanisms (i.e., connections between the statements; see Figure 1.2). The chain of reasoning in the Theory of Action is demonstrated broadly by the order of the four sections from left to right. Dashed lines represent connections that are present when the optional instructionally embedded assessments are utilized. Design statements serve as inputs to delivery, which informs scoring and reporting, which collectively lead to the long-term outcomes for various stakeholders. The chain of reasoning is made explicit by the numbered arrows between the statements.
1.5 Key Features
Consistent with the Theory of Action, key elements were identified to guide the design of the DLM science alternate assessment. The list of key elements below mirrors the organization of this manual and provides chapter references.
A set of particularly important learning targets, those most frequently addressed in DLM science states, that serve as grade-band content standards for students with significant cognitive disabilities and provide an organizational structure for educators.
The selection of learning targets is crucial to instruction and assessment development; educators must be able to build the knowledge, skills, and understandings required to achieve the content standard expectations for each grade band and subject. This forms a local learning progression toward a specific learning target. The process for selecting learning targets and developing EEs with three linkage levels for assessment are described in Chapter 2 of the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017).
Instructionally relevant testlets that engage the student in science tasks and reinforce learning.
Instructionally relevant assessments consist of activities an educator could use as a springboard for designing instructional activities combined with the systematic gathering and analysis of data. These assessments necessarily take different forms depending on the population of students and the concepts being taught. The development of an instructionally relevant assessment begins by creating items using principles of evidence-centered design and Universal Design for Learning (UDL), then linking related items together into meaningful groups, which the DLM System calls testlets. Item and testlet design are described in Chapter 3 of the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017).
Adaptive assessments that reinforce academic expectations.
The DLM science alternate assessment is designed as an adaptive, computer-delivered assessment that is intended to measure knowledge, skills, and understandings at appropriate levels of complexity for the content. It consists of an end-of-year assessment that meets the requirements of accountability systems and provides detailed descriptions of what students know and can do. Assessment administration is described in Chapter 4 of the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017).
Accessibility by design and alternate testlets.
Accessibility is a prerequisite to validity, or the degree to which an assessment score interpretation is justifiable for a particular purpose and supported by evidence and theory (American Educational Research Association et al., 2014). Therefore, throughout all phases of development, the DLM System was designed with accessibility in mind to provide multiple means of representation, expression, action, and engagement. Students must understand what is being asked in an item or task and have the tools to respond in order to demonstrate what they know and can do (Karvonen et al., 2015). The DLM alternate assessment provides accessible content, accessible delivery via technology, and adaptive routing. Since all students taking an alternate assessment based on alternate academic achievement standards are students with the most significant cognitive disabilities, accessibility supports are universally available. The emphasis is on selecting the appropriate accessibility features and tools for each individual student. Accessibility considerations are described in Chapter 2 (linkage levels), Chapter 3 (testlet development), and Chapter 4 (accessibility during assessment administration) of the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017).
Status and score reporting that is readily actionable.
Due to the unique characteristics of a mastery-based system, DLM assessments require new approaches to psychometric analysis and modeling, with the goal of assuring accurate inferences about student performance relative to the content as it is organized in the EEs and linkage levels. Each EE is designed to address three levels of complexity, called linkage levels. Diagnostic classification modeling is used to determine a student’s likelihood of mastering assessed linkage levels associated with each EE. Providing student mastery information at the linkage level allows for instructional next steps to be readily derived. A student’s overall performance level in the subject is determined by aggregating linkage level mastery information across EEs. This scoring model supports reports that can be immediately used to guide instruction and describe levels of mastery. The DLM modeling approach is described in Chapter 5 of this manual, and score report design is described in Chapter 7 of the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017).
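The aggregation logic described above can be illustrated with a minimal sketch. This is not the operational DLM scoring procedure: the mastery threshold, EE identifiers, probabilities, and cut points below are all hypothetical placeholders, and the diagnostic classification model that produces the mastery probabilities is not shown. The sketch only demonstrates the general idea of thresholding per-linkage-level mastery probabilities and aggregating the resulting mastery counts across EEs into an overall performance level.

```python
# Illustrative sketch only: the threshold, EE identifiers, probabilities,
# and cut points are hypothetical, not operational DLM values.

MASTERY_THRESHOLD = 0.8  # hypothetical posterior-probability cutoff

# Hypothetical posterior mastery probabilities, one per assessed linkage
# level (Initial, Precursor, Target), keyed by a made-up EE identifier.
mastery_probs = {
    "EE.SCI.1": {"Initial": 0.95, "Precursor": 0.88, "Target": 0.41},
    "EE.SCI.2": {"Initial": 0.97, "Precursor": 0.62, "Target": 0.15},
}

def linkage_levels_mastered(probs_by_ee, threshold=MASTERY_THRESHOLD):
    """Count linkage levels whose mastery probability meets the threshold."""
    return sum(
        prob >= threshold
        for levels in probs_by_ee.values()
        for prob in levels.values()
    )

def performance_level(total_mastered, cut_points=(2, 4, 6)):
    """Map a total count of mastered linkage levels to a performance level
    using hypothetical cut points."""
    labels = ["Emerging", "Approaching the Target", "At Target", "Advanced"]
    for i, cut in enumerate(cut_points):
        if total_mastered < cut:
            return labels[i]
    return labels[-1]

total = linkage_levels_mastered(mastery_probs)
print(total, performance_level(total))
```

In this toy example, three of the six linkage levels meet the threshold, which the hypothetical cut points map to the second performance level; the operational system instead derives mastery and cut points from the diagnostic classification model and standard-setting results described in Chapters 5 and 6.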
1.6 Technical Manual Overview
This manual provides evidence collected during the 2021–2022 administration of science assessments.
Chapter 1 provides an overview of the theoretical underpinnings of the DLM science assessment, including a description of the DLM collaborators, the target student population, an overview of the assessment, an introduction to the Theory of Action and interpretive argument, and a summary of contents of the remaining chapters.
Chapter 2 was not updated for 2021–2022. For a full description of the process by which the EEs were developed, including the intended coverage with the Framework (National Research Council, 2012) and the NGSS (NGSS Lead States, 2013), see the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017).
Chapter 3 outlines assessment design and development, including a description of test development activities and external review of content. The chapter then presents evidence of item quality including field test items, operational item data, and differential item functioning.
Chapter 4 describes assessment delivery including updated procedures and data collected in 2021–2022. The chapter presents evidence from the DLM System, including administration time, device usage, adaptive routing, and accessibility support selections; evidence from monitoring assessment administration, including test administration observations and data forensics monitoring; and evidence from test administrators, including user experience with the DLM System and student opportunity to learn.
Chapter 5 describes the updated psychometric model which was implemented in 2021–2022. The chapter demonstrates how the DLM project draws upon a well-established research base in cognition and learning theory and uses psychometric methods that provide feedback about student progress and learning acquisition. This chapter describes the psychometric model that underlies the DLM project and describes the process used to estimate item and student parameters from student test data and evaluate model fit.
Chapter 6 was not updated for 2021–2022; no changes were made to the cut points used in scoring DLM assessments. See the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017) for a description of the methods, preparations, procedures, and results of the original standard-setting meeting and the follow-up evaluation of the impact data. For a description of the changes made to the cut points used in scoring DLM assessments for grade 3 and grade 7 during the 2018–2019 administration, see the 2018–2019 Technical Manual Update—Science (Dynamic Learning Maps Consortium, 2019).
Chapter 7 reports the 2021–2022 operational results, including student participation data. The chapter details the percentage of students achieving at each performance level; subgroup performance by gender, race, ethnicity, and English-learner status; and the percentage of students who showed mastery at each linkage level. Finally, the chapter provides descriptions of changes to score reports and data files during the 2021–2022 administration.
Chapter 8 focuses on reliability evidence and describes the updated simulated retest method for assessing consistency in student results. This includes a description of the methods used to evaluate assessment reliability and a summary of results by linkage level, EE, domain or topic, and subject (overall performance).
Chapter 9 describes updates to the professional development offered across states administering DLM assessments in 2021–2022, including participation rates and evaluation results.
Chapter 10 synthesizes the evidence provided in the previous chapters. It evaluates how the evidence supports the claims in the Theory of Action as well as the long-term outcomes. The chapter ends with a description of our future research and ongoing initiatives for continuous improvement.