Dynamic Learning Maps
Acknowledgements
1 Overview
1.1 Current DLM Collaborators for Development and Implementation
1.2 Student Population
1.3 Assessment
1.4 Theory of Action and Interpretive Argument
1.5 Key Features
1.6 Technical Manual Overview
2 Essential Element Development
3 Assessment Design and Development
3.1 Test Development Procedures
3.1.1 Testlet and Item Writing
3.1.2 External Reviews
3.2 Evidence of Item Quality
3.2.1 Field Testing
3.2.2 Operational Assessment Items for 2021–2022
3.2.3 Evaluation of Item-Level Bias
3.3 Conclusion
4 Assessment Delivery
4.1 Key Features of the Science Assessment Model
4.1.1 Assessment Administration Windows
4.2 Evidence From the DLM System
4.2.1 Administration Time
4.2.2 Device Usage
4.2.3 Blueprint Coverage
4.2.4 Adaptive Delivery
4.2.5 Administration Incidents
4.2.6 Accessibility Support Selections
4.3 Evidence From Monitoring Assessment Administration
4.3.1 Test Administration Observations
4.3.2 Data Forensics Monitoring
4.4 Evidence From Test Administrators
4.4.1 User Experience With the DLM System
4.4.2 Opportunity to Learn
4.5 Conclusion
5 Modeling
5.1 Psychometric Background
5.2 Essential Elements and Linkage Levels
5.3 Overview of the DLM Modeling Approach
5.3.1 Model Specification
5.3.2 Model Calibration
5.3.3 Estimation of Student Mastery Probabilities
5.4 Model Evaluation
5.4.1 Model Fit
5.4.2 Classification Accuracy
5.5 Calibrated Parameters
5.5.1 Probability of Masters Providing Correct Response
5.5.2 Probability of Nonmasters Providing Correct Response
5.5.3 Item Discrimination
5.5.4 Base Rate Probability of Class Membership
5.6 Conclusion
6 Standard Setting
7 Reporting and Results
7.1 Student Participation
7.2 Student Performance
7.2.1 Overall Performance
7.2.2 Subgroup Performance
7.3 Mastery Results
7.3.1 Mastery Status Assignment
7.3.2 Linkage Level Mastery
7.4 Data Files
7.5 Score Reports
7.5.1 Individual Student Score Reports
7.6 Quality-Control Procedures for Data Files and Score Reports
7.7 Conclusion
8 Reliability
8.1 Background Information on Reliability Methods
8.2 Methods of Obtaining Reliability Evidence
8.2.1 Reliability Sampling Procedure
8.3 Reliability Evidence
8.3.1 Linkage Level Reliability Evidence
8.3.2 Conditional Reliability Evidence by Linkage Level
8.3.3 Essential Element Reliability Evidence
8.3.4 Domain and Topic Reliability Evidence
8.3.5 Subject Reliability Evidence
8.3.6 Performance Level Reliability Evidence
8.4 Conclusion
9 Training and Professional Development
9.1 Updates to Required Test Administrator Training
9.2 Instructional Professional Development
9.2.1 Professional Development Participation and Evaluation
9.3 Conclusion
10 Validity Evidence
10.1 Validity Evidence Summary
10.2 Continuous Improvement
10.2.1 Improvements to the Assessment System
10.2.2 Future Research
11 References
Appendix A Supplemental Information About Assessment Design and Development
A.1 Differential Item Functioning Plots
A.1.1 Uniform Model
A.1.2 Combined Model
2021–2022 Technical Manual Update
Science
December 2022
Copyright © 2022 Accessible Teaching, Learning, and Assessment Systems (ATLAS)