As described in previous sections, the curriculum frameworks are built upon the foundation of rigorous standards and high-quality curriculum materials. Section 3 discussed how this foundation informs high-quality instruction. This section focuses on how that foundation should also ensure high-quality learning through assessment. When properly designed and implemented, a comprehensive assessment system provides multiple perspectives and sources of data to help educators understand the full range of student achievement. Assessment information may be used to evaluate educational programs and practices and to make informed decisions related to curriculum, instruction, intervention, professional learning, and the allocation of resources to better meet students’ needs.
Assessment information also informs educators and families about student performance and its relationship to ongoing instructional practice. Various types of assessments are required because they provide different types of information about performance. A comprehensive assessment system must be appropriate for the student population and address the assessment needs of students at all grade levels, including those who speak languages other than English, who are differently-abled, who struggle, or who excel. Most multilingual learners and differently-abled students participate in typical statewide and classroom-based assessment systems for ELA/Literacy.
Student learning is maximized with an aligned system of standards, curriculum, instruction, and assessment. When assessment is aligned with instruction, both students and teachers benefit. Students are more likely to learn because instruction is focused and because they are assessed on what they are taught. Teachers are also able to focus, making the best use of their time. Assessments are only useful if they provide information that is used to support and improve student learning.
Assessment inspires us to ask these hard questions:
- "Are we teaching what we think we are teaching?"
- "Are students learning what we want them to learn?"
- "Is there a way to teach the subject and student better, thereby promoting better learning?"
Section 4 will orient you to the purposes and types of assessment; the concepts of validity, reliability, and fairness in assessment; factors to consider when selecting or developing assessments; and considerations when assessing differently-abled students or multilingual learners.
Assessment has an important and varied role in public education. Assessments are used to inform parents about their children’s progress and overall achievement. Teachers use assessment to make decisions about instruction, assign grades, and determine eligibility for special services and program placement. They are used by evaluators to measure program and instructional effectiveness. They are also used to track progress toward school and LEA goals set by the state in accordance with federal regulations. When it comes to assessment of student learning, the why should precede the how because assessments should be designed and administered with the purpose in mind. The vast majority of assessments are used for one of three general purposes: to inform and improve instruction, to screen/identify (for interventions), and to measure outcomes.
When assessments are used to inform instruction, the data typically remain internal to the classroom. They are used to provide specific and ongoing information on a student’s progress, strengths, and weaknesses, which can be used by teachers to plan and/or differentiate daily instruction. This daily process is most typically referred to as formative assessment. However, interim and summative assessments can also be used to impact instructional decision-making, though not in the short-cycle timeline that characterizes formative assessments. Assessments such as unit tests and even state assessment data can be used to reflect on and inform future instructional decisions.
When assessments are used to screen/identify, the data also typically remain internal to the school or LEA. Assessments that are used primarily to screen are administered to the total population of students and generally assess key skills that are indicators of students’ larger skill set, rather than an in-depth analysis of the standards. They should be relatively quick to administer and easy to score. Assessments used for screening purposes can inform decisions about the placement of groups of students within an academic program structure or individual students’ needs for academic interventions or special programs. When needed, screening assessments are followed by diagnostic assessments to determine if more targeted intervention is necessary or if a student has a disability.
Finally, when assessments are used to measure outcomes, data are communicated to parties external to the classroom, whether it is a unit test that is entered into a grade book and communicated to parents or a standardized test that is reported to the state. Assessments used to measure outcomes attempt to measure what has been learned so that it can be quantified and reported. No single type of assessment, and certainly no single assessment, can serve all purposes.
From informal questioning to final exams, there are countless ways teachers may determine what students know, understand, and are able to do. The instructional cycle generally follows a pattern of determining where students are with respect to the standards being taught before instruction begins, monitoring their progress as the instruction unfolds, and then determining what knowledge and skills are learned as a result of instruction. Assessments, based on when they are administered relative to instruction, can be categorized as formative, summative, or interim.
The primary purpose of formative assessment is to inform instruction. As an instructional practice, it is described more fully in Section 3 of this framework. The Council of Chief State School Officers (CCSSO, 2018, 2021) updated its definition of formative assessment in 2021 and defines formative assessment in the following way:
Formative assessment is a planned, ongoing process used by all students and teachers during learning and teaching to elicit and use evidence of student learning to improve student understanding of intended disciplinary learning outcomes and support students to become self-directed learners.
Effective use of the formative assessment process requires students and teachers to integrate and embed the following practices in a collaborative and respectful classroom environment:
- Clarifying learning goals and success criteria within a broader progression of learning;
- Eliciting and analyzing evidence of student thinking;
- Engaging in self-assessment and peer feedback;
- Providing actionable feedback; and
- Using evidence and feedback to move learning forward by adjusting learning strategies, goals, or next instructional steps.
Additionally, formative assessment is integrated throughout instruction with the purpose of gathering evidence to adjust teaching, often in real time, to address student needs and capitalize on student strengths (Black and Wiliam, 2010). There is ample evidence that this process produces “significant and often substantial learning gains” (Black and Wiliam, 2010), and these gains are often most pronounced for low-achieving students. Eliciting evidence of student thinking as part of the formative assessment process should take varied forms. Examples of strategies for gathering evidence of learning during the formative assessment process include exit slips, student checklists, one-sentence summaries, misconception checks (Alber, 2014), targeted questioning sequences, conferences, and observations.
Formative assessment becomes particularly powerful when it involves a component that allows for student self-assessment. When teachers clearly articulate learning goals, provide criteria for proficiency in meeting those goals, and orchestrate a classroom dialogue that unveils student understandings, students are positioned to monitor their own learning. This self-knowledge, coupled with teacher support based on formative assessment data, can result in substantive learning gains (Black and Wiliam, 2010).

Learner involvement in monitoring progress on their goals strengthens engagement for all students but is especially important for differently-abled students. Specific feedback comparing a student’s achievement against the standard, rather than only against other students, increases personal performance. With specific feedback, learners should then have the opportunity to resubmit some items in response. Opportunities for students to monitor their own progress and make improvements based on specific feedback connect to the Social Emotional Learning competency of Self-Management (learning to manage and express emotions appropriately, control impulses, overcome challenges, set goals, and persevere) and to Self-Awareness Learning Standard 1B (“I can identify when help is needed and who can provide it”). Self-awareness means students understand their areas of strength as well as their areas of need, a skill that is strengthened as they monitor their progress. By incorporating Universal Design for Learning guidelines (CAST, 2018), educators can provide assessment feedback that is relevant, constructive, accessible, specific, and timely, with a focus on moving the learner toward mastery, which is more productive in promoting engagement. The assessment process creates a continuous feedback loop that systematically checks for progress and identifies strengths and weaknesses to improve learning gains during instruction.
Summative assessments are formal assessments that are given after a substantial block of instructional time, for example at the end of a unit, term, course, or academic year. Interim assessments are administered during instruction and, depending on the type of interim assessment, can be used to screen students, inform instruction, or measure outcomes. By design and purpose, high-quality summative and interim assessments are less nimble in responding to student strengths and needs than formative assessments. They provide an overall picture of achievement and can be useful in predicting student outcomes/supports or evaluating the need for pedagogical or programmatic changes. These assessments should be written to include a variety of item types (e.g., selected response, constructed response, extended response, performance tasks) and represent the full range of Webb’s Depth of Knowledge (DOK). To maximize the potential for gathering concrete evidence of student learning as facilitated by curriculum and instruction, educators should routinely draw upon the assessments provided within their high-quality curriculum materials (HQCMs) (RIDE, 2012).
State assessments are summative assessments that are given annually and provide a valuable “snapshot” to educators and families, helping us see how we are doing compared with other districts, with the state as a whole, and with several other high-performing states. State assessments account for only about 1 percent of most students’ instructional time. Results from state assessments that are part of a comprehensive assessment system keep families and the public at large informed about school, district, and state achievement and progress.
Interim assessments include screeners and diagnostic assessments. Screening assessments are a type of interim assessment used as a first alert or indication of specific instructional need; they are typically quick and easy to administer to a large number of students and easy to score. Assessments used for screening purposes can inform curriculum decisions about instruction for groups of students and about individual students’ academic supports. Schools and districts often use interim assessments to screen and monitor student progress across the school year.
Examples of these assessments used in schools and districts include STAR, i-Ready, NWEA, IXL, and aimsweb. Some of these screening tools also have progress monitoring capability to track a student’s response to intervention at more frequent intervals. Progress monitoring tools may be general outcome measures or mastery measures. While general outcome measures (GOMs) measure global skill automaticity, mastery measurement looks closely at one aspect or specific skill. When needed, screening assessments can be followed by more intensive diagnostic assessments to determine if targeted interventions are necessary. Diagnostic assessments are often individually administered to students who have been identified through the screening process. The diagnostic assessments help to provide greater detail about the student’s knowledge and skills.
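To make these progress-monitoring ideas concrete, a common general-outcome-measure decision rule compares a student’s observed rate of improvement (the slope of the student’s weekly scores) with the rate needed to reach an end-of-year benchmark. The following is a minimal sketch of that logic in Python; the weekly oral reading fluency scores, the 90-words-correct-per-minute goal, and the 20-week horizon are all hypothetical values for illustration, not thresholds from any of the tools named above, which apply their own validated benchmarks and decision rules.

```python
# Minimal sketch of a general outcome measure (GOM) decision rule:
# compare a student's observed rate of improvement (ROI) with the ROI
# needed to reach an end-of-year benchmark. All values are hypothetical.

def observed_roi(scores: list[float]) -> float:
    """Least-squares slope of weekly scores (score units per week)."""
    n = len(scores)
    mean_week = (n - 1) / 2  # mean of week indices 0..n-1
    mean_score = sum(scores) / n
    num = sum((week - mean_week) * (s - mean_score) for week, s in enumerate(scores))
    den = sum((week - mean_week) ** 2 for week in range(n))
    return num / den

def needed_roi(current: float, goal: float, weeks_left: int) -> float:
    """ROI required to close the gap to the goal in the time remaining."""
    return (goal - current) / weeks_left

# Hypothetical weekly oral reading fluency scores (words correct per minute).
scores = [52.0, 55.0, 54.0, 58.0, 60.0, 61.0, 64.0]

roi = observed_roi(scores)
target = needed_roi(current=scores[-1], goal=90.0, weeks_left=20)

print(f"Observed ROI: {roi:.2f} wcpm/week; needed ROI: {target:.2f} wcpm/week")
if roi >= target:
    print("On track: continue the current intervention and keep monitoring.")
else:
    print("Below the aim line: consider adjusting the intervention.")
```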
Performance assessments/tasks can be an effective way to assess students’ learning of the standards within a high-quality curriculum. Performance assessments/tasks require students to apply understanding to complete a demonstration performance or product that can be judged on performance criteria (RIDE, 2012). Performance assessments can be designed to be formative, interim, or summative assessments of learning. They also allow for richer and more authentic assessment of learning. Educators can integrate performance assessments into instruction to provide additional learning experiences for students. Performance tasks are often included as one type of assessment in portfolios and exhibitions, such as those used as part of Rhode Island’s Proficiency Based Graduation Requirements (PBGR).
|  | Inform Instruction | Screen/Identify | Measure Outcomes |
| --- | --- | --- | --- |
| Summative | Generally not used as the primary source of data to inform instruction. May be useful in examining program effectiveness. | Generally not used as the primary source of data to screen/identify students. May be one of multiple sources used. | Primary purpose is to measure outcomes (at the classroom, school, LEA, or state level). Can be used for accountability, school improvement planning, evaluation, and research. |
| Formative | Primary purpose is to inform instruction. | Generally not used to screen/identify students. | Generally not used to measure long-term outcomes; rather, it is used to measure whether students learned what was just taught before moving on to instructional “next steps.” Evidence gathered as part of the formative assessment process may inform a referral to special education and may be used to help measure short-term objectives on IEPs. |
| Interim | May be used to inform instruction. | May be used to screen/identify students. | May be used to measure outcomes in a longer instructional sequence (e.g., end of a unit of study, quarter, semester, MTSS intervention goal, IEP goal). May be part of a special education referral. |
What do educators need to know about validity, reliability, and fairness?
Assessments must be designed and implemented to accurately collect student information. To do this, they should all possess an optimal degree of:
- Validity (the degree to which the assessment measures what it is supposed to measure — i.e., what is defined by the standards),
- Reliability (the consistency with which an assessment provides a picture of what a student knows and is able to do), and
- Fairness (lacks bias, is accessible, and is administered with equity) (RIDE, 2012).
In other words, within an assessment, the items must measure the standards or content. It is also critical that the assessment provide an accurate reflection of student learning. Ensuring fairness is equally important within the assessment, particularly for differently-abled and multilingual learners, because lack of accessibility can impact validity. For example, an assessment may not measure what it was designed to measure if students cannot access the assessment items or stimuli due to linguistic barriers or inattention to other demonstrated learning needs.
One component of ensuring fairness is using assessments that are accessible to all students. Accessible assessment practices may include offering assessments in different modalities (e.g., Braille, oral) or languages, allowing students to respond in different modalities, or providing additional accommodations for students. Accessibility features are available for all students to ensure universal access to the assessment. To further support differently-abled students and multilingual learners, accommodations are also available on all state assessments. Accommodations refer to changes in setting, timing (including scheduling), presentation format, or response format that do not alter in any significant way what the test measures, or the comparability of the results. For example, reading a test aloud may be appropriate when a student is taking a history assessment, but would not be appropriate to assess a student’s decoding ability. When used properly, accessibility features and appropriate test accommodations remove barriers to participation in the assessment and provide students with diverse learning needs an equitable opportunity to demonstrate their knowledge and skills.
To ensure language access for multilingual learners (MLLs), universal accessibility features and accommodations can be leveraged during administration of assessments, in a manner consistent with Rhode Island State Assessment Program policy. For example, breaks and familiar test administrators are available to MLLs on all statewide assessments except the PSAT/SAT. For additional information about accessibility features, please see RIDE’s Accommodations and Accessibility Features Manual. Accommodations are also available to MLLs on all statewide assessments. Examples of accommodations include bilingual dictionaries, reading aloud the test directions in the student’s native language, and Spanish editions of math and science assessments. A full list of accommodations available to MLLs on each state assessment is available in RIDE’s Accommodations and Accessibility Features Manual.
For both MLLs and differently-abled students (DAS), assessment accommodations should reflect instructional accommodations used on a regular basis with a student. Educators evaluate the effectiveness of accommodations through data collection and consideration of the following questions:
- Did the student use the accommodation consistently?
- Did the accommodation allow the student to access or demonstrate learning as well as their peers?
- Did the accommodation allow the student to feel like a member of the class?
- Did the student like using the accommodation?
Most students with IEPs participate in regular statewide assessments with accommodations as outlined in the IEP. DAS who receive testing accommodations must take the same statewide assessment as peers without IEPs. IEP team members collaborate to select accommodations based on educational needs demonstrated by current data, not based on placement or disability category. All students with disabilities should be included in educational accountability systems; only a small percentage (~1%) of students, those with significant cognitive impairments, participate in the alternate state assessments. Educators should engage students and families in decisions about appropriate testing accommodations or participation in alternate assessments (i.e., DLM and Alternate ACCESS).
IDEA speaks to accommodations on district assessments as well as statewide assessments. According to IDEA Sec. 300.320(a)(6), each child’s IEP must include a statement of any individual appropriate accommodations that are necessary to measure the academic achievement and functional performance of the child on state and districtwide assessments, consistent with section 612(a)(16) of the Act. When determining accommodations for district assessments, IEP teams, including the general educator, must consider the difference between target skills (the knowledge or skills being assessed) and access skills (those needed to complete the assessment but not specifically being measured), along with data on the strengths and needs of the individual student.
Another component of ensuring fairness is making sure the items do not include any bias in content or language that may disadvantage some students. For example, when assessing multilingual learners, it is important to use vocabulary that is widely accessible to students and to avoid colloquial and idiomatic expressions and/or words with multiple meanings when they are not pertinent to what you are measuring. Whenever possible, use familiar contexts or objects, such as classroom or school experiences, rather than out-of-school contexts that may or may not be familiar to all students. Keep sentence structures as simple as possible while expressing the intended meaning.
Even with valid, reliable, and fair assessments, it is important for educators to consider multiple data points to ensure that they have a comprehensive understanding of student strengths and needs, especially when supporting DAS and MLLs. In addition to interim and diagnostic assessments, sources of information can range from observations, work samples, and curriculum-based measurement to functional behavioral assessments and parent input. These data points should be gathered within the core curriculum by general educators, rather than only by those providing specialized services, because data should guide daily decisions about instruction within general education. Multiple sources of information help educators collaborate to develop a comprehensive learner profile of strengths and needs. Educators can analyze the learning environment against that profile to identify necessary scaffolds and accommodations to remove barriers for DAS. Multiple sources of data are also important because language access can impact student data from content assessments in English.
Building or refining a comprehensive assessment system begins by agreeing upon the purposes of the assessments the LEA will administer. One assessment cannot answer every question about student learning. Each type of assessment has a role in a comprehensive assessment system. The goal is not to have some — or enough — of each type; rather, it is to understand that each type of assessment has a purpose and, when used effectively, can provide important information to further student learning. Some questions educator teams may ask themselves as part of any discussion of purpose include:
- “What do we want to know about student learning of the standards?”
- “What do we want to learn about students’ skills and knowledge?”
- “What data do we need to answer those questions?”
Once claims and needs are identified, the appropriate assessments are selected to fulfill those data needs by asking: “Which assessment best serves our purpose?” For example, if a teacher wants to know whether students learned the material just taught and where they may be struggling, in order to adjust the next day’s instruction, the teacher may give a short quiz that asks students a few questions targeting a specific skill. By contrast, if the teacher wants to know whether students are proficient with the content taught during the first semester, the teacher may ask students to complete a longer test or a performance task in which students apply their new learning, thus measuring multiple standards/skills.
In addition to considering what purpose an assessment will serve, attention must be paid to the alignment of the assessment with the curriculum being used by the LEA. Curriculum materials embed assessments as part of the package provided to educators. In turn, educators must consider whether the assessments included meet the breadth of purposes and types needed for an assessment system that informs instruction and provides information about student learning. A good starting place is to review what assessments are available within the high-quality instructional materials, identify gaps and weaknesses, and develop a plan for which additional assessments may need to be purchased or developed. Remember that any review of assessment needs involves close use of the standards and universal design guidelines. Providing options in the way assessments are represented, and allowing students to demonstrate their understanding through multiple means of action and expression, benefits all students, especially MLLs and DAS.
Assessments that are not adequately aligned with the LEA’s adopted curriculum and universal design are not accurate indicators of student learning. This is especially important when assessment data are used in high-stakes decision-making, such as student promotion or graduation. Because every assessment has its limitations, it is preferable to use data from multiple assessments and types of assessments. By collecting data from multiple sources, one can feel more confident in inferences drawn from such data. When curriculum, instruction, and assessment are carefully aligned and working together, student learning is maximized.
Finally, when developing or selecting assessments, knowing whether an assessment is a good fit for your needs requires a basic understanding of item types and assessment methods and their respective features, advantages, and disadvantages. Though this is certainly not an exhaustive list, a few of the most common item types and assessment methods include selected response, constructed response, performance tasks, and observations/interviews. See Comprehensive Assessment System: Rhode Island Criteria and Guidance (2012) for a discussion of the advantages and disadvantages of each method.
Facets of a Comprehensive Assessment System in Reading
A comprehensive system of assessment in reading involves several different types of assessments for determining the effectiveness of instruction, the progress a student is making, and the need for and direction of additional interventions and supports to ensure that a student is able to maintain grade-level progress. Districts and schools should begin with their high-quality instructional materials, identifying the types of assessments available within them. Utilizing these high-quality instructional materials resources is a critical component of a comprehensive assessment system in literacy. The following describes various categories of reading assessments and the kinds of information they provide.
Classroom Instructional Assessments: Reading
Screening Assessments. A type of interim assessment:
- used as a first alert or indication of being at-risk for reading below grade level
- administered to all students before instruction
- quick and easy to administer to a large number of students and correlated with end-of-year achievement tests
- rarely provide the specific information needed to determine the most appropriate intervention or target for instruction
- may not include all essential components of reading at any given grade level; however, to make informed decisions about a student’s proficiency in reading, ample data must be collected, so a screening assessment should include, at a minimum, two of the components that influence reading proficiency
Key questions that screening assessments should answer:
- Which students are experiencing reading difficulty?
- Which students are at risk for reading difficulty and in need of further diagnostic assessments and/or additional interventions?
Literacy/Dyslexia Screening Expectation
All students should be screened every year to determine needed supports, per the PLP Guidelines and RI High School Regulations. Universal literacy screening should be administered to all students to determine early risk of future reading difficulties. A preventative approach should be used to ensure student risk is revealed early on, when intervention is most effective. If a student scores low on these screeners, additional assessments should be administered to determine a student’s potential risk for dyslexia, a neurobiological weakness in phonological and orthographic processing. Screeners should include measures of Rapid Automatic Naming (RAN), phonemic awareness, real and pseudoword reading, as well as vocabulary and syntactic awareness, which have implications for prosody, fluency, and ultimately comprehension.
For additional guidance, including screening guidance by grade level, please see the Massachusetts Dyslexia Guidelines.
Examples of Screening Assessments and Early Literacy Screening Assessments.
Benchmark Assessments. A type of interim assessment:
- administered to all students
- used to chart growth in reading
- used to determine if students are making adequate progress in overall performance towards standard(s)
- typically administered at a predetermined time (e.g., at the end of a unit/theme, quarterly)
Key questions that benchmark assessments should answer:
- What is the effectiveness of classroom instruction?
- How should groups be formed for classroom reading instruction?
- Which students need extra support or enrichment to acquire a particular reading skill or standard?
- Which specific reading skills need to be emphasized or re-taught?
Progress Monitoring. A type of formative or interim assessment:
- used to determine next steps
- used during classroom reading instruction (may occur daily or weekly)
- aligned to instructional objectives
- can be used on an ongoing basis and may include teacher-made assessments, book logs, work samples, anecdotal records, and standardized or semi-structured measures of student performance, such as analysis and observational notes of student learning
Key questions that progress monitoring assessments should answer:
- How do the data indicate whether a student “got it”?
- Does the lesson need to be re-taught to the whole class or to just a few students?
- Who needs extra support or enrichment?
- How is the specific, constructive, and timely feedback that is provided to students promoting student learning (or relearning) of reading skills/standards?
Outcome Assessment. A type of summative assessment:
- used as a program or student evaluation in reading
- used to indicate a student’s learning over a period of time and to show how proficient a student is towards meeting the grade-level standards in reading
Key questions that outcome assessments should answer:
- To what degree has the student achieved the reading content standards?
- Is the assessment aligned to the state-adopted reading standards?
- What information/data is provided and may be used to evaluate the effectiveness of the reading curriculum?
- Can decisions about selection and utilization of resources, materials, and personnel be made with data collected from this reading assessment?
Intervention
Diagnostic Assessment. A type of interim assessment:
- used to gain an in-depth view of a student’s reading profile
- administered to students who have already been identified as being at risk of reading below grade level during the screening process
- often are individually administered so observations of behaviors can also be included
Diagnostic assessments are used to determine specific areas of need and may not include all essential components of reading. However, a comprehensive assessment system must include a variety of assessments that address all essential components of reading for educators to use as needed.
Key questions that diagnostic assessments should answer:
- What are a student’s strengths in reading?
- What are a student’s weaknesses in reading?
- Which components of reading (e.g., fluency, phonemic awareness, phonics, text comprehension, and/or vocabulary) are problematic for the student?
- Are other students exhibiting similar reading profiles?
- How should reading intervention groups be formed?
Examples of Diagnostic Assessments
Progress Monitoring of Intervention. A type of formative or interim assessment:
- used to chart rate of growth towards benchmark/goal/standard
- used for students who have intervention services in reading
Key questions that a progress monitoring assessment used with a method of intervention should answer:
- Has this intervention been proven effective in improving students’ literacy skills?
- Is the individual student progressing at a sufficient rate to achieve the goal?
- Are instructional revisions needed for the student to make sufficient progress toward the student’s goal/standard?
Examples of Progress Monitoring Assessments
Classroom Instructional Assessments: Writing
Writing requires the coordination of multiple skills and abilities, including the ability to organize, establish purpose/focus, elaborate, choose and maintain a consistent voice, select appropriate words, structure effective sentences, spell, plan, revise, etc. “To address each of these aspects instructionally, educators need an assessment plan that is comprehensive and meets the varied needs of students” (Olinghouse, 2009).
Assessments for writing may be used for a variety of purposes (e.g., providing assistance to students, assigning a grade, determining proficiency, placing students in instructional groups or courses, and evaluating writing curricula/programs). The National Council of Teachers of English (NCTE) believes that the primary purpose of assessment is to improve teaching and learning (2014). Consequently, the goal of assessing students’ writing should always be just that: refining instruction and improving student learning.
Writing assessments must reflect the social nature of writing and its recursive process, while also considering that each piece of writing has a specific purpose, audience, and task. Due to the variety of genres of writing, the skills associated with each, the diverse audiences, and various purposes for writing (entertain, persuade, inform), the evaluation of a student’s overall writing ability should be based on multiple measures.
Students should be able to demonstrate what they do well in writing. Assessment criteria should match the particular kind of writing being created and its purpose. These criteria should be directly linked to standards that are clearly communicated to students in advance so that students can be guided by the criteria while writing.
Educators need to understand the following in order to develop a system for assessing writing:
- how to find out what students can do when they write informally and on an ongoing basis
- how to use that assessment to decide how and what to teach next
- how to assess in order to form judgments about the quality of student writing and learning
- how to assess ability and knowledge across varied writing engagements
- what the features of good writing are
- what the elements of a constructive writing process are
- what growth in writing looks like — the developmental aspects of writing
- how to deliver useful feedback, appropriate for the writer and situation
- how to analyze writing tasks/situations for their most essential elements (so that the assessment is not everything about writing all at once but rather targeted to objectives)
- how to analyze and interpret both qualitative and quantitative writing assessments
- how to use a portfolio to assist writers in their development
- how self-assessment and reflection contribute to a writer’s development
- that determining proficiency in writing requires reviewing multiple student writing samples from various genres and for diverse audiences, tasks, and purposes
(Adapted from Newkirk & Kent, 2007)
References
Olinghouse, N. G. (2009). Writing assessment for struggling learners. Perspectives on Language and Literacy, Summer, 15–18.
National Council of Teachers of English (NCTE). (2014). Writing assessment: A position statement. Retrieved from https://ncte.org/statement/writingassessment/
In addition to selecting and designing appropriate assessments, it is critical that educators use sound assessment practices to support MLLs and DAS during core instruction. Assessment offers valuable insight into MLL and DAS learning, and educators should use this data to plan and implement high-quality instruction. Through formative assessment, educators play a central role in providing feedback to MLLs on content and disciplinary language development and to DAS on progress towards IEP goals.
As with academic content, a comprehensive assessment system is essential for monitoring the language development of MLLs. To assess English language proficiency, RIDE has adopted ACCESS for ELs as its statewide summative assessment. However, students cannot acquire a second language in a single block of the school day. Thus, it is imperative that educators and administrators develop systems for conducting ongoing formative assessment of content-driven language instruction. Formative assessment processes should take place within ELA/Literacy and will focus on MLLs’ content-based language development. This approach aligns to the WIDA ELD Standards Framework as well as the Blueprint for MLL Success, both of which explicitly call for disciplinary language teaching within the core content areas.
The same integration of evidence-based assessment practices for DAS is needed within the general education curriculum. Seventy percent of RI students with IEPs are in general education settings at least 80% of their day. IEP goals are meant to measure and improve student progress within the general education curriculum. The specially-designed instruction is typically not happening separately, but in connection with the classroom instruction and curriculum. The general educator and special educator work in consultation to use classroom data to measure progress on an IEP goal along with any additional measures indicated in the IEP.
DAS may benefit from data-based individualization (DBI) to improve their progress in the general education curriculum. DBI is an iterative, problem-solving process that involves the analysis of progress-monitoring and diagnostic assessment data. Diagnostic data from tools such as standardized measures, error analysis of progress-monitoring data and work samples, or functional behavioral assessments (FBA) are collected and analyzed to identify the specific skill deficits that need to be targeted. The results of the diagnostic assessment, in combination with the teacher’s analysis of what features of instruction need to be adjusted to better support the student, help staff determine how to individualize the student’s instructional program to meet the individual student’s unique needs and promote progress in the general education curriculum. The diagnostic process allows teachers to identify a student’s specific area(s) of difficulty when lack of progress is evident and can inform decisions about how to adapt the intervention (National Center on Intensive Intervention, 2013).
Assessment to Support MLLs in High-Quality Core Instruction
The 2020 Edition of the WIDA ELD Standards Framework is different from previous iterations in that it contains proficiency level descriptors by grade-level cluster to support developmentally appropriate, content-driven language learning. Educators should draw on these proficiency level descriptors to design or amplify formative assessments tracking MLLs’ language development in ELA/literacy.
As with the formative assessment process in academic content, establishing clear learning goals is the first step in improving student understanding of intended content-based language outcomes. To use the proficiency level descriptors, educators must determine the mode of communication (i.e., whether they are assessing interpretive or expressive language) and select the corresponding set of descriptors. This determination will likely be made when the educator identifies the language goals. Expressive language refers to speaking, writing, and representing, whereas interpretive language includes listening, reading, and viewing.

Image Source: 2020 Edition of WIDA ELD Standards Framework
The proficiency level descriptors should serve as a key resource to educators when refining language goals for assessment purposes, as the proficiency level descriptors highlight characteristics of language proficiency at each level. These descriptors are organized according to their discourse, sentence, and word dimensions. At the discourse level, as shown in the following table, the 2020 Edition distinguishes between language features that contribute to organization, cohesion, or density.

Image Source: 2020 Edition of WIDA ELD Standards Framework
During formative assessments, educators will not likely draw on all dimensions of language at once for assessment purposes. For instance, an exit ticket that asks students to produce two to three sentences would not be an appropriate language sample for assessing progress on organization of language. To adequately assess this discourse-level dimension of language, students would need authentic opportunities to demonstrate proficiency. An assessment item that calls for less than one paragraph or extended oral remarks, therefore, may not suffice for this purpose.
Rather than creating separate assessments to monitor progress towards disciplinary language development, educators should aim to augment assessments that are already part of their local core curricula. For example, multiple modalities could be incorporated into existing content assessments, allowing students to orally explain how they arrived at a particular solution or claim. This practice of amplifying existing materials with additional modalities aligns with UDL guidelines by providing multiple means of representation (perception, language, and symbols) and multiple means for students to demonstrate their understanding (physical action, expression, and communication) — a critical design element for MLLs who need daily explicit speaking, listening, reading, and writing instruction.
Assessment to Support Differently-Abled Students in High-Quality Core Instruction
Differently-abled students are best supported when general and special educators use Universal Design for Learning to collaboratively design and plan assessments aligned to clear learning goals, ensuring the assessments measure the intended goals of the learning experience. Flexibility in assessment options will support learners in demonstrating their knowledge. All learners can benefit from practice assessments, review guides, flexible timing, assistive technologies, and support resources, which help reduce barriers without changing the learning goals being measured. In addition to improving access, flexible assessment options may decrease perceived threats or distractions so that learners can demonstrate their skills and knowledge. For example, a student with specific support needs for fine motor skills may be better able to demonstrate knowledge of how to make a square when given the opportunity to drag and drop line segments in a technology tool rather than use a pencil on paper or a marker on a whiteboard.
Educators can use high-leverage practices (HLPs) to support student learning across content areas, grade levels, and learner abilities. The HLPs contain specific evidence-based practices in four domains: collaboration, assessment, instruction, and social/emotional/behavioral practices.
High-leverage practice #6, on the use of student assessment data to analyze instructional practices and make necessary adjustments that improve student outcomes, highlights the importance of ongoing collaboration between general education and special education (McLeskey et al., 2017). Information from functional skills assessments, such as those provided by an occupational therapist or speech-language therapist, can provide critical information for general educators to use when designing accessible assessments or discussing necessary accommodations to classroom and district assessments. When differently-abled students are not making the level of progress anticipated, the data-based individualization process is a diagnostic method that can help improve the instructional experience and promote progress in the general education curriculum through a tiered continuum of interventions.
Formative Assessment Resources
State Summative Assessment Resources
Additional Resources for a Comprehensive Assessment System
Screening
Types of Screening Resources

Literacy/Dyslexia Screening. Universal literacy screening should be administered to all students to determine early risk of future reading difficulties. A preventative approach should be used to ensure student risk is revealed early on, when intervention is most effective. If a student scores low on these screeners, additional assessments should be administered to determine a student’s potential risk for dyslexia, a neurobiological weakness in phonological and orthographic processing. Screeners should include measures of Rapid Automatic Naming (RAN), phonemic awareness, real and pseudoword reading, as well as vocabulary and syntactic awareness, which have implications for prosody, fluency, and ultimately comprehension. For additional guidance, including screening guidance by grade, please see the Massachusetts Dyslexia Guidelines.

Early Childhood Screening. Child Outreach is Rhode Island’s universal developmental screening system designed to screen all children ages 3 to 5 annually, prior to kindergarten entry. Developmental screenings sample developmental tasks in a wide range of areas and are designed to determine whether a child may experience a challenge that will interfere with the acquisition of knowledge or skills. Screening results are often the first step in identifying children who may need further assessment, intervention, and/or services at an early age to promote positive outcomes in kindergarten and beyond.
Child Outreach Screening - Early Childhood Special Education - Early Childhood - Instruction & Assessment - Rhode Island Department of Education (ri.gov)

MLL Screening. Screening for MLL identification involves completion of the state-approved Home Language Survey (HLS) and potential administration of a Language Screening Assessment, based on responses to the HLS. The guidance linked below outlines the state-adopted procedure for identifying English Learners in accordance with statute R.I.G.L. 16-54-3 and regulation 200-RICR-20-30-3. Additional information on federal and state requirements for screening MLLs can be found in the assessment and placement section of the MLL Toolkit.
Multilingual Learner (MLL) Identification, Screening, Placement and Reclassification (May 2021)

Universal Academic Screening. Through universal academic screening, school teams systematically and regularly analyze schoolwide data to determine the health of core instruction. Current academic performance levels from a screener are one type of academic data teams use to identify strengths and areas of need at a grade level as part of an MTSS.
Screening within an MTSS Framework

Educator Resources for High-Quality Interim Assessments
Interim Assessments - Assessment - Instruction & Assessment World-Class - Rhode Island Department of Education (RIDE)
Assessment Practices Within a Multi-Tiered System of Supports (ufl.edu)
Bailey, T. R., Colpo, A., & Foley, A. (2020). Assessment Practices Within a Multi-Tiered System of Supports (Document No. IC-18). Retrieved from University of Florida, Collaboration for Effective Educator, Development, Accountability, and Reform Center website: http://ceedar.education.ufl.edu/tools/innovationconfigurations/
Progress Monitoring

General Outcome Measures (GOMs). GOMs measure automaticity of basic skills in reading, math, spelling, and written expression, as well as monitor readiness skills in literacy and numeracy. While GOMs do not measure all aspects of reading or math, they serve as a predictive indicator of academic competence in these fundamental content areas and are typically used for setting intervention goals.

Mastery Measures. Mastery measures determine what a student already knows about a specific skill and where instruction should begin, as well as when a student has mastered the particular skill taught. They help determine whether the student is learning specific skills as a result of an intervention and help identify where and how to intervene.

Progress Monitoring Tools Chart. This chart includes measures designed to assess progress towards an end-of-year goal (e.g., oral reading fluency) and measures designed to assess mastery of short-term skills (e.g., letter naming fluency). The chart reviews the peer-reviewed research on progress monitoring tools submitted by the vendors and reports on reliability, validity, bias analysis, sensitivity for reliability and validity of slope, alternate forms, decision rules, administration format, scoring time, scoring format, and rate-of-improvement (ROI) and end-of-year (EOY) benchmarks for each measure. Click on the tabs and tool names to see additional information, including detailed data.

IRIS Center Information Brief. This brief describes and compares two types of progress monitoring, Mastery Measures and General Outcome Measures, providing math and ELA examples and characteristics of each measure.
Diagnostic
IEP Tip Sheet: Measuring Progress Toward Annual Goals | Progress Center (promotingprogress.org)
Suggestions for what to do and what to avoid when designing progress-monitoring plans for differently-abled students, plus additional resources to learn more.

Student Progress Monitoring Tool for Data Collection and Graphing (Excel) | National Center on Intensive Intervention
This Excel tool is designed to help educators collect academic progress-monitoring data across multiple measures as part of the data-based individualization (DBI) process. It allows educators to store data for multiple students (across multiple measures), graph student progress, and set individualized goals for a student on specific measures.

Progress Center High-Quality Academic IEP Program Goals
Recorded webinar, resources, and materials on how to set ambitious goals for students by selecting a valid, reliable progress-monitoring measure, establishing baseline performance, choosing a strategy, and writing a measurable goal.

Student-Level Data-Based Individualization Implementation Checklists (intensiveintervention.org)
Teams can use these checklists to monitor implementation of the data-based individualization (DBI) process during initial planning and ongoing review (progress-monitoring) meetings.

Tools to Support Intensive Intervention Data Meetings | National Center on Intensive Intervention (NCII)
NCII has created a series of tools to help teams establish efficient and effective individual student data meetings. In the DBI process, the team is focused on the needs of individual students who are not making progress in their current intervention or special education program.

Data Collection and Analysis for Continuous Improvement
Collection and analysis of progress-monitoring data are necessary for understanding how students are progressing towards their IEP goals. These data, along with other data sources, can support ongoing instructional decision-making across the continuum of supports and assist teams in evaluating the effectiveness of IEP implementation. The Data Collection and Analysis for Continuous Improvement menu contains resources and tools for progress-monitoring math and reading, selecting tools, and keeping an implementation log.

Toolkit_Student-Progress-Monitoring.pdf (transitionta.org)
The National Technical Assistance Center on Transition (NTACT) toolkit supports data-driven decision-making for middle and high school students to connect their academic progress and transition goals; it includes 50-plus pages of sample tools. Note the inventory on reading, writing, presenting, and study habits (pp. 48–49) and the small-group direct instruction recording sheet (p. 71).

The 5 Steps of Data-Based Individualization Course from the Progress Center
From the Progress Center, educators can build knowledge of the data-based individualization (DBI) process, which is used to support diagnostic practice and improve instruction for students with intensive learning needs.
References

Alber, Rebecca. (2014). Why Formative Assessments Matter. Edutopia. Retrieved from https://www.edutopia.org/blog/formative-assessments-importance-of-rebecca-alber
Black, Paul, & Wiliam, Dylan. (2010). Inside the Black Box: Raising Standards through Classroom Assessment. Phi Delta Kappan, 92(1), 81–90.
CAST (2018). Universal Design for Learning Guidelines version 2.2. Retrieved from http://udlguidelines.cast.org
Council of Chief State School Officers. (2018, 2021). Revising the Definition of Formative Assessment. Retrieved from https://ccsso.org/resource-library/revising-definition-formative-assessment
Kearns, D. M. (2016). Student progress monitoring tool for data collection and graphing [computer software]. Washington, DC: U.S. Department of Education, Office of Special Education Programs, National Center on Intensive Intervention.
McLeskey, J., Barringer, M. D., Billingsley, B., Brownell, M., Jackson, D., Kennedy, M., Lewis, T., Maheady, L., Rodriguez, J., Scheeler, M. C., Winn, J., & Ziegler, D. (2017, January). High-leverage practices in special education. Arlington, VA: Council for Exceptional Children & CEEDAR Center.
National Center on Intensive Intervention. (2013). Data-based individualization: A framework for intensive intervention. Washington, DC: Office of Special Education Programs, U.S. Department of Education.
Rhode Island Department of Elementary and Secondary Education. (2012). Comprehensive Assessment System: Rhode Island Criteria and Guidance. Retrieved from https://www.ride.ri.gov/Portals/0/Uploads/Documents/Instruction-and-Assessment-World-Class-Standards/Assessment/CAS/CAS-Criteria-Guidance-and-Appendices-FINAL.pdf