Each term is followed by a brief definition. Full entries, in the same order, appear below.

  • Accommodation: Alterations that allow access to content
  • Adequate Yearly Progress (AYP): Continuous and substantial yearly progress toward state academic standards
  • Anecdotal Observation: Creating a narrative of events
  • Antecedent: The environmental condition in which a behavior occurs
  • Benchmark: Targeted goal or objective derived from a large sample of data
  • CBA (Curriculum-Based Assessment): Assessment using test stimuli taken from the curriculum, administered repeatedly over time, with the information used to inform instruction
  • CBM (Curriculum-Based Measurement): General outcome measurement that provides ongoing performance feedback used to evaluate instruction
  • Consequence: The response that occurs after a behavior
  • Diagnostic: Provides detailed information regarding a student’s current skills and knowledge
  • Event Recording: Recording each occurrence of a behavior
  • Formative Assessment: Informs teaching and student learning during instruction
  • Functional Analysis: Systematic manipulation of potential variables maintaining a behavior
  • Modification: Alterations that change content so that it is appropriate for the individual
  • Momentary Time Sampling: Data is recorded only at the end of a specified time interval
  • Partial Interval: A behavior is recorded if it occurs at any time during an interval
  • Reinforcement: When a behavior, also known as a response, is maintained or increased as a result of another event that occurs after the behavior
  • Reliability: Consistently having the same or similar scores
  • Screening: Identifies individuals who are at risk
  • Sensitivity to Change: The smallest amount of learning needed to produce a measurable change
  • Setting Event: Something that increases how likely it is that a specific behavior will happen
  • Summative Assessment: Evaluates teaching and student learning at the end of a lesson, unit, or course
  • Task Analysis: Breaking down an academic or behavioral task into component steps
  • Validity: The degree to which an assessment measures what it claims to measure and supports warranted conclusions
  • Whole Interval: A behavior is recorded only if it occurs for the entire interval

Definition: A change in the way a test or assessment is administered or responded to by the student. Categorized by setting, timing, scheduling, presentation, and method of responding.

Why it matters: Accommodations are required by law (IDEA, NCLB).

Example of use: Providing materials in braille for a student who is blind, or giving extra time on a reading test to a student with a learning disability.

References:

  • Farah, Y. N. (2013). Through another’s eyes: Modification or accommodation in standardized testing? Gifted Child Today, 36(3), 209-212.
  • Niebling, B. C., & Elliott, S. N. (2005). Testing accommodations and inclusive assessment practices. Assessment for Effective Intervention, 31(1), 1-6. doi:10.1177/073724770503100101

Definition: Mandated by the No Child Left Behind Act of 2001 (NCLB), AYP requires states to implement a single accountability system that demonstrates yearly progress toward state academic content standards for every student. States have some flexibility in defining and measuring AYP, but state tests are the primary measurement.

Why it matters: If a school district fails to meet AYP for two consecutive years, it is identified as in need of improvement. States develop their own rewards and sanctions, but at a minimum low-performing schools must notify parents of their status, allow students to transfer to another school, and provide supplemental services, and the school must receive additional assistance. If a school continues to fail, it may be restructured.

Additional Criteria: 95% of all students, as well as 95% of each sub-group of students, must take the state tests each year, and each sub-group, including students with disabilities, must meet or exceed the annual expectations set by the state. Progress is tested yearly in grades 3 through 8, and in one high school grade, for reading/language arts and math. All students are to be proficient by the 2013-2014 school year.

References:

  • Schwarz, R. D., Yen, W. M., & Schafer, W. D. (2001). The challenge and attainability of goals for adequate yearly progress. Educational Measurement: Issues and Practice, 20(3), 26-33.
  • Yell, M. L. (2012). The law and special education (3rd ed.). Old Tappan, NJ: Merrill/Prentice Hall.

Definition: A narrative, or story, of the events occurring during an observation is recorded.

Why it matters: It can be helpful in guiding future data collection by providing context and information not provided through other collection methods. However, it only represents a single moment, is hard to turn into measurable data, and can be cumbersome. It is not effective for progress monitoring or intervention evaluation.

Example of use: A teacher is having trouble determining the best way to collect data and measure the progress of a student’s off-task behavior. He asks the school psychologist to conduct two anecdotal observations. They use these observations to decide which behaviors to measure and the best way to measure them.

References:

  • Adamson, R. M., & Wachsmuth, S. T. (2014). A review of direct observation research within the past decade in the field of emotional and behavioral disorders. Behavioral Disorders, 39(4), 181-189.
  • Smith, R. G., & Iwata, B. A. (1997). Antecedent influences on behavior disorders. Journal of Applied Behavior Analysis, 30(2), 343-375. doi:10.1901/jaba.1997.30-343
  • Heward, W. L. (2003). Ten faulty notions about teaching and learning that hinder the effectiveness of special education. The Journal of Special Education, 36(4), 186-205. doi:10.1177/002246690303600401

Definition: The setting(s), and the events within them, in which a specific behavior occurs.

Why it matters: Understanding when and where a behavior occurs is important in determining its function as well as for improving behavior change programming.

Example of use: Careful observation of antecedent events allows a school psychologist to determine that a student makes funny remarks in math class in order to escape the class (the teacher quickly sends him out), but that in social studies class the same behavior results in attention from the teacher without escape from classwork.

References:

  • Gresham, F. M., Watson, T. S., & Skinner, C. H. (2001). Functional Behavioral Assessment: Principles, Procedures, and Future Directions. School Psychology Review, 30(2), 156.
  • Smith, R. G., & Iwata, B. A. (1997). Antecedent influences on behavior disorders. Journal of Applied Behavior Analysis, 30(2), 343-375. doi:10.1901/jaba.1997.30-343

Definition: A benchmark is a goal derived from a large data set that can be used to measure the progress of an individual at the class, school, and/or national level. Benchmarks provide information regarding expected performance at specific times.

Why it matters: Benchmarks allow teachers and administrators to measure progress toward state and national standards. Good benchmarks do not mirror standards but do assess critical skills/knowledge needed to meet the standards. Benchmarks provide information that teachers or administrators can use to adjust practices or identify specific individuals before end of year assessments.

Example of use: A teacher may use CBM to measure a student’s progress toward nationally normed benchmarks for correctly spelled words.

References:

  • Duckett, B. (2009). A Dictionary of Education. Reference Reviews, 23(5), 21-22.
  • Herman, J. L., & Baker, E. L. (2005). Making Benchmark Testing Work. Educational Leadership, 63(3), 48-54.

Definition: CBAs use test stimuli taken from the curriculum, are intended to be administered regularly, and inform instruction. CBM is a type of CBA that measures general outcomes incorporating many skills. CBA can also be thought of in terms of mastery measurement, in which progress is measured by mastery of the sub-skills needed to reach global skills. Most CBA models are mastery models.

Why it matters: Mastery measurement CBAs are not standardized, so their validity and reliability are unknown. They are most often teacher-made assessments for individual students.

Example of use: A teacher creates several math assessments to measure a student’s progress toward the sub-skills needed to add and subtract mixed fractions.

References:

  • Hosp, M. K., & Hosp, J. L. (2003). Curriculum-Based Measurement for Reading, Spelling, and Math: How to Do It and Why. Preventing School Failure, 48(1), 10-17.
  • Hosp, J. L., Hosp, M. K., & Howell, K. W. (2014). The ABCs of curriculum-based evaluation: A practical guide to effective decision making. New York, NY: Guilford Press.

Definition: CBMs are standardized assessments focused on long-term goals that measure progress within many skill domains. CBM was created to be administered regularly throughout the school year; the measures are easy to administer and score and provide ongoing data used to make instructional decisions.

Why it matters: The increased accountability and data-based decision making required by IDEA are addressed by the use of CBM. CBM has good validity and reliability as well as treatment validity, meaning the assessment also informs evaluation of the instruction or treatment being used.

Example of use: A teacher gives CBM-R to her 3rd grade class at the beginning of the year. She identifies one student who scores well below everyone else. She administers 2 additional CBM-R tasks to this student to create a baseline. Using national benchmarks, she creates a goal line for the student and administers CBM-R once a week to measure the student’s progress. After about 6 weeks, the teacher decides the student is not making enough progress to meet his benchmark and selects an evidence-based reading intervention. The teacher continues to monitor the student weekly during the intervention, comparing each score against the goal line (a sketch of this arithmetic follows below).
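
To make the goal-line arithmetic concrete, here is a minimal Python sketch; the scores and the weekly growth rate are hypothetical placeholders, not published norms:

```python
# Minimal sketch of CBM-R goal-line arithmetic. All numbers are
# hypothetical placeholders, not published norms.
baseline_probes = [42, 45, 40]          # words read correctly (WRC) per minute
baseline = sorted(baseline_probes)[1]   # median of the three baseline probes
weekly_growth = 1.5                     # assumed expected gain in WRC per week
weeks_to_goal = 30

goal = baseline + weekly_growth * weeks_to_goal
print(f"Baseline {baseline} WRC/min; goal after {weeks_to_goal} weeks: {goal:.0f}")

# Weekly progress-monitoring scores, compared against the aim line.
progress = [43, 44, 44, 46, 45, 45]
for week, score in enumerate(progress, start=1):
    aim = baseline + weekly_growth * week
    status = "on track" if score >= aim else "below aim line"
    print(f"Week {week}: {score} vs aim {aim:.1f} -> {status}")
```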

References:

  • Fuchs, L. S., & Fuchs, D. (1999). Monitoring student progress toward the development of reading competence: A review of three forms of classroom-based assessment. School Psychology Review, 28(4), 659-671.
  • Hosp, M. K., & Hosp, J. L. (2003). Curriculum-Based Measurement for Reading, Spelling, and Math: How to Do It and Why. Preventing School Failure, 48(1), 10-17.

Definition: Administered prior to instruction, a diagnostic assessment provides information about a student’s current knowledge and skills. This information can be used to determine the best instruction as well as to identify needed supports.

Why it matters: It allows you to create a program based upon the student’s current level of understanding and provides the teacher with a clear starting point for instruction.

Example of use: Using the Qualitative Reading Inventory to create a reading instruction plan for a student.

References:

  • Duckett, B. (2009). A Dictionary of Education. Reference Reviews, 23(5), 21-22.
  • National Center on Response to Intervention (June 2012). RTI Implementer Series: Module 1: Screening—Training Manual. Washington, DC: Department of Education, Office of Special Education Programs, National Center on Response to Intervention.

Definition: Recording each occurrence of a behavior over a set amount of time. It is best used to measure the frequency or rate of a behavior.

Why it matters: It is easy for most teachers to use with discrete behaviors (those with a clear beginning and end), and a baseline rate can be used to measure progress in terms of an increasing or decreasing rate.

Example of use: A teacher decides to use event recording for the number of times a student speaks out in math class without raising his hand first. The teacher collects data for 3 consecutive classes and divides the total behaviors by the total minutes to get a rate of behaviors per minute (as in the sketch below). The teacher then uses several strategies and continues to take event recording data to see if the rate of behaviors per minute changes.
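
The rate calculation in this example amounts to a few lines of arithmetic; here is a minimal Python sketch with hypothetical counts and class lengths:

```python
# Minimal sketch of event-recording arithmetic. Counts and class
# lengths are hypothetical.
counts = [7, 5, 9]        # call-outs recorded in each of 3 classes
minutes = [50, 50, 45]    # length of each observation, in minutes

rate = sum(counts) / sum(minutes)
print(f"Baseline rate: {rate:.2f} behaviors per minute")
```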

References:

  • Gast, D. L., & Ledford, J. (2014). Single case research methodology: Applications in special education and behavioral sciences. New York, NY: Routledge.
  • Gresham, F. M., Watson, T. S., & Skinner, C. H. (2001). Functional Behavioral Assessment: Principles, Procedures, and Future Directions. School Psychology Review, 30(2), 156.

Definition: Provides feedback on both teacher instruction and student learning of specific content before that content is evaluated summatively. Feedback is used to adjust teaching and guide student learning.

Why it matters: Teachers know whether their instruction is effective, or whether they need to adjust it, prior to summative assessment. Feedback identifies individual students who need additional support or attention. For students, it can help them focus on areas of need and guide learning prior to summative assessment. It can also foster a cooperative, instead of competitive, learning environment.

Example of use: Informal formative assessment includes asking questions during instruction that are neither too specific nor too broad. Questions should be connected to the specific content of the lesson in a way that builds toward larger goals. Student answers should be explored, not evaluated as right or wrong, and feedback should take the form of probing or guiding questions. Bad question: “What year did WWII end?” Better: “How did economics help end WWII?” More formal formative assessments provide data that can be used in decision-making processes. Examples of data-generating formative assessment are exit tickets (“ticket out the door”), technology such as Socrative or Blackboard, and Curriculum-Based Measurement (CBM).

References:

  • Brookhart, S., Moss, C., & Long, B. (2008). Formative Assessment That Empowers. Educational Leadership, 66(3), 52-57.
  • Dorn, S. (2010). The political dilemmas of formative assessment. Exceptional Children, 76(3), 325-337.
  • Duckor, B. (2014). Formative Assessment in Seven Good Moves. Educational Leadership, 71(6), 28-32.

Definition: Systematically manipulating potential variables that maintain a behavior through alternating test and control conditions. Common variables tested include escape, attention, and access to tangibles.

Why it matters: It systematically evidences the function of a behavior. This information can be used to create an effective intervention plan, and it is a key component of an effective functional behavioral assessment (FBA).

Example of use: A teacher conducts a functional analysis for a student in her class, beginning with attention. Data is collected on the target behaviors across all conditions. The teacher begins by giving the student attention regardless of behavior until 2 minutes have elapsed or the target behavior occurs (control condition). The teacher then provides attention only after the target behavior occurs (test condition). Each test-control pair should be repeated, ideally several times. Next, the teacher tests escape from demands and access to tangibles in the same manner (a sketch of summarizing such data follows below).
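
Interpreting such a functional analysis comes down to comparing behavior counts across test and control conditions. Here is a minimal Python sketch with hypothetical session counts:

```python
# Minimal sketch of summarizing functional analysis data. Counts are
# hypothetical occurrences of the target behavior per 2-minute session.
from statistics import mean

sessions = {
    "attention (test)":    [6, 7, 5],
    "attention (control)": [1, 0, 1],
    "escape (test)":       [2, 1, 2],
    "escape (control)":    [1, 2, 1],
    "tangible (test)":     [0, 1, 0],
    "tangible (control)":  [1, 0, 1],
}

for condition, counts in sessions.items():
    print(f"{condition}: mean {mean(counts):.1f} occurrences per session")

# A test condition with a clearly higher mean than its control (here,
# attention) suggests that variable maintains the behavior.
```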

References:

  • Fahmie, T. A., Iwata, B. A., Querim, A. C., & Harper, J. M. (2013). Test-specific control conditions for functional analyses. Journal of Applied Behavior Analysis, 46(1), 61. doi:10.1002/jaba.9
  • Kunnavatana, S. S., Bloom, S. E., Samaha, A. L., & Dayton, E. (2013). Training teachers to conduct trial-based functional analyses. Behavior Modification, 37(6), 707-722. doi:10.1177/0145445513490950

Definition: Alterations that change the content itself so that the material is accessible and appropriate for the individual.

Why it matters: Modifications are required in order to provide a free appropriate public education (FAPE).

Example of use: A student in a general education Geometry class is assessed by his ability to identify basic shapes only.

References:

  • Farah, Y. N. (2013). Through another’s eyes: Modification or accommodation in standardized testing? Gifted Child Today, 36(3), 209-212.
  • Niebling, B. C., & Elliott, S. N. (2005). Testing accommodations and inclusive assessment practices. Assessment for Effective Intervention, 31(1), 1-6. doi:10.1177/073724770503100101

Definition: Time is broken into intervals, such as 1-minute intervals for 50 minutes, and data is taken at the end of each interval. Data is recorded as the percentage of intervals in which the behavior is occurring, and the behavior is assumed to have occurred for the entire interval. It is best for continuous activity. There are three types: fixed, variable, and planned activity check (PLACHECK). Fixed: the observation period is divided into equal intervals. Variable: the intervals are of variable length. PLACHECK: used for group behavior; at the end of each interval, the data collector records the number of participants engaging in the target behavior, and data is reported as the percentage of the group engaging in the target behavior for each interval.

Why it matters: This data collection method does not require continuous observation, making it more practical for teachers and other practitioners.

Example of use: A teacher is measuring on-task behavior for a student. She sets her phone to buzz every 2 minutes; when it buzzes, she records whether the student is on-task. She assumes the behavior occurred throughout the interval when calculating the student’s time on-task (as in the sketch below).
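
The percentage-of-intervals calculation is a simple tally; here is a minimal Python sketch with hypothetical recordings:

```python
# Minimal sketch of momentary time sampling arithmetic: one observation
# at the end of each 2-minute interval. Recorded values are hypothetical.
interval_minutes = 2
on_task = [True, True, False, True, False, True, True, False, True, True]

pct = 100 * sum(on_task) / len(on_task)
print(f"On-task during {pct:.0f}% of intervals")

# Each True is assumed to cover its whole interval when estimating time:
print(f"Estimated time on-task: {sum(on_task) * interval_minutes} "
      f"of {len(on_task) * interval_minutes} minutes")
```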

References:

  • Gast, D. L., & Ledford, J. (2014). Single case research methodology: Applications in special education and behavioral sciences. New York, NY: Routledge.

Definition: Data is taken on behavior in intervals of time (every 10 seconds or every 1 minute). A behavior is recorded as ‘occurring’ if it is observed during any portion of the interval. If the behavior is recorded as occurring in two 10-second intervals in a minute, this can be calculated as the behavior occurring 20 seconds of every minute.

Why it matters: It is effective for measuring behaviors that have no discrete beginning or end while also allowing duration to be estimated. If the intervals are not small enough, it can result in an overestimation of duration.

Example of use: A teacher is measuring out-of-seat behavior, but event recording does not fully capture the information wanted because the student sometimes stays out of his seat for several seconds at a time. The teacher uses partial interval data collection with 10-second intervals to measure the behavior. The data allows the teacher to estimate the amount of time out of seat (duration), as in the sketch below.
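
The duration estimate works the same way in code; here is a minimal Python sketch with hypothetical intervals:

```python
# Minimal sketch of partial-interval arithmetic with 10-second intervals.
# True means out-of-seat behavior was observed at ANY point during the
# interval; the values are hypothetical.
intervals = [True, False, False, True, True, False]  # one observed minute

flagged = sum(intervals)
print(f"Behavior in {100 * flagged / len(intervals):.0f}% of intervals")
print(f"Estimated duration: {flagged * 10} s per minute (tends to overestimate)")
```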

References:

  • Gast, D. L., & Ledford, J. (2014). Single case research methodology: Applications in special education and behavioral sciences. New York, NY: Routledge.
  • Gresham, F. M., Watson, T. S., & Skinner, C. H. (2001). Functional Behavioral Assessment: Principles, Procedures, and Future Directions. School Psychology Review, 30(2), 156.


Definition: When a behavior, also known as a response, is maintained or increased as a result of another event (consequence) that occurs after the behavior.

Positive Reinforcement: Increasing the behavior by presenting a desired stimulus dependent (contingent) upon engaging in the desired response. Example: Teacher praises student for raising their hand.

Negative Reinforcement: Increasing the behavior by removing an aversive stimulus contingent upon engaging in the desired response. Example: Teacher does not allow student to go to music class until the student has completed his or her spelling assignment.

References:

  • Alberto, P., & Troutman, A. (2009). Arranging consequences that increase behavior. In Applied behavior analysis for teachers (6th ed., pp. 215-262). Columbus, OH: Pearson.
  • Horner, R., Sugai, G., Todd, A., & Lewis-Palmer, T. (2000). Elements of behavior support plans: A technical brief. Exceptionality, 8, 205-215.
  • Wolfgang, C. H. (2001). Solving discipline and classroom management problems: Methods and models for today’s teachers. John Wiley & Sons.

Definition: Consistency or stability of a test in producing the same or similar scores over repeated administrations.

Why it matters: Reliability determines how much trust you can put in a score. All assessments have some error associated with them, but small error means the score a student receives is a true, or reliable, measure.

Example of use: You should be able to give a test to a student in the morning, give it again to the same student in the afternoon of a different day, and get similar results (as in the sketch below).
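
One common way to quantify this consistency is a test-retest correlation. Here is a minimal Python sketch with hypothetical scores:

```python
# Minimal sketch of a test-retest reliability check: correlating two
# administrations of the same test. Scores are hypothetical.
from statistics import correlation  # requires Python 3.10+

first  = [78, 85, 62, 90, 71, 88]   # first administration
second = [80, 83, 65, 91, 70, 86]   # second administration, same students

r = correlation(first, second)
print(f"Test-retest reliability estimate: r = {r:.2f}")
# Values near 1.0 indicate the test ranks students consistently.
```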

References:

  • Salkind, N. J. (Ed.). (2008). Encyclopedia of educational psychology (Vol. 1). Sage Publications.

Definition: An assessment that identifies individuals as being at risk early enough to allow time for preventative action. It is not intended to diagnose.

Why it matters: Screening is vital for early intervention and the prevention of later difficulties and/or disabilities. For academics, it helps identify students who may have difficulties before they fail.

Example of use: A teacher uses the easyCBM mathematics measures at the beginning of the year to identify students in her class at risk for failure and monitors their progress closely.

References:

  • Cook, C. R., Volpe, R. J., & Livanis, A. (2010). Constructing a roadmap for future universal screening research beyond academics. Assessment for Effective Intervention, 35(4), 197-205.
  • Gersten, R., Clarke, B., Haymond, K., & Jordan, N. (2011). Screening for mathematics difficulties in K-3 students. Second edition. Portsmouth, NH: RMC Research Corporation, Center on Instruction.

Definition: The level or amount of learning needed to result in a measurable change in the assessment outcome. Assessments that are more sensitive to change are able to measure smaller increments of learning.

Why it matters: Sensitive assessments let you measure learning and rates of learning so you can make data-based instructional decisions before the student fails. Highly sensitive instruments can also give students and teachers positive feedback on success sooner than less sensitive instruments.

Example of use: A teacher uses CBM-R to create a baseline and track the rate of learning for a student who is struggling to read. Using CBM-R, the teacher verifies that the student is learning at a slower rate than his peers and decides to use mini-lessons to build vocabulary. The teacher can then verify whether the student’s rate of learning improves before any summative assessments are given.

References:

  • Fuchs, L. S., & Fuchs, D. (1999). Monitoring student progress toward the development of reading competence: A review of three forms of classroom-based assessment. School Psychology Review, 28(4), 659-671.
  • Fuchs, L. S., Fuchs, D., & Courey, S. J. (2005). Curriculum-based measurement of mathematics competence: From computation to concepts and applications to real-life problem solving. Assessment for Effective Intervention, 30(2), 33-46. doi:10.1177/073724770503000204

Definition: Assessment that takes place at the end of a lesson, unit, or course of study. Formally evaluates student learning of a specific range of content.

Why it matters: It provides formal evaluation of the student’s learning and the teacher’s instruction. It can be compared across students, classes, schools, states, and even countries. It allows for student and teacher comparison/evaluation and informs overall education objectives.

Example of use: End of unit test, end of course exam, and high stakes testing associated with No Child Left Behind.

References:

  • Black, P. J. (1998). Testing, friend or foe? The theory and practice of assessment and testing (Vol. 3). Psychology Press.
  • Duckett, B. (2009). A Dictionary of Education. Reference Reviews, 23(5), 21-22.

Definition: Breaking down an academic or behavioral task into its component skills in order to sequence instruction and target intervention. Task analysis is required for criterion-referenced assessments.

Why it matters: It allows for clear, sequenced, and measurable instruction and assessment of progress towards complex academic or behavioral tasks.

Example of use: A teacher wants to teach a student how to put on his shoes. She creates a task analysis: identify his shoes, find the matching pair, put each shoe on the correct foot, and tie the shoelaces (a sketch of recording progress against these steps follows below).
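
Recording a student’s performance against such a task analysis is essentially a checklist; here is a minimal Python sketch with hypothetical steps and scores:

```python
# Minimal sketch of scoring a trial against a task analysis. Steps and
# scores are hypothetical.
steps = [
    "identify his shoes",
    "find the matching pair",
    "put each shoe on the correct foot",
    "tie the shoelaces",
]
independent = [True, True, True, False]  # performed without prompting?

for step, ok in zip(steps, independent):
    print(f"[{'x' if ok else ' '}] {step}")
print(f"Mastered {sum(independent)} of {len(steps)} component steps")
```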

References:

  • Hughes, S. (1982). Another look at task analysis. Journal of Learning Disabilities, 15(5).
  • Wynne, S. A. (2008). AEPA 22 Special Education. Boston: XAMonline.

Definition: An assessment is valid if it measures what it purports to measure. A statement or conclusion regarding the relationship between two or more variables is valid if it is backed by sound and rigorous research.

Why it matters: If an assessment claims to measure a numeracy skill that predicts success in later math classes, but it does not actually measure that skill, or the skill is not really a good predictor of later success, then the test is invalid. Using invalid tests wastes time and money because no one is sure what, if anything, they measure or what the results mean.

Example of use: Always check the validity of an assessment and the predictive claims made for it.

References:

  • Rumrill, P. D., Jr., Cook, B. G., & Wiley, A. L. (2011). Research in special education: Designs, methods, and applications. Springfield, IL: Charles C Thomas.

Definition: Data is taken on behavior in intervals of time (e.g., every 10 seconds or every 1 minute). A behavior is recorded as ‘occurring’ only if it occurs for the entire interval. If the behavior is recorded as occurring in two 10-second intervals in a minute, this can be calculated as the behavior occurring 20 seconds of every minute.

Why it matters: It provides an estimate of time/duration but, if the intervals are not small enough, could result in an underestimate of duration.

Example of use: A teacher is measuring on-task behavior for a student using whole interval data collection with 10-second intervals in her class (as in the sketch below).
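
The calculation mirrors partial interval recording, except an interval only counts if the behavior filled it; here is a minimal Python sketch with hypothetical data:

```python
# Minimal sketch of whole-interval arithmetic with 10-second intervals.
# True means the student was on-task for the ENTIRE interval; values
# are hypothetical.
intervals = [True, True, False, True, False, False]  # one observed minute

full = sum(intervals)
print(f"On-task for {100 * full / len(intervals):.0f}% of intervals")
print(f"Estimated on-task time: {full * 10} s per minute (tends to underestimate)")
```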

References:

  • Gast, D. L., & Ledford, J. (2014). Single case research methodology: Applications in special education and behavioral sciences. New York, NY: Routledge.
  • Gresham, F. M., Watson, T. S., & Skinner, C. H. (2001). Functional Behavioral Assessment: Principles, Procedures, and Future Directions. School Psychology Review, 30(2), 156.