Authors: Lisa Didion, Marissa J. Filderman, Greg Roberts, Sarah A. Benz, Cassandra L. Olmstead
Abstract: Rubric-based observations of pre- and in-service teachers are common practice in schools. Popular observation tools often result in minimal variation in ratings between teachers, require extensive training and time demands for raters, and provide minimal feedback for professional development. Alternatively, direct observation methods have been shown to effectively measure instructional behaviors. Applying direct observation to audio recordings would produce quantitative scores and provide valuable feedback to teachers about their instruction. As such, the purpose of the present pilot study was to examine the reliability and efficiency of using audio recordings to measure practices related to explicit instruction. Fleiss’s kappa was used to determine the reliability of multiple raters. Regression and correlation analyses examined the strength and direction of the relationship between the full length of a teacher’s lesson and the first 20 min of the lesson. Results indicate that using audio recordings is reliable, with kappas ranging from .45 to .80. Based on regression analyses, the first 20 min of a teacher’s lesson is predictive of the rates of behaviors observed in a full lesson. Correlations suggest large, positive relationships between rates of behaviors in the first 20 min and the full lesson. Recommendations for future studies of audio-recorded observations and progress monitoring of teacher behavior are discussed.
Citation: Assessment for Effective Intervention, Ahead of Print
PubDate: 2023-02-03T07:28:28Z
DOI: 10.1177/15345084221148202

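The reliability analysis above uses Fleiss’s kappa, which generalizes Cohen’s kappa to agreement among more than two raters. A minimal sketch of that computation with statsmodels; the rating data and the binary behavior coding are hypothetical illustrations, not the study’s actual protocol:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical data: rows = lesson segments, columns = raters, values =
# the category each rater assigned (e.g., 1 = explicit-instruction
# behavior observed in the segment, 0 = not observed).
ratings = np.array([
    [1, 1, 1],
    [0, 1, 0],
    [1, 1, 0],
    [0, 0, 0],
    [1, 0, 1],
])

# aggregate_raters converts subject-by-rater codes into the
# subject-by-category count table that fleiss_kappa expects.
table, _ = aggregate_raters(ratings)
print(f"Fleiss's kappa: {fleiss_kappa(table, method='fleiss'):.2f}")
```
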
Authors: Sara E. Witmer, Emily C. Bouck
Abstract: One perceived advantage of computer-based testing is that accessibility tools can be embedded within the testing format, allowing students with disabilities to use them when necessary to remove unique barriers within testing. However, an important assumption is that students activate and use the tools when needed. Initial data from large-scale computer-based testing suggest many students with disabilities are not using them; information is needed to understand why. Both computer skills and motivation are likely necessary for students to use accessibility tools; therefore, we explored whether prior computer use, math motivation, and test motivation predicted accessibility tool use on a national math test. We further explored the relationship between accessibility tool use and test performance. Accessibility tool use was relatively infrequent. Test motivation was weakly associated with text-to-speech use. Use of the eliminate-choice and scratchwork tools was weakly associated with performance. When combined with related empirical work, findings suggest a potential need to improve student test motivation and corresponding use of accessibility tools to improve the validity of low-stakes test scores. However, given the weak relationships identified between tool use and performance, evidence-based math interventions are anticipated to be more helpful for improving math performance than mere promotion of accessibility tool use.
Citation: Assessment for Effective Intervention, Ahead of Print
PubDate: 2023-01-30T05:39:11Z
DOI: 10.1177/15345084231152477

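The abstract does not name the statistical model, so the sketch below assumes a logistic regression for a binary "used any accessibility tool" outcome; all variable names and the simulated data are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical predictors on standardized scales.
rng = np.random.default_rng(7)
n = 500
df = pd.DataFrame({
    "prior_computer_use": rng.normal(0, 1, n),
    "math_motivation": rng.normal(0, 1, n),
    "test_motivation": rng.normal(0, 1, n),
})

# Simulate infrequent tool use weakly tied to test motivation,
# mirroring the direction of the reported finding.
logit_p = -1.5 + 0.4 * df["test_motivation"]
df["tool_use"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("tool_use ~ prior_computer_use + math_motivation + test_motivation", df)
print(model.fit(disp=0).summary())
```
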
Authors: Erin Dowdy, Michael J. Furlong, Karen Nylund-Gibson, Dina Arch, Tameisha Hinton, Delwin Carter
Abstract: The original Social Emotional Distress Survey–Secondary (SEDS-S) assesses adolescents’ past month’s experiences of psychological distress. Given the continued need for and use of brief measures of student social-emotional distress, this study examined a five-item version (SEDS-S-Brief) to evaluate its use for surveillance of adolescents’ wellness in schools. Three samples completed the SEDS-S-Brief. Sample 1 included a cross-sectional sample of 105,771 students from 113 California secondary schools; responses were used to examine validity evidence based on internal structure. Sample 2 included 10,770 secondary students who also completed the Social Emotional Health Survey-Secondary-2020, Mental Health Continuum–Short Form, Multidimensional Student Life Satisfaction Scale, and selected Youth Risk Behavior Surveillance items (chronic sadness and suicidal ideation); these responses were used to examine validity evidence based on relations to other variables. Sample 3 included 773 secondary students who completed the SEDS-S-Brief annually for 3 years, providing response stability coefficients. The SEDS-S-Brief was invariant across students based on sex, grade level, and Latinx status, supporting its use across diverse groups in schools. Additional analyses indicated moderate to strong convergent and discriminant validity characteristics and 1- and 2-year temporal stability. The findings advance the field toward comprehensive mental health surveillance practices to inform services for youth in schools.
Citation: Assessment for Effective Intervention, Ahead of Print
PubDate: 2022-11-23T09:14:28Z
DOI: 10.1177/15345084221138947

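Sample 3’s response stability coefficients are wave-to-wave correlations of annual scores. A minimal sketch with simulated data; the score scale and autocorrelation are invented for illustration, not taken from the study:

```python
import numpy as np

# Hypothetical three annual waves of SEDS-S-Brief total scores for the
# same 773 students.
rng = np.random.default_rng(3)
n = 773
year1 = rng.normal(10, 3, n)
year2 = 0.6 * year1 + rng.normal(4, 2.4, n)
year3 = 0.6 * year2 + rng.normal(4, 2.4, n)

# np.corrcoef treats each row as a variable, so this yields the 3 x 3
# correlation matrix across waves.
r = np.corrcoef(np.vstack([year1, year2, year3]))
print(f"1-year stability: r12 = {r[0, 1]:.2f}, r23 = {r[1, 2]:.2f}")
print(f"2-year stability: r13 = {r[0, 2]:.2f}")
```
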
Authors: Erica N. Mason, Erica S. Lembke
Abstract: Replication studies in special education are necessary to strengthen the foundation upon which instruction and intervention for students with disabilities are built. J. Jenkins et al. (2017) found that intermittent reading fluency progress monitoring schedules did not delay decision-making and were similar in decision-making accuracy to the traditional weekly progress monitoring schedule. Although underpowered, the current pilot study conceptually replicated the original claims and extended that work by investigating the same questions in the area of mathematics computation. Implications for research and practice are shared.
Citation: Assessment for Effective Intervention, Ahead of Print
PubDate: 2022-11-03T12:17:52Z
DOI: 10.1177/15345084221133730

Authors: Milena A. Keller-Margulis, Michael Matta, Lindsey Landry Pierce, Katherine Zopatti, Erin K. Reid, G. Thomas Schanding
Abstract: Measuring and identifying risk for reading difficulties at the kindergarten level is necessary for providing intervention as early as possible. The purpose of this study was to examine concurrent validity evidence for two kindergarten reading screeners, Acadience Reading and the Texas Primary Reading Inventory (TPRI), as well as diagnostic accuracy at different performance levels on the Woodcock-Johnson IV (WJ-IV) Reading Cluster, across emergent bilingual and monolingual English learners in kindergarten (n = 96). Findings indicated moderate correlations of Acadience Reading and the TPRI with the WJ-IV. Diagnostic accuracy results showed that the screening measures were inadequate when predicting WJ-IV performance above a standard score (SS) of 90, but results improved for almost all measures and student groups when the performance threshold was lowered to 80 SS. Acadience Reading Below Benchmark (AR BB) offered the lowest overall accuracy for emergent bilingual (EB) students. Implications for efficient and accurate use of reading screeners in schools are discussed.
Citation: Assessment for Effective Intervention, Ahead of Print
PubDate: 2022-11-03T06:03:39Z
DOI: 10.1177/15345084221133559

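Diagnostic accuracy here comes down to cross-tabulating screener risk decisions against the outcome criterion at each threshold. A minimal sketch assuming a sensitivity/specificity summary; the screener cut score, the simulated scores, and the function name are hypothetical:

```python
import numpy as np

def diagnostic_accuracy(screener, outcome, screen_cut, outcome_cut):
    """Sensitivity and specificity of a screener against an outcome criterion.

    Students at/below screen_cut are flagged as at risk; students scoring
    below outcome_cut on the outcome measure are the cases the screener
    should catch.
    """
    flagged = screener <= screen_cut
    poor_outcome = outcome < outcome_cut
    tp = np.sum(flagged & poor_outcome)   # correctly flagged
    fp = np.sum(flagged & ~poor_outcome)  # false positives
    fn = np.sum(~flagged & poor_outcome)  # missed cases
    tn = np.sum(~flagged & ~poor_outcome)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical standard scores for 96 students, with the outcome
# threshold lowered from 90 to 80 SS as in the study.
rng = np.random.default_rng(0)
screener = rng.normal(100, 15, 96)
outcome = screener + rng.normal(0, 10, 96)  # moderately correlated outcome
for cut in (90, 80):
    sens, spec = diagnostic_accuracy(screener, outcome, screen_cut=90, outcome_cut=cut)
    print(f"outcome cut {cut} SS: sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```
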
Authors: Wesley A. Sims, Rondy Yu, Kathleen R. King, Danielle Zahn, Nina Mandracchia, Elissa Monteiro, Melissa Klaib
Abstract: Classroom management (CM) practices have a well-established, intuitive, and empirical connection with student academic, social, emotional, and behavioral outcomes. CM, defined as educator practices used to create supportive classroom environments, may be the most impactful implementation factor among universal Tier I supports. Recognizing the importance of CM and existing deficiencies in pre- and in-service training for teachers, schools are increasingly turning to data-driven professional development activities as a solution. The current study continues the validation process of the Direct Behavior Rating-Classroom Management (DBR-CM), an efficient and flexible measure of teacher CM practices in secondary school settings. Data were collected from 140 middle and high school classrooms. DBR-CM scores were significantly correlated with several scores on concurrently completed measures of CM, including those that rely on systematic direct observation and rating scales. Findings continue the accumulation of validity evidence addressing the extrapolation, generalization, and theory-based inferences underlying the interpretation and intended uses of the DBR-CM. Results are promising and build on previous DBR-CM validation work. Limitations and implications are discussed.
Citation: Assessment for Effective Intervention, Ahead of Print
PubDate: 2022-08-26T06:02:48Z
DOI: 10.1177/15345084221118316

Authors: María Reina Santiago-Rosario, Kent McIntosh, Sara A. Whitcomb
Abstract: This study examined teachers’ (N = 33; K-6) self-reports on the Culturally Responsive Classroom Management Self-Efficacy Scale (CRCMSE) in relation to observed classroom management practices (praise, opportunities to respond, and reprimands) and classroom-level student outcomes (correct academic responses, disruptive behavior, and office discipline referrals). Additionally, we explored the relation between CRCMSE ratings, observed classroom management practices, and racial equity in school discipline. Results showed that, on average, teachers rated their culturally responsive competencies moderately high. There were no significant associations between CRCMSE ratings and observed classroom practices or racial equity in discipline. However, the delivery of praise statements was strongly associated with racial equity. Possible implications for measuring cultural responsiveness using self-report are also discussed.
Citation: Assessment for Effective Intervention, Ahead of Print
PubDate: 2022-08-16T11:58:32Z
DOI: 10.1177/15345084221118090

Authors: Breda V. O’Keeffe, Kaitlin Bundock, Kristin Kladis, Kat Nelson
First page: 67
Abstract: Kindergarten reading screening measures typically identify many students as at risk who later meet criteria on important outcome measures (i.e., false positives). To address this issue, we evaluated a gated screening process that included accelerated progress monitoring, followed by a simple goal/reward procedure (skill vs. performance assessment, SPA), to distinguish between skill and performance difficulties on Phoneme Segmentation Fluency (PSF) and Nonsense Word Fluency (NWF) in a multiple baseline across students design. Nine kindergarten students scored below benchmark on PSF and/or NWF at the Middle-of-Year benchmark assessment. Across students and skills (n = 13 panels of the study), nine panels met/exceeded benchmark during baseline (suggesting additional exposure to the assessments was adequate), two exceeded benchmark during goal/reward procedures (suggesting adding a motivation component was adequate), and two required extended exposure to goal/reward or skill-based review to exceed the benchmark. Across panels, 12 of 13 skills were at/above the End-of-Year benchmark on PSF and/or NWF, suggesting lower risk than predicted by Middle-of-Year screening. Due to increasing baseline responding, experimental control was limited; however, these results suggest that simple progress monitoring may help reduce false positives after screening. Future research on this hypothesis is needed.
Citation: Assessment for Effective Intervention, Ahead of Print
PubDate: 2022-05-04T05:27:00Z
DOI: 10.1177/15345084221091173

Authors: Ethan R. Van Norman, Emily R. Forcht
First page: 80
Abstract: This study explored the validity of growth on two computer adaptive tests, Star Reading and Star Math, in explaining performance on an end-of-year achievement test for a sample of students in Grades 3 through 6. Results from quantile regression analyses indicate that growth on Star Reading explained a statistically significant amount of variance in performance on end-of-year tests after controlling for baseline performance in all grades. In Grades 3 through 5, the relationship between growth on Star Reading and the end-of-year test was stronger among students who scored higher on the end-of-year test. In math, growth on Star Math explained a statistically significant amount of variance in end-of-year scores after statistically controlling for baseline performance in all grades. The strength of the relationship did not differ among students who scored lower or higher on the end-of-year test across grades.
Citation: Assessment for Effective Intervention, Ahead of Print
PubDate: 2022-06-06T10:50:28Z
DOI: 10.1177/15345084221100421

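Quantile regression fits the same linear model at different conditional quantiles of the outcome, which is how a growth-score relationship can be shown to strengthen among higher-scoring students. A minimal sketch with statsmodels; the data and coefficients are simulated, not the study’s:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: fall baseline, within-year growth, end-of-year score.
rng = np.random.default_rng(42)
n = 300
baseline = rng.normal(500, 50, n)
growth = rng.normal(40, 15, n)
eoy = 0.8 * baseline + 1.5 * growth + rng.normal(0, 25, n)
df = pd.DataFrame({"baseline": baseline, "growth": growth, "eoy": eoy})

# Fit at several conditional quantiles; a growth coefficient that rises
# with q would mirror the pattern reported for Star Reading in Grades 3-5.
model = smf.quantreg("eoy ~ baseline + growth", df)
for q in (0.25, 0.50, 0.75):
    res = model.fit(q=q)
    print(f"q = {q:.2f}: growth coefficient = {res.params['growth']:.2f}")
```
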
Authors: Michael W. Bahr, Mary Edwin, Kara A. Long
First page: 90
Abstract: This study focused on the development of the Multi-Tiered Systems of Support–Sustainability Scale (MTSS-SS). A review of the literature identified factors associated with sustainability for multi-tiered systems of support (MTSS) and indicated that few sustainability measures currently exist for practitioners and researchers to incorporate into MTSS program evaluation. This study endeavored to create a brief measure of MTSS sustainability that could be used for evaluation in research and practice by school interventionists. Study participants included a national sample of 598 school counselors and school psychologists who worked as interventionists in schools using MTSS. This group completed the 10-item MTSS-SS. Outcomes from content validity ratings, principal axis factoring, internal consistency reliability analyses, and construct validity with known groups indicated the MTSS-SS possessed initial evidence of psychometric adequacy when used by interventionists from the disciplines of school counseling or school psychology. Discussion focuses on the use of the MTSS-SS, the need for further development, and study limitations.
Citation: Assessment for Effective Intervention, Ahead of Print
PubDate: 2022-09-02T10:12:22Z
DOI: 10.1177/15345084221119418

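The abstract reports internal consistency reliability analyses for the 10-item scale; Cronbach’s alpha is the usual such statistic, though the article may report others. A minimal self-contained computation on simulated (hypothetical) Likert-type responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 598 interventionists x 10 items on a 1-5 scale,
# driven by a single latent factor so the items cohere.
rng = np.random.default_rng(1)
latent = rng.normal(0, 1, (598, 1))
responses = np.clip(np.round(3 + latent + rng.normal(0, 0.8, (598, 10))), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```
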
Authors: Lindsay M. Fallon, Sadie C. Cathcart, Austin H. Johnson, Takuya Minami, Breda V. O’Keeffe, Emily R. DeFouw, George Sugai
First page: 100
Abstract: When students require support to improve outcomes in a variety of domains, educators provide youth with school-based intervention. When educators require support to improve their professional practice, school leaders and support personnel (e.g., school psychologists) provide teachers with professional development (PD), consultation, and coaching. This multi-study article describes how the Assessment of Culturally and Contextually Relevant Supports (ACCReS) was developed so that assessment can drive intervention for teachers in need of support to engage in culturally responsive practice. Items for the ACCReS were created via a multi-step process including review by both expert and practitioner panels. In Study 1, an exploratory factor analysis with a national sample of teachers (N = 500) yielded three subscales. In Study 2, a confirmatory factor analysis conducted with a separate sample of teachers (N = 400) produced adequate model fit. In Study 3, analyses with a final sample of teachers (N = 99) indicated preliminary evidence of convergent validity between the ACCReS and two measures of teacher self-efficacy for culturally responsive practice. Data from the ACCReS can shape the content of educator intervention (e.g., PD) and promote more equitable outcomes for youth.
Citation: Assessment for Effective Intervention, Ahead of Print
PubDate: 2022-08-08T06:53:22Z
DOI: 10.1177/15345084221111338

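The Study 1 analysis is an exploratory factor analysis. A minimal sketch with the factor_analyzer package; the item count, three-factor structure, and simulated responses are hypothetical stand-ins, not the actual ACCReS items:

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

# Hypothetical responses: 500 teachers x 20 items driven by 3 factors.
rng = np.random.default_rng(11)
latent = rng.normal(0, 1, (500, 3))
true_loadings = np.repeat(np.eye(3), (7, 7, 6), axis=0)  # 20 x 3 pattern
items = latent @ true_loadings.T + rng.normal(0, 0.7, (500, 20))
df = pd.DataFrame(items, columns=[f"item{i + 1}" for i in range(20)])

# Extract three factors with an oblique rotation, as is typical when
# subscales are expected to correlate; inspect the loading pattern.
fa = FactorAnalyzer(n_factors=3, rotation="oblimin")
fa.fit(df)
print(np.round(fa.loadings_, 2))
```
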
Authors: Nicole B. Wiggs, Linda A. Reddy, Ryan Kettler, Anh Hua, Christopher Dudek, Adam Lekwa, Briana Bronstein
First page: 113
Abstract: The Classroom Strategies Assessment System (CSAS) is a multi-rater, multi-method (direct observation and rating scale methodology) assessment of teachers’ use of research-based instructional and behavior management strategies. The present study investigated the association between teacher self-report and school administrator ratings using the CSAS Teacher (CSAS-T) and Observer (CSAS-O) Forms in 15 high-poverty charter schools. The CSAS-T and CSAS-O were designed to be used concurrently as a valid formative assessment of teacher practice. Findings include small but statistically significant correlations between the CSAS-T and CSAS-O. Analysis of a multi-trait–multi-method (MTMM) matrix indicated that teachers and observers were measuring different constructs. No mean score differences were found between teacher self-reported instruction and behavior management strategy use and school administrators’ observed ratings. Furthermore, school administrators and teachers gave similar ratings of overall effectiveness, with the majority of teachers in the sample rated at or above effective. Overall, findings offer support for using the CSAS-O and CSAS-T to guide professional development conversations.
Citation: Assessment for Effective Intervention, Ahead of Print
PubDate: 2022-07-29T10:31:24Z
DOI: 10.1177/15345084221112858