Authors: Meagan Z. Plant, Kelly N. Clark
Abstract: Assessment for Effective Intervention, Ahead of Print. The prevalence of student mental health concerns has increased the need for universal mental health screening to promote access to services. Some screeners determine risk status by comparing student scores to norming samples based on age (i.e., combined-gender) or on age and gender (i.e., separate-gender). This study examined scores on the Behavior Assessment System for Children–Third Edition, Behavioral and Emotional Screening System (BASC-3 BESS) using combined-gender and separate-gender norms for high school students (N = 594). There were no statistically significant differences in adolescents’ self-reported BASC-3 BESS raw scores or risk status classification across genders. These findings suggest that school teams are likely to identify students’ mental health status similarly, regardless of whether they use BESS separate-gender or combined-gender norms, although some students’ risk status is expected to vary. These findings have the potential to inform best practice recommendations for school-wide screenings of mental health and identification of students at risk. Additional implications, limitations, and future directions are discussed.
Citation: Assessment for Effective Intervention
PubDate: 2024-07-26T05:36:55Z
DOI: 10.1177/15345084241265632
Authors: Cherish M. Sarmiento, Adrea J. Truckenmiller
Abstract: Assessment for Effective Intervention, Ahead of Print. Educators and researchers have been interested in supporting sentence-level language comprehension for struggling readers, but it has been challenging to research. To investigate the properties of sentences that might be useful targets for future research in instruction and assessment, we coded several features of the items in a computer-adaptive scale of sentence comprehension. The Syntactic Knowledge Task is designed for students in Grades 3–10. We then explored how the features of the sentences were related to each item’s difficulty value to determine which aspects of sentence-level language made sentences more and less challenging for students across a range of development. We found that genre, words that represent a logical connection, number of idea units, long words, and words on the Academic Word List were significantly associated with item difficulty. Implications for understanding students’ sentence-level language development are discussed.
Citation: Assessment for Effective Intervention
PubDate: 2024-07-26T05:33:44Z
DOI: 10.1177/15345084241265620
Authors: Seohyeon Choi, Kristen L. McMaster, Erica S. Lembke, Manjary Guha
Abstract: Assessment for Effective Intervention, Ahead of Print. Teachers’ knowledge and skills about data-based instruction (DBI) can influence their self-efficacy and their implementation of DBI with fidelity, ultimately playing a crucial role in improving student outcomes. The purpose of this brief report is to provide evidence for the technical adequacy of a measure of DBI knowledge and skills in writing by examining its internal consistency reliability, considering different factor structures, and assessing item statistics using classical test theory and item response theory. We used responses from 154 elementary school teachers, primarily special educators, working with children with intensive early writing needs. Results from confirmatory factor analysis did not strongly favor either a one-factor solution, representing a single dimension of DBI knowledge and skills, or a two-factor solution, comprising knowledge and skills subscales. Internal consistency reliability coefficients were within an acceptable range, especially with the one-factor solution assumed. Item difficulty and discrimination estimates varied across items, suggesting the need to further investigate certain items. We discuss the potential of using the DBI Knowledge and Skills Assessment, specifically in the context of measuring teacher-level DBI outcomes in writing.
Citation: Assessment for Effective Intervention
PubDate: 2024-05-14T09:03:12Z
DOI: 10.1177/15345084241252369
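The item difficulty and discrimination estimates described in this abstract are standard classical test theory quantities: the proportion of respondents answering an item correctly, and the item's correlation with the rest of the scale. A minimal sketch, using entirely hypothetical 0/1 response data (nothing here is from the study):

```python
# Illustrative CTT item statistics on a hypothetical 0/1 response matrix
# (rows = respondents, columns = items). Not the article's data.

def item_difficulty(responses, item):
    """Proportion of respondents answering the item correctly (CTT p-value)."""
    return sum(r[item] for r in responses) / len(responses)

def item_discrimination(responses, item):
    """Point-biserial correlation between an item and the rest-score."""
    n = len(responses)
    x = [r[item] for r in responses]
    y = [sum(r) - r[item] for r in responses]  # rest-score excludes the item
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / n) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

data = [  # hypothetical responses: 1 = correct, 0 = incorrect
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
]

for i in range(4):
    print(f"item {i}: difficulty={item_difficulty(data, i):.2f}, "
          f"discrimination={item_discrimination(data, i):+.2f}")
```

In this framing, "difficulty" is actually an easiness proportion (higher means more respondents got it right), and low or negative discrimination is the kind of signal that would flag an item for further investigation.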
Authors: Minglee Yong
Abstract: Assessment for Effective Intervention, Ahead of Print. The use of a screening tool for school-wide screening of internalizing symptoms is an important strategy for early identification and prevention of more serious and impairing emotional and behavioral health problems in adolescents. However, threshold cut-off scores determined for screening tools may not be suitable for all populations. Using a sample of 237 Singaporean secondary school students, this study validated the Youth Internalizing Problems Screener (YIPS) for local use. Results of confirmatory factor analyses supported a one-factor solution for the construct. A threshold cut-off score of 27 was found to show good classification accuracy based on receiver operating characteristics (ROC) analyses. Correlational and path analyses provided evidence of convergent and predictive validity for using YIPS to indicate at-risk status. The YIPS status was uniquely associated with girls’ sense of school well-being over and above the nature of their interpersonal relationships and their sense of inadequacy. Overall, YIPS demonstrated comparable sensitivity and specificity rates even though a different cut-off score was used for this study sample. The use of YIPS as a screening tool in a multitier system of support and directions for future development are discussed.
Citation: Assessment for Effective Intervention
PubDate: 2024-04-28T04:22:53Z
DOI: 10.1177/15345084241247062
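The ROC-based cut-off selection this abstract describes is commonly operationalized by scanning candidate cut-offs and maximizing Youden's J (sensitivity + specificity − 1). A minimal sketch with hypothetical screener scores and criterion labels (not the study's data; that the example lands on 27 is a constructed coincidence):

```python
# Illustrative cut-off selection via Youden's J on hypothetical data.

def sens_spec(scores, labels, cutoff):
    """Sensitivity/specificity when score >= cutoff flags at-risk status."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

def best_cutoff(scores, labels):
    """Observed score with the largest Youden's J (sens + spec - 1)."""
    return max(sorted(set(scores)),
               key=lambda c: sum(sens_spec(scores, labels, c)) - 1)

# hypothetical screener scores and criterion labels (1 = at risk)
scores = [12, 18, 21, 24, 26, 27, 29, 31, 34, 38]
labels = [0,  0,  0,  0,  0,  1,  0,  1,  1,  1]

c = best_cutoff(scores, labels)
se, sp = sens_spec(scores, labels, c)
print(f"cut-off={c}, sensitivity={se:.2f}, specificity={sp:.2f}")
```

In practice the trade-off matters: a screener can favor sensitivity (fewer missed at-risk students) at the cost of specificity, which is one reason a cut-off validated on one population may not transfer to another.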
Authors: Alisha Wackerle-Hollman, Robin Hojnoski, Kristen Missall, Mohammed A. A. Abuela, Kristin Running
Abstract: Assessment for Effective Intervention, Ahead of Print. Early literacy skill development predicts later reading success, and development of skills in specific domains during the preschool years has been established as both a prerequisite and a precursor to reading. Early literacy assessments typically include measures of separate skills across domains, and results can assist with determining where instruction may be most needed. When multiple areas of need are identified, understanding which skills to prioritize can be a challenge. Therefore, empirically identifying the relative contribution of each skill measured in preschool to subsequent reading success can promote more efficient systems of assessment. This study, conducted in the United States, examined the predictive validity of early literacy skills measured in preschool compared to skills measured in kindergarten, with a specific practical focus on identifying the most efficient predictive model for understanding reading readiness. Participants were 119 preschoolers (mean age = 66 months) who mostly spoke English as their primary language (79%). Results indicated early literacy and language skills in preschool are highly predictive of early reading in kindergarten, accounting for 59% of the variance in a reading composite score. The most parsimonious model indicated that first sounds, letter sounds, early comprehension, and expressive vocabulary measures adequately explained 52% of the variance in children’s kindergarten reading performance.
Citation: Assessment for Effective Intervention
PubDate: 2024-04-26T11:34:00Z
DOI: 10.1177/15345084241247059
Authors: Matthew K. Burns
Abstract: Assessment for Effective Intervention, Ahead of Print. The current study meta-analyzed 27 effects from 21 studies to determine the effect that assessment of text difficulty had on reading fluency interventions, which resulted in an overall weighted effect size (ES) = 0.43 (95% CI = [0.25, 0.62], p < .001). Using reading passages that represented an instructional level based on accuracy criteria led to a large weighted effect (ES = 1.03, 95% CI = [0.65, 1.40], p < .01), which was reliably larger (p < .05) than that for reading fluency interventions that used reading passages with an instructional level based on rate criteria (weighted ES = 0.29, 95% CI = [0.07, 0.50], p < .01). Using reading passages based on leveling systems or those written at the students’ current grade level resulted in small weighted effects. The approach to determining difficulty for reading passages used in reading fluency interventions accounted for 11% of the variance in the effect (p < .05) beyond student group (no risk, at-risk, disability) and type of fluency intervention. The largest weighted effect was found for students with reading disabilities (ES = 1.14, 95% CI = [0.64, 1.65], p < .01).
Citation: Assessment for Effective Intervention
PubDate: 2024-04-18T10:40:14Z
DOI: 10.1177/15345084241247064
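The weighted effect sizes with 95% confidence intervals reported in this abstract are the typical output of inverse-variance pooling, where each study's effect is weighted by the reciprocal of its squared standard error. A minimal fixed-effect sketch with hypothetical study-level effects (not the meta-analysis data, which would also involve moderator and random-effects models):

```python
# Illustrative inverse-variance pooling of hypothetical effect sizes.
import math

def pooled_effect(effects, ses):
    """Fixed-effect inverse-variance weighted mean ES with a 95% CI."""
    weights = [1 / se ** 2 for se in ses]  # precision weights
    es = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))  # SE of the pooled estimate
    return es, es - 1.96 * se_pooled, es + 1.96 * se_pooled

# hypothetical study-level effects (e.g., Hedges' g) and standard errors
effects = [0.25, 0.60, 0.45, 0.30]
ses = [0.12, 0.20, 0.15, 0.10]

es, lo, hi = pooled_effect(effects, ses)
print(f"weighted ES = {es:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```

Precision weighting pulls the pooled estimate toward the studies with the smallest standard errors, which is why a pooled ES need not equal the simple mean of the study effects.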
Authors: Katie Scarlett Lane Pelton, Kathleen Lynne Lane, Wendy Peia Oakes, Mark Matthew Buckman, Nathan Allen Lane, Grant E. Allen, D. Betsy McCoach, David James Royer, Eric Alan Common
Abstract: Assessment for Effective Intervention, Ahead of Print. Educators across the United States have designed and implemented Comprehensive, Integrated, Three-tiered (Ci3T) models to meet K-12 students’ academic, behavioral, and social and emotional well-being needs. As part of implementation efforts, educators collect and use social validity and treatment integrity data to capture faculty and staff views of the plan’s goals, procedures, and outcomes and the degree to which the plan is implemented as designed (e.g., procedures for teaching, reinforcing, and monitoring). In this study, we re-examined the relation between social validity and treatment integrity utilizing hierarchical linear modeling with extant data from a research partnership across 27 schools in five midwestern districts. Findings suggested an educator’s fall and spring social validity score on the Primary Intervention Rating Scale (PIRS) predicted their treatment integrity scores on the Ci3T Treatment Integrity: Teacher Self-Report (Ci3T TI: TSR) at the same timepoint. Schoolwide average fall PIRS scores also statistically significantly predicted spring Ci3T TI: TSR scores. Results suggested schoolwide context is important for sustained implementation of Tier 1 procedures during the first year. Findings demonstrate the complex nature of implementing a schoolwide plan, involving each individual’s behavior while also relying on others to facilitate implementation. We discuss limitations and future directions.
Citation: Assessment for Effective Intervention
PubDate: 2024-04-02T12:52:43Z
DOI: 10.1177/15345084241239302
Authors: Nathan A. Stevenson, Aarti P. Bellara
Abstract: Assessment for Effective Intervention, Ahead of Print. By tradition, editors of Assessment for Effective Intervention (AEI) typically serve 3-year terms. As of January 1, 2024, AEI officially transitioned from outgoing editor Dr. Leanne Ketterlin Geller to incoming co-editors Drs. Aarti Bellara and Nathan Stevenson. The following article describes the recent history and current state of AEI as a peer-reviewed scientific journal. The new editorial team describes some of the challenges ahead and their vision for the future of AEI.
Citation: Assessment for Effective Intervention
PubDate: 2024-03-22T12:52:58Z
DOI: 10.1177/15345084241240403
Authors: Angus Kittelman, Sara Izzard, Kent McIntosh, Kelsey R. Morris, Timothy J. Lewis
Abstract: Assessment for Effective Intervention, Ahead of Print. The purpose of this study was to evaluate the psychometric properties of the Self-Assessment Survey (SAS) 4.0, an updated measure assessing implementation fidelity of positive behavioral interventions and supports (PBIS). A total of 627 school personnel from 33 schools in six U.S. states completed the SAS 4.0 during the 2021–2022 school year. We evaluated data demonstrating the measure’s reliability (internal consistency, interrater reliability between PBIS team and non-team members), internal structure, and convergent validity for assessing implementation of Tier 1, 2, and 3 systems. We found strong internal consistency (overall and across subscales) and evidence regarding the internal structure as a four-factor measure. In addition, we found the SAS 4.0 (overall score and subscales) to be statistically significantly correlated with another widely used and empirically evaluated PBIS fidelity measure, the Tiered Fidelity Inventory (TFI). We found a statistically significant correlation between the SAS 4.0 and the SAS 3.0 for the Schoolwide Systems subscale but not other subscales. We discuss limitations given the current sample and describe implications for how PBIS teams can use the measure for school improvement and decision making.
Citation: Assessment for Effective Intervention
PubDate: 2024-03-04T04:54:08Z
DOI: 10.1177/15345084241235226
Authors: Kirsten J. Truman, Ethan R. Van Norman, David A. Klingbeil, Madeline C. Schmitt, Peter M. Nelson, David C. Parker
Abstract: Assessment for Effective Intervention, Ahead of Print. Relatively little is known regarding post-intervention reading fluency outcomes for English learners (ELs) in comparison with non-EL peers, yet educators must be prepared to sustain growth for all students transitioning to less-intensive tiers of support. The purpose of this study was to investigate whether EL status moderated post-intervention maintenance effects among second- and third-grade students who transitioned back to Tier 1 instruction only due to successful performance during a Tier 2 reading fluency intervention. Piecewise multi-level models were estimated to address whether EL status uniquely predicted intervention growth patterns and the extent to which these patterns were maintained over a 12- to 13-week post-intervention period. Reading fluency scores were similar between EL and non-EL students prior to the start of and during the intervention, and all students’ performance decreased slightly immediately after support ceased. Regardless of grade level or EL status, post-intervention fluency gains generally remained smaller than those observed during intervention, meriting attention to individual- and systems-level instructional considerations for ensuring continued growth.
Citation: Assessment for Effective Intervention
PubDate: 2024-01-25T12:08:56Z
DOI: 10.1177/15345084241226593
Authors: Malena A. Nygaard, Heather E. Ormiston, Hallie Enderle
Abstract: Assessment for Effective Intervention, Ahead of Print. Limited research has examined the impact of Bounce Back (BB), a trauma-focused intervention for elementary-age students, on student academic engagement and daily classroom behavior. This study utilized both ongoing direct and indirect measures of student functioning to evaluate student progress and inform the implementation of BB. Participants were six students (four male and two female). We employed an AB single-subject design across cases with follow-up, and we collected data via the Direct Behavior Rating-Single Item Scale (DBR). We also employed a quasi-experimental pretest–posttest design using the Strengths and Difficulties Questionnaire (SDQ), Social, Academic, and Emotional Behavior Risk Screener (SAEBRS), and mySAEBRS across raters before/after intervention implementation. We found either direct or indirect assessment could be used to monitor intervention effectiveness for youth who presented with some internalizing and externalizing problems, whereas indirect measures (i.e., rating scales) showed the primary effect for youth with solely internalizing problems, and direct assessment (i.e., DBR) showed the primary effect for youth with predominantly externalizing behaviors. Selecting progress-monitoring tools based on presenting concerns is important to adequately monitor the effectiveness of school-based mental health interventions. Implications for practice are discussed.
Citation: Assessment for Effective Intervention
PubDate: 2024-01-05T10:37:22Z
DOI: 10.1177/15345084231218614
Authors: Richard P. Zipoli, Sujini Ramachandar
Abstract: Assessment for Effective Intervention, Ahead of Print. Assessments of oral reading are widely used for screening, progress monitoring, and comprehensive evaluations. Despite the utility and technical adequacy of these tools, there are subgroups of students for whom measures of oral reading may be inappropriate. The first section of this article focuses on how tests of oral reading may underestimate word reading ability and reading fluency among four subgroups of students with speech, language, or learning difficulties. These include school-age students who demonstrate word-finding difficulties (which are common among students with a learning disability or developmental language disorder), developmental stuttering, childhood apraxia of speech, and pediatric dysarthria. The second section offers practical recommendations for more accurate assessment procedures, correct placement decisions, relevant professional learning activities, and strategic interdisciplinary teaming.
Citation: Assessment for Effective Intervention
PubDate: 2023-12-24T10:30:20Z
DOI: 10.1177/15345084231220526
Authors: Xin Lin, Sarah R. Powell
Abstract: Assessment for Effective Intervention, Ahead of Print. Developing mathematics proficiency requires an understanding of mathematics vocabulary. Although previous research has developed several measures of mathematics vocabulary at different grade levels, no study has focused solely on fraction vocabulary. We developed and tested a measure of fraction vocabulary for students in Grade 4 to determine the internal consistency and difficulty level of such a measure. Analysis indicated the measure demonstrated high internal consistency. Students, on average, answered less than one-third of fraction vocabulary terms correctly. We also detected performance differences between students with and without mathematics difficulty, and between dual-language learners and their peers.
Citation: Assessment for Effective Intervention
PubDate: 2023-10-16T06:16:33Z
DOI: 10.1177/15345084231202407