Authors:Ethan R. Van Norman, Emily R. Forcht Abstract: Assessment for Effective Intervention, Ahead of Print. This study explored the validity of growth on two computer adaptive tests, Star Reading and Star Math, in explaining performance on an end-of-year achievement test for a sample of students in Grades 3 through 6. Results from quantile regression analyses indicate that growth on Star Reading explained a statistically significant amount of variance in performance on end-of-year tests after controlling for baseline performance in all grades. In Grades 3 through 5, the relationship between growth on Star Reading and the end-of-year test was stronger among students who scored higher on the end-of-year test. In math, Star Math explained a statistically significant amount of variance in end-of-year scores after statistically controlling for baseline performance in all grades. The strength of the relationship did not differ among students who scored lower or higher on the end-of-year test across grades. Citation: Assessment for Effective Intervention PubDate: 2022-06-06T10:50:28Z DOI: 10.1177/15345084221100421
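The quantile regression analyses described above fit conditional quantiles rather than the conditional mean, by minimizing an asymmetric "check" (pinball) loss. As a rough illustration of that loss, a minimal numpy sketch with made-up data (not the study's):

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Check (pinball) loss minimized by quantile regression at quantile tau.
    Under-predictions (positive residuals) are weighted by tau,
    over-predictions by (1 - tau)."""
    resid = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return np.mean(np.where(resid >= 0, tau * resid, (tau - 1) * resid))

# The constant that minimizes the pinball loss is the tau-th sample quantile:
y = np.arange(101.0)
grid = np.arange(101.0)
losses = [pinball_loss(y, np.full_like(y, c), 0.9) for c in grid]
best = grid[int(np.argmin(losses))]  # lands at the 90th percentile of y
```

This is why fitting the model at several values of tau lets a study compare the strength of a predictor among lower- and higher-scoring students, as done here.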
Authors:Evan J. Basting, Shereen Naser, Elizabeth A. Goncy Abstract: Assessment for Effective Intervention, Ahead of Print. The BASC-3 Behavioral and Emotional Screening System Student Form (BESS SF) is the latest iteration of a widely used instrument for identifying students at behavioral and emotional risk. Measurement invariance across race/ethnicity and gender for the latest BESS SF has not yet been established. Using a sample of 737 urban fourth- to eighth-grade students, we tested competing models of the BESS SF to determine the best-fitting factor structure. We also tested for measurement equivalence by race/ethnicity (i.e., White, Black, Latinx) and gender (i.e., boys, girls). Consistent with prior findings, we identified that a bifactor structure of the BESS SF best fit the data and supported measurement equivalence across race/ethnicity and gender. These findings provide further support for using the BESS SF to conduct universal behavioral and emotional screening among diverse students. More research is needed in schools serving students with greater racial/ethnic and socioeconomic diversity. Citation: Assessment for Effective Intervention PubDate: 2022-05-19T10:50:17Z DOI: 10.1177/15345084221095440
Authors:Parmaksiz Leonid, Tatjana Kanonire Abstract: Assessment for Effective Intervention, Ahead of Print. The Rasch/Guttman scenario (RGS) measurement approach is a promising test development methodology. The purpose of this study is to compare the RGS measure of primary school students’ motivation against more traditional self-report scales. The Scenario Scale of Extrinsic Motivation toward Math (SSEM-M) and its traditional counterpart were developed. The sample consisted of 1,299 primary school students. Both measures demonstrated solid psychometric properties and sound evidence of validity. The comparative part of the research revealed notable differences in scores and factor structure. Scenario item composition appears to provide a slightly better motivation measurement than traditional composition. Further research considering response style and social desirability effects may be of interest. Citation: Assessment for Effective Intervention PubDate: 2022-05-19T10:48:19Z DOI: 10.1177/15345084221091172
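The Rasch model underlying the RGS approach expresses the probability of a given item response as a logistic function of the difference between person ability and item difficulty (both in logits). A minimal sketch, with illustrative parameter values only:

```python
import math

def rasch_prob(theta, b):
    """Rasch (1PL) model: probability of endorsing an item,
    P = exp(theta - b) / (1 + exp(theta - b)),
    where theta is person ability and b is item difficulty."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))
```

A person whose ability matches the item's difficulty has a 50% endorsement probability; higher ability (or an easier item) raises it.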
Authors:Breda V. O’Keeffe, Kaitlin Bundock, Kristin Kladis, Kat Nelson Abstract: Assessment for Effective Intervention, Ahead of Print. Kindergarten reading screening measures typically identify many students as at-risk who later meet criteria on important outcome measures (i.e., false positives). To address this issue, we evaluated a gated screening process that included accelerated progress monitoring, followed by a simple goal/reward procedure (skill vs. performance assessment, SPA) to distinguish between skill and performance difficulties on Phoneme Segmentation Fluency (PSF) and Nonsense Word Fluency (NWF) in a multiple baseline across students design. Nine kindergarten students scored below benchmark on PSF and/or NWF at the Middle of Year benchmark assessment. Across students and skills (n = 13 panels of the study), nine met/exceeded benchmark during baseline (suggesting additional exposure to the assessments was adequate), two exceeded benchmark during goal/reward procedures (suggesting adding a motivation component was adequate), and two required extended exposure to goal/reward or skill-based review to exceed the benchmark. Across panels of the baseline, 12 of 13 skills were at/above the End-of-Year benchmark on PSF and/or NWF, suggesting lower risk than predicted by Middle-of-Year screening. Due to increasing baseline responding, experimental control was limited; however, these results suggest that simple progress monitoring may help reduce false positives after screening. Future research on this hypothesis is needed. Citation: Assessment for Effective Intervention PubDate: 2022-05-04T05:27:00Z DOI: 10.1177/15345084221091173
Authors:Katherine A. Koller, Robin L. Hojnoski, Ethan R. Van Norman Abstract: Assessment for Effective Intervention, Ahead of Print. A strong foundation in early literacy supports children’s academic pursuits and impacts personal, social, and economic outcomes. Therefore, examining the adequacy of early literacy assessments as predictors of future performance on important outcomes is critical for identifying students at risk of reading problems. This study explored the predictive validity of preschoolers’ literacy skills, measured in the spring with the Individual Growth and Development Indicators 2.0 (IGDIs 2.0), in relation to performance in the fall and winter of kindergarten as assessed by the Dynamic Indicators of Basic Early Literacy Skills Next Edition (DIBELS Next), using Pearson product-moment correlations. In addition, the classification accuracy of student performance on the IGDIs 2.0 measures against the publisher-identified benchmark scores on the DIBELS Next assessment in kindergarten was examined by calculating sensitivity, specificity, positive and negative predictive power, overall correct classification, and kappa. Participants included 537 children from ethnically diverse backgrounds enrolled in an urban school district in the northeast. Results indicated small to moderate relations between the individual IGDIs 2.0 tasks and DIBELS Next measures. Classification accuracy of student performance on the IGDIs 2.0 measures against the publisher-identified benchmark score on the DIBELS Next composite in the fall and winter of kindergarten revealed inadequate levels of sensitivity; however, locally derived cut-scores improved sensitivity and specificity. Citation: Assessment for Effective Intervention PubDate: 2022-03-05T05:36:13Z DOI: 10.1177/15345084221081091
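All of the classification indices named above derive from a 2x2 table crossing screener decisions with benchmark outcomes. A minimal sketch with hypothetical cell counts (not the study's data):

```python
def screening_metrics(tp, fp, fn, tn):
    """Classification accuracy of a screener against an outcome benchmark.
    tp = flagged at risk and below benchmark on the outcome, fp = flagged
    but met benchmark, fn = missed, tn = correctly cleared."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)          # sensitivity: true risk cases caught
    spec = tn / (tn + fp)          # specificity: non-risk cases cleared
    ppv = tp / (tp + fp)           # positive predictive power
    npv = tn / (tn + fn)           # negative predictive power
    occ = (tp + tn) / n            # overall correct classification
    # Cohen's kappa: agreement corrected for chance, from the marginals
    pe = ((tp + fp) / n) * ((tp + fn) / n) + ((fn + tn) / n) * ((fp + tn) / n)
    kappa = (occ - pe) / (1 - pe)
    return {"sensitivity": sens, "specificity": spec, "ppv": ppv,
            "npv": npv, "occ": occ, "kappa": kappa}

m = screening_metrics(tp=40, fp=10, fn=10, tn=40)
```

Lowering a cut-score trades fp for fn, which is how locally derived cut-scores can rebalance sensitivity and specificity.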
Authors:Lia E. Sandilos, James C. DiPerna Abstract: Assessment for Effective Intervention, Ahead of Print. The creation of psychometrically sound assessments of teacher well-being is critical given the alarmingly high rates of teacher burnout reported among U.S. educators. The present study sought to address this need by developing the Measures of Stressors and Supports for Teachers (MOST), a teacher-report questionnaire designed to assess ecological and psychological factors that affect teachers’ professional well-being. To assess structural validity, the MOST was administered to a sample of K–12 educators (N = 218). Item analyses based on classical test theory and an exploratory factor analysis were conducted to examine the items and assess the factor structure of the MOST. Factor analytic findings yielded a 40-item, nine-factor structure (Parents, Colleagues, School Leadership and Belonging, Classroom Students, Students With Disabilities, Time Pressure, Professional Development, Safety, and Emotional State). Implications for further validation and use of the MOST are discussed. Citation: Assessment for Effective Intervention PubDate: 2022-03-03T05:01:06Z DOI: 10.1177/15345084211061338
Authors:Amna A. Agha, Adrea J. Truckenmiller, Jodene G. Fine, Megan Perreault Abstract: Assessment for Effective Intervention, Ahead of Print. The development of written expression includes transcription, text generation, and executive functions (including planning) interacting within working memory. However, executive functions are not formally measured in school-based written expression tasks although there is an opportunity for examining students’ advance planning—a key manifestation of executive functions. We explore the influence of advance planning on Grade 2 written expression using curriculum-based measurement in written expression (CBM-WE) probes with a convenience sample of 126 students in six classrooms. Controlling for transcription, which is typically the primary focus of instruction in early elementary grades, we found that a score on advance planning explained additional significant variance in writing quantity and accuracy. Results support that planning may be an additional score to add to the use of CBM-WE. Implications for assessment and further research on the early development of planning and executive functions related to written expression are explored. Citation: Assessment for Effective Intervention PubDate: 2022-02-03T11:01:09Z DOI: 10.1177/15345084211073601
Authors:Sofia O. Major, Maria J. Seabra-Santos, Roy P. Martin Abstract: Assessment for Effective Intervention, Ahead of Print. The early identification of social-emotional and behavioral problems of preschool children has become an important goal in research and clinical practice. A growing number of studies have been published in this field; however, most focus on behavior problems, or on social skills, but few on both. The present study aims to test the validity of the Portuguese version of the Preschool and Kindergarten Behavior Scales–Second Edition (PKBS-2) in differentiating two groups of preschoolers regarding their social skills and behavior problems: 41 children at risk for disruptive behavior (BP group) and 41 selected from the PKBS-2 normative sample (comparison group). Each child was rated with the PKBS-2 by parents and teachers. Results showed that children in the BP group were rated by their parents as having fewer social skills and more behavior problems than the comparison group (p < .01, for the majority of the PKBS-2 scores). A similar pattern was found for teachers’ ratings. The discriminant function analysis highlighted the Social Cooperation and the Externalizing Problem Behavior subscales as most accurate in differentiating the two groups. The usefulness of the PKBS-2 Portuguese version as a valid assessment tool available for practice and research with preschoolers was supported. Citation: Assessment for Effective Intervention PubDate: 2022-02-01T05:57:28Z DOI: 10.1177/15345084211073604
Authors:Meaghan McKenna, Robert F. Dedrick, Howard Goldstein Abstract: Assessment for Effective Intervention, Ahead of Print. This article describes the development of the Early Elementary Writing Rubric (EEWR), an analytic assessment designed to measure kindergarten and first-grade writing and inform educators’ instruction. Crocker and Algina’s (1986) approach to instrument development and validation was used as a guide to create and refine the writing measure. Study 1 describes the development of the 10-item measure (response scale ranges from 0 = Beginning of Kindergarten to 5 = End of First Grade). Educators participated in focus groups, expert panel review, cognitive interviews, and pretesting as part of the instrument development process. Study 2 evaluates measurement quality in terms of score reliability and validity. Data from writing samples produced by 634 students in kindergarten and first-grade classrooms were collected during pilot testing. An exploratory factor analysis was conducted to evaluate the psychometric properties of the EEWR. A one-factor model fit the data for all writing genres and all scoring elements were retained with loadings ranging from 0.49 to 0.92. Internal consistency reliability was high and ranged from .89 to .91. Interrater reliability between the researcher and participants varied from poor to good and means ranged from 52% to 72%. First-grade students received higher scores than kindergartners on all 10 scoring elements. The EEWR holds promise as an acceptable, useful, and psychometrically sound measure of early writing. Further iterative development is needed to fully investigate its ability to accurately identify the present level of student performance and to determine sensitivity to developmental and instruction gains. Citation: Assessment for Effective Intervention PubDate: 2021-12-31T01:53:06Z DOI: 10.1177/15345084211065977
Authors:Trude Nergård-Nilssen, Oddgeir Friborg Abstract: Assessment for Effective Intervention, Ahead of Print. This article describes the development and psychometric properties of a new Dyslexia Marker Test for Children (Dysmate-C). The test was designed to identify Norwegian students who need special instructional attention. The computerized test includes measures of letter knowledge, phoneme awareness, rapid automatized naming, working memory, decoding, and spelling skills. Data were collected from a sample of more than 1,100 students. Item response theory (IRT) was used for the psychometric evaluation, and principal component analysis for checking unidimensionality. IRT was further used to select and remove items, which significantly shortened the test battery without sacrificing reliability or discriminating ability. Cronbach’s alphas ranged between .84 and .95. Validity was established by examining how well the Dysmate-C identified students already diagnosed with dyslexia. Logistic regression and receiver operating characteristic (ROC) curve analyses indicated good to excellent accuracy in separating children with dyslexia from typically developing children (area under curve [AUC] = .92). The Dysmate-C meets the standards for reliability and validity. The use of regression-based norms, voice-over instructions, easy scoring procedures, accurate timing, and automatic computation of scores makes the test a useful tool. It may be used as part of a screening procedure, and as part of a diagnostic assessment. Limitations and practical implications are discussed. Citation: Assessment for Effective Intervention PubDate: 2021-12-29T05:27:30Z DOI: 10.1177/15345084211063533
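The AUC reported in ROC analyses like this one has a useful probabilistic reading: it is the chance that a randomly chosen affected child scores higher on the risk index than a randomly chosen unaffected child (ties counting half). A small numpy sketch with made-up scores (not the study's data):

```python
import numpy as np

def auc_mann_whitney(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney formulation:
    P(random positive case scores above random negative case),
    with ties contributing 0.5."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    greater = (pos[:, None] > neg[None, :]).sum()   # all pairwise wins
    ties = (pos[:, None] == neg[None, :]).sum()     # all pairwise ties
    return (greater + 0.5 * ties) / (pos.size * neg.size)
```

An AUC of .5 means the index carries no discriminating information; .92, as reported here, is conventionally read as excellent discrimination.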
Authors:Benjamin G. Solomon, Ole J. Forsberg, Monelle Thomas, Brittney Penna, Katherine M. Weisheit Abstract: Assessment for Effective Intervention, Ahead of Print. Bayesian regression has emerged as a viable alternative for the estimation of curriculum-based measurement (CBM) growth slopes. Preliminary findings suggest such methods may yield improved efficiency relative to other linear estimators and can be embedded into data management programs for high-frequency use. However, additional research is needed, as Bayesian estimators require multiple specifications of the prior distributions. The current study evaluates the accuracy of several combinations of prior values, including three distributions of the residuals, two values of the expected growth rate, and three possible values for the precision of slope when using Bayesian simple linear regression to estimate fluency growth slopes for reading CBM. We also included traditional ordinary least squares (OLS) as a baseline contrast. Findings suggest that the prior specification for the residual distribution had, on average, a trivial effect on the accuracy of the slope. However, specifications for growth rate and precision of slope were influential, and virtually all variants of Bayesian regression evaluated were superior to OLS. Converging evidence from both simulated and observed data now suggests Bayesian methods outperform OLS for estimating CBM growth slopes and should be strongly considered in research and practice. Citation: Assessment for Effective Intervention PubDate: 2021-08-30T08:41:48Z DOI: 10.1177/15345084211040219
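To give a flavor of why prior specification matters here, consider the simplest conjugate case: a normal prior on the growth slope with a known residual variance (the study's models are richer, and the priors and data below are hypothetical). The posterior mean is a precision-weighted compromise between the prior mean and the OLS estimate:

```python
import numpy as np

def bayes_slope(x, y, prior_mean, prior_precision, resid_var):
    """Posterior mean and precision of a simple linear regression slope
    under a conjugate normal prior, assuming known residual variance.
    Centering x and y handles the intercept implicitly."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc = x - x.mean()
    yc = y - y.mean()
    sxx = (xc ** 2).sum()
    sxy = (xc * yc).sum()
    ols = sxy / sxx                                   # OLS slope, for contrast
    post_prec = prior_precision + sxx / resid_var     # precisions add
    post_mean = (prior_precision * prior_mean + sxy / resid_var) / post_prec
    return post_mean, post_prec, ols

x = np.arange(10.0)          # e.g., weeks of progress monitoring
y = 2.0 * x                  # noiseless toy scores growing 2 units/week
```

With a vague prior the posterior mean collapses to OLS; a tight prior centered on a typical growth rate shrinks noisy slopes toward it, which is the mechanism behind the efficiency gains described above.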
Authors:Marika King, Anne L. Larson, Jay Buzhardt Abstract: Assessment for Effective Intervention, Ahead of Print. Few, if any, reliable and valid screening tools exist to identify language delay in young Spanish–English speaking dual-language learners (DLLs). The early communication indicator (ECI) is a brief, naturalistic measure of expressive communication development designed to inform intervention decision-making and progress monitoring for infants and toddlers at-risk for language delays. We assessed the accuracy of the ECI as a language-screening tool for DLLs from Latinx backgrounds by completing classification accuracy analysis on 39 participants who completed the ECI and a widely used standardized reference, the Preschool Language Scales, 5th Edition, Spanish (PLS-5 Spanish). Sensitivity of the ECI was high, but the specificity was low, resulting in low classification accuracy overall. Given the limitations of using standalone assessments as a reference for DLLs, a subset of participants (n = 22) completed additional parent-report measures related to identification of language delay. Combining the ECI with parent-report data, the sensitivity of the ECI remained high, and the specificity improved. Findings show preliminary support for the ECI as a language-screening tool, especially when combined with other information sources, and highlight the need for validated language assessment for DLLs from Latinx backgrounds. Citation: Assessment for Effective Intervention PubDate: 2021-06-30T06:18:51Z DOI: 10.1177/15345084211027138
Authors:Christopher L. Thomas, Staci M. Zolkoski, Sarah M. Sass First page: 127 Abstract: Assessment for Effective Intervention, Ahead of Print. Educators and educational support staff are becoming increasingly aware of the importance of systematic efforts to support students’ social and emotional growth. Logically, the success of social-emotional learning programs depends upon the ability of educators to assess students’ ability to process and utilize social-emotional information and to use data to guide programmatic revisions. Therefore, the purpose of the current examination was to provide evidence of the structural validity of the Social-Emotional Learning Scale (SELS), a freely available measure of social-emotional learning, within Grades 6 to 12. Students (N = 289, 48% female, 43.35% male, 61% Caucasian) completed the SELS and the Strengths and Difficulties Questionnaire. Confirmatory factor analyses of the SELS failed to support a multidimensional factor structure identified in prior investigations. The results of an exploratory factor analysis suggest a reduced 16-item version of the SELS captures a unidimensional social-emotional construct. Furthermore, our results provide evidence of the internal consistency and concurrent validity of the reduced-length version of the instrument. Our discussion highlights the implications of the findings for social and emotional learning efforts and for promoting evidence-based practice. Citation: Assessment for Effective Intervention PubDate: 2021-01-06T11:32:13Z DOI: 10.1177/1534508420984522
Authors:Jacqueline Huscroft-D’Angelo, Jessica Wery, Jodie D. Martin-Gutel, Corey Pierce, Kara Loftin First page: 137 Abstract: Assessment for Effective Intervention, Ahead of Print. The Scales for Assessing Emotional Disturbance Screener–Third Edition (SAED-3) is a standardized, norm-referenced measure designed to identify school-aged students at risk of emotional and behavioral problems. Four studies are reported to address the psychometric status of the SAED-3 Screener. Study 1 examined the internal consistency of the Screener using a sample of 1,430 students. Study 2 investigated the interrater reliability of the Screener results across 123 pairs of teachers who had worked with the student for at least 2 months. Study 3 assessed the extent to which the results from the Screener are consistent over time by examining test–retest reliability. Study 4 examined convergent validity by comparing the Screener to the Strengths and Difficulties Questionnaire (SDQ). Across all studies, samples were drawn from populations of students included in the nationally representative normative sample. The averaged coefficient alpha for the Screener was .88. The interrater reliability coefficient for the composite was .83. Test–retest reliability of the composite was .83. Correlations with the SDQ subscales ranged from .74 to .99, and the correlation of the Screener to the SDQ composite was .99. Limitations and implications for use of the Screener are discussed. Citation: Assessment for Effective Intervention PubDate: 2021-07-14T06:28:10Z DOI: 10.1177/15345084211030840
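Coefficient alpha, the internal-consistency index reported in several of these abstracts, can be computed directly from a respondents-by-items score matrix; a minimal numpy sketch with toy data (not any study's scores):

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score),
    using sample variances (ddof=1)."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)
```

Perfectly parallel items yield alpha = 1; items that share little variance pull it toward 0.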
Authors:Allison R. Lombardi, Graham G. Rifenbark, Marcus Poppen, Kyle Reardon, Valerie L. Mazzotti, Mary E. Morningstar, Dawn Rowe, Sheida K. Raley First page: 147 Abstract: Assessment for Effective Intervention, Ahead of Print. In this study, we examined the structural validity of the Secondary Transition Fidelity Assessment (STFA), a measure of secondary schools’ use of programs and practices demonstrated by research to lead to meaningful college and career outcomes for all students, including students at-risk for or with disabilities, and students from diverse backgrounds. Drawing from evidence-based practices endorsed by the National Technical Assistance Center for Transition and the Council for Exceptional Children’s Division on Career Development and Transition, the instrument development and refinement process was iterative and involved collecting stakeholder feedback and pilot testing. Responses from a national sample of educators (N = 1,515) were subject to an exploratory factor analysis resulting in five measurable factors: (a) Adolescent Engagement, (b) Inclusive and Tiered Instruction, (c) School-Family Collaboration, (d) District-Community Collaboration, and (e) Professional Capacity. The 5-factor model was subject to a confirmatory factor analysis which resulted in good model fit. Invariance testing on the basis of geographical region strengthened validity evidence and showed a high level of variability with regard to implementing evidence-based transition services. Findings highlight the need for consistent and regular use of a robust, self-assessment fidelity measure of transition service implementation to support all students’ transition to college and career. Citation: Assessment for Effective Intervention PubDate: 2021-05-25T08:31:32Z DOI: 10.1177/15345084211014942
Authors:Martin T. Peters, Karin Hebbecker, Elmar Souvignier First page: 157 Abstract: Assessment for Effective Intervention, Ahead of Print. Monitoring learning progress enables teachers to address students’ interindividual differences and to adapt instruction to students’ needs. We investigated whether using learning progress assessment (LPA) or using a combination of LPA and prepared material to help teachers implement assessment-based differentiated instruction resulted in improved reading skills for students. The study was conducted in second-grade classrooms in general primary education, and participants (N = 33 teachers and N = 619 students) were assigned to one of three conditions: a control group (CG); a first intervention group (LPA), which received LPA only; or a second intervention group (LPA-RS), which received a combination of LPA and material for differentiated reading instruction (the “reading sportsman”). At the beginning and the end of one school year, students’ reading fluency and reading comprehension were assessed. Compared with business-as-usual reading instruction (the CG), providing teachers with LPA or both LPA and prepared material did not lead to higher gains in reading competence. Furthermore, no significant differences between the LPA and LPA-RS conditions were found. Corresponding analyses for lower- and higher-achieving students also revealed no differences between the treatment groups. Results are discussed regarding the implementation of LPA and reading instruction in general education. Citation: Assessment for Effective Intervention PubDate: 2021-05-20T10:37:13Z DOI: 10.1177/15345084211014926
Authors:Jillian Dawes, Benjamin Solomon, Daniel F. McCleary, Cutler Ruby, Brian C. Poncy First page: 170 Abstract: Assessment for Effective Intervention, Ahead of Print. Research examining the precision of single-skill mathematics (SSM) curriculum-based measurements (CBMs) for progress monitoring is limited. Given the observed variance in administration conditions across current practice and research use, we examined potential differences between student responding and precision of slope when SSM-CBMs were administered individually and in group (classroom) conditions. No differences in student performance or measure precision were observed between conditions, indicating flexibility in the practical and research use of SSM-CBMs across administration conditions. In addition, findings contributed to the literature examining the stability of SSM-CBM slopes of progress when used for instructional decision-making. Implications for the administration and interpretation of SSM-CBMs in practice are discussed. Citation: Assessment for Effective Intervention PubDate: 2021-07-29T07:51:47Z DOI: 10.1177/15345084211035055
Authors:Børge Strømgren, Kalliu Carvalho Couto First page: 179 Abstract: Assessment for Effective Intervention, Ahead of Print. Norwegian schools are obliged to develop students’ social competences. The programs used are either School-Wide Positive Behavioral Interventions and Supports (PBIS) or classroom-based programs aimed at teaching students social-emotional learning (SEL) skills in a broad sense. Some rating scales have been used to assess the effect of SEL programs on SEL skills. We explored the Norwegian version of the 12-item Social Emotional Assets and Resilience Scales–Child–Short Form (SEARS-C-SF). An exploratory factor analysis (EFA) suggested a one-factor solution, which was confirmed by a confirmatory factor analysis (CFA). The scale reliability of .84 (λ2), the means and standard deviations, and the Tier levels were compared with those of the original short form. Finally, concurrent, discriminant, and convergent validity with different Strengths and Difficulties Questionnaire (SDQ) subscales was shown. Citation: Assessment for Effective Intervention PubDate: 2021-11-09T05:55:10Z DOI: 10.1177/15345084211055473