Abstract: Ecological momentary assessment (EMA) is a self-report method that involves intensive longitudinal assessment of behavior and environmental conditions during everyday activities. EMA has been used extensively in health and clinical psychology to investigate a variety of health behaviors, including substance use, eating, medication adherence, sleep, and physical activity. However, it has not been widely implemented in behavior analytic research, likely reflecting the empirically based skepticism with which behavioral scientists view self-report measures. We reviewed studies comparing electronic, mobile EMA (mEMA) to more objective measures of health behavior to explore the validity of mEMA as a measurement tool and to identify procedures and factors that may promote the accuracy of mEMA. We identified 32 studies that compared mEMA to more objective measures of health behavior or environmental events (e.g., biochemical measures or automated devices such as accelerometers). Results showed that correspondence varied considerably across individuals, behaviors, and studies (agreement rates ranged from 1.8% to 100%), and no unifying variables could be identified across the studies that found high correspondence. The findings suggest that mEMA can be an accurate measurement tool, but further research should be conducted to identify procedures and variables that promote accurate responding. PubDate: 2022-05-06
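The correspondence metric at issue is straightforward to compute. Below is a minimal sketch, assuming an interval-by-interval agreement measure on hypothetical binary records; the reviewed studies vary in how they defined agreement, so this is one illustrative convention, not the review's method.

```python
import numpy as np

# 1 = behavior reported/detected during the interval, 0 = not (hypothetical)
ema_report = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # participant self-reports
sensor     = np.array([1, 0, 1, 0, 0, 0, 1, 1])  # accelerometer record

agreement = 100 * np.mean(ema_report == sensor)
print(f"Interval-by-interval agreement: {agreement:.1f}%")  # 75.0% here
```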
Abstract: Although much has been written on the importance of translational research for bridging the continuum from basic science to clinical practice, few authors have described how such work can be carried out practically with patient populations in the context of ongoing clinical service delivery, where the priorities of patient care can sometimes conflict with the methods and goals of translational research. In this article, we explore some of the considerations for conducting this type of work while balancing clinical responsibilities that ensure high-quality patient care. We also discuss strategies we have found to jointly facilitate translational research and improve routine clinical service delivery. A primary goal of this article is to encourage others working in applied settings to contribute to the increasingly important role that translational research plays in our science and practice by helping to better characterize, and potentially lessen or remove, barriers that may have impeded such investigations in the past. PubDate: 2022-05-02
Abstract: Countercontrol is a Skinnerian operant concept positing that one individual's attempts to exert control over another person's behavior may evoke a countercontrolling response from the person being controlled, a response that functions to avoid or escape the potentially aversive conditions generated by the controller. Despite Skinner's historical concerns that countercontrol can hinder optimal societal growth and cultural evolution, the concept has not been widely applied within behavior analysis. Drawing from recent developments in rule-governed behavior and relational frame theory, this article seeks to explicate countercontrol from a contemporary behavior analytic perspective and presents several modern-day societal applications. In particular, a relational frame theory account of rule-governed behavior is used as a framework to elucidate the behavioral processes by which rule-following occurs (or fails to occur) in the context of countercontrol. Implications of a renewed focus on countercontrol for understanding pressing societal issues are also discussed. PubDate: 2022-04-28
Abstract: This article outlines a graduate-level course on the philosophical, conceptual, and historical (PCH) foundations of radical behaviorism, the philosophy of science that underlies behavior analysis. As described, the course spans a 15-week semester and is organized into weekly units. The units in the first half of the course are concerned with the influences of other viewpoints in the history of psychology on the development of behavior analysis and radical behaviorism. The units in the second half are concerned with the PCH foundations of eight basic dimensions of radical behaviorism. Throughout, a course examining the foundations of radical behaviorism is presented as compatible with related courses in the other three domains of behavior analysis—the experimental analysis of behavior, applied behavior analysis, and service delivery—and as integral to the education of all behavior analysts. PubDate: 2022-04-26
Abstract: The Argentine poet Jorge Luis Borges may have come closer than anyone else to envisioning a radical behavioristic aesthetics. What he said about poetry can be generalized to other art forms: poetry happens when someone reads a poem. Art, therefore, is the behavioral episode in which someone responds to the stimuli arranged by the artist. Because each person who comes into contact with a work of art has a different history with the work and its elements, responding will vary widely across persons and within the same person at different times. An essential feature of this history is the network of derived relations involving the elements of the artwork, and the transfer and transformation of behavioral functions across this network. PubDate: 2022-04-11
Abstract: Researchers and practitioners recognize four domains of behavior analysis: radical behaviorism, the experimental analysis of behavior, applied behavior analysis, and the practice of behavior analysis. Given the omnipresence of technology in every sphere of our lives, the purpose of this conceptual article is to describe and argue in favor of a fifth domain: machine behavior analysis. Machine behavior analysis is a science that examines how machines interact with and produce relevant changes in their external environment by relying on replicability, behavioral terminology, and the philosophical assumptions of behavior analysis (e.g., selectionism, determinism, parsimony) to study artificial behavior. Arguments in favor of a science of machine behavior include the omnipresence and impact of machines on human behavior, the inability of engineering alone to explain and control machine behavior, and the need to organize a verbal community of scientists around this common issue. Regardless of whether behavior analysts agree or disagree with this proposal, I argue that the field needs a debate on the topic. As such, the current article aims to encourage and contribute to this debate. PubDate: 2022-03-31
Abstract: Ethically, behavior analysts are required to use the least aversive and restrictive procedures capable of managing behaviors of concern. This article introduces and discusses a multi-element paradigm for devising support plans that include ecological, positive-programming, and focused-support proactive strategies for reducing the frequency of problem behavior. The paradigm also includes reactive strategies, which are treated as separate independent variables aimed solely at gaining rapid, safe control over an incident, thereby reducing measured and quantified episodic severity. Behavior analysts who publish in mainstream behavioral journals do not always make explicit how they successfully employed non-aversive reactive procedures to achieve rapid, safe control over the severity of a behavioral incident. Three published studies from the behavioral literature that successfully, though only implicitly, used non-aversive reactive strategies (NARS) to reduce the severity of behaviors of concern are described. The multi-element paradigm discussed in the present article is illustrated by support plans addressing the challenging behavior of three children in a preschool setting, using both proactive and reactive strategies: reactive strategies were used to reduce episodic severity (ES), and proactive strategies were aimed at reducing the frequency of occurrence. Following a comprehensive functional analysis and assessment (CFA) and the implementation of a multi-element behavior support (MEBS) plan, results showed successful outcomes without the need for any aversive or restrictive procedures. When addressing severe behaviors of concern, in addition to reducing behavioral occurrence, safety should also be improved by reducing ES as a measured outcome and as a function of the reactive strategies employed, including, in many cases, the use of strategic capitulation, i.e., providing the identified reinforcer for the target behavior. PubDate: 2022-03-25
Abstract: This study investigated the power of two-level hierarchical linear modeling (HLM) to explain variability in intervention effectiveness between participants in the context of single-case experimental design (SCED) research. HLM is a flexible technique that allows the inclusion of participant characteristics (e.g., age, gender, and disability type) as moderators, and as such supplements visual analysis. First, this study empirically investigated the power to estimate intervention and moderator effects using Monte Carlo simulation. The results indicate that larger true effects and larger numbers of participants yield higher power, and that the more moderators are added to the model, the more participants are needed to detect the effects with sufficient power (i.e., power ≥ .80). When a model includes three moderators, at least 20 participants are required to capture the intervention and moderator effects with sufficient power; for the same condition with only one moderator, seven participants suffice. Specific recommendations for designing a SCED study with sufficient power to estimate intervention and moderator effects are provided. Second, this study introduces a newly developed, user-friendly point-and-click Shiny tool, PowerSCED, which assists applied SCED researchers in designing a study with sufficient power to detect intervention and moderator effects. Finally, the use of HLM with moderators is demonstrated using two SCED studies previously published in the journal School Psychology Quarterly. PubDate: 2022-03-01
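As a rough illustration of the Monte Carlo approach described above, the sketch below simulates two-level SCED data with one participant-level moderator and estimates power by refitting a mixed model across replications. The data-generating model, parameter values, and function names are all hypothetical; this is not the authors' simulation code or the PowerSCED implementation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

def simulate_dataset(n_participants, n_obs=20, gamma_tx=2.0,
                     gamma_mod=1.0, tau=1.0, sigma=1.0):
    """One simulated data set: a level-1 intervention effect moderated by a
    standardized participant characteristic, plus random intercepts."""
    rows = []
    for p in range(n_participants):
        moderator = rng.normal(0, 1)      # e.g., standardized age
        u0 = rng.normal(0, tau)           # participant random intercept
        for t in range(n_obs):
            phase = int(t >= n_obs // 2)  # 0 = baseline, 1 = intervention
            y = u0 + (gamma_tx + gamma_mod * moderator) * phase + rng.normal(0, sigma)
            rows.append((p, phase, moderator, y))
    return pd.DataFrame(rows, columns=["pid", "phase", "mod", "y"])

def power(n_participants, n_reps=200, alpha=0.05):
    """Proportion of replications in which each fixed effect is detected."""
    hits_tx = hits_mod = 0
    for _ in range(n_reps):
        df = simulate_dataset(n_participants)
        fit = smf.mixedlm("y ~ phase + phase:mod", df, groups=df["pid"]).fit()
        hits_tx += fit.pvalues["phase"] < alpha
        hits_mod += fit.pvalues["phase:mod"] < alpha
    return hits_tx / n_reps, hits_mod / n_reps

print(power(7))   # few participants: moderator effect likely underpowered
print(power(20))  # more participants: power approaches the .80 benchmark
```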
Abstract: Functional analysis (FA) is an integral component of behavioral assessment and treatment, given that clinicians design behavioral treatments based on FA results. Unfortunately, the interrater reliability of FA data interpretation by visual analysis can be inconsistent, potentially leading to ineffective treatment implementation. Hall et al. (2020) recently developed automated nonparametric statistical analysis (ANSA) to facilitate the interpretation of FA data, and Kranak et al. (2021) subsequently extended and validated ANSA by applying it to unpublished clinical data. The results of both studies support ANSA as an emerging statistical supplement for interpreting FA data. In the present article, we show how ANSA can be applied to interpret FA data collected in clinical settings in multielement and pairwise designs. We provide a detailed overview of the calculations involved, how to use ANSA in practice, and recommendations for its implementation. A free web-based application is available at https://ansa.shinyapps.io/ansa/. PubDate: 2022-03-01
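ANSA's actual calculations are detailed in the article and implemented in the web app linked above. As a generic illustration of the underlying idea of a nonparametric comparison of FA conditions, the following sketch applies a Mann-Whitney U test to hypothetical multielement data; note this is a stand-in technique, not ANSA itself.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Responses per minute across repeated multielement FA sessions (hypothetical)
attention = np.array([4.2, 3.8, 5.1, 4.6, 3.9])  # test condition
control   = np.array([0.4, 0.0, 0.8, 0.2, 0.5])  # play/control condition

stat, p = mannwhitneyu(attention, control, alternative="greater")
print(f"U = {stat:.1f}, p = {p:.4f}")
if p < 0.05:
    print("Responding in the attention condition reliably exceeds control.")
```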
Abstract: Access to the raw data behind graphs presented in original articles is a challenge for researchers calculating effect sizes for single-case research, for example in meta-analyses. Researchers typically use data-extraction software to recover raw data from the graphs in articles. In this study, we analyzed the validity and reliability of the PlotDigitizer software program, which is widely used in the literature as an alternative to other data-extraction programs, on computers with different operating systems. We digitized 6,846 data points on three different computers, using 15 hypothetical graphs with 20 data series and 186 graphs with 242 data series drawn from 29 published articles. In addition, using the digitized values, we recalculated the 23 effect sizes presented in the original articles for the validity analysis. From our sample, we calculated intercoder and intracoder Pearson correlation coefficients. The results showed that PlotDigitizer can serve as an alternative to other programs: it is free, it runs on many current and outdated systems, and its validity and reliability were nearly perfect. Based on these results and our experience with the data-extraction process, we present various recommendations for researchers who will use PlotDigitizer for the quantitative analysis of single-case graphs. PubDate: 2022-03-01
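The reliability computation described above reduces to correlating two sets of digitized values. A minimal sketch with hypothetical numbers (not the study's data):

```python
import numpy as np
from scipy.stats import pearsonr

coder_a = np.array([2.1, 3.4, 5.0, 4.8, 6.2, 7.1])  # values digitized by coder A
coder_b = np.array([2.0, 3.5, 5.1, 4.7, 6.2, 7.0])  # same points, coder B

r, p = pearsonr(coder_a, coder_b)
print(f"Intercoder r = {r:.3f} (p = {p:.4f})")  # r near 1.0 = near-perfect agreement
```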
Abstract: Researchers report increasing use of psychotropic medication to treat problem behavior in individuals with intellectual and developmental disabilities, despite some controversy regarding its application and efficacy. A substantial evidence base supports the efficacy of behavioral intervention; however, research evaluating separate and combined intervention effects (i.e., concurrent application of behavioral and psychopharmacological interventions) remains scarce. This article demonstrates how a series of analyses of clinical data collected during treatment (i.e., four case studies) may be used to retrospectively explore separate and combined intervention effects on severe problem behavior. First, we calculated individual effect sizes and corresponding confidence intervals; the results indicated that larger decreases in problem behavior may have coincided more often with behavioral intervention adjustments than with medication adjustments. Second, a conditional rates analysis indicated that surges in problem behavior did not reliably coincide with medication reductions. Spearman correlation analyses indicated a negative relationship between behavioral intervention phase progress and weekly episodes of problem behavior, compared to a positive relationship between total medication dosage and weekly episodes of problem behavior. However, a nonparametric partial correlation analysis indicated that individualized, complex relationships may exist among total medication dosage, behavioral intervention, and weekly episodes of problem behavior. We discuss potential clinical implications and encourage behavioral researchers and practitioners to consider applying creative analytic strategies to evaluate separate and combined intervention effects on problem behavior, to further explore this extremely understudied topic. PubDate: 2022-03-01
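As a sketch of the rank-based correlational analyses described above, the following computes a Spearman correlation and a nonparametric partial correlation by residualizing ranks; all variable names and values are hypothetical, not the study's clinical data.

```python
import numpy as np
from scipy.stats import spearmanr, rankdata, pearsonr

weekly_episodes = np.array([30, 28, 25, 26, 20, 18, 15, 12, 10, 8])
med_dosage_mg   = np.array([100, 100, 150, 150, 150, 200, 200, 200, 250, 250])
bx_phase        = np.arange(1, 11)  # behavioral intervention phase progress

rho, p = spearmanr(med_dosage_mg, weekly_episodes)
print(f"Spearman rho (dose vs. episodes) = {rho:.2f}, p = {p:.3f}")

def partial_spearman(x, y, z):
    """Rank all variables, residualize the ranks of x and y on the ranks of z,
    then correlate the residuals. The p-value is approximate (df not adjusted
    for the partialled variable)."""
    rx, ry, rz = rankdata(x), rankdata(y), rankdata(z)
    Z = np.column_stack([np.ones_like(rz), rz])
    res_x = rx - Z @ np.linalg.lstsq(Z, rx, rcond=None)[0]
    res_y = ry - Z @ np.linalg.lstsq(Z, ry, rcond=None)[0]
    return pearsonr(res_x, res_y)

r_part, p_part = partial_spearman(med_dosage_mg, weekly_episodes, bx_phase)
print(f"Partial rho (dose vs. episodes | phase) = {r_part:.2f}, p = {p_part:.3f}")
```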
Abstract: Publication bias is an issue of great concern across a range of scientific fields. Although less documented in the behavior science fields, there is a need to explore viable methods for evaluating publication bias, in particular for studies based on single-case experimental design logic. Publication bias is often detected by examining differences between meta-analytic effect sizes for published and grey studies, but difficulties in identifying the extent of the grey literature within a particular research corpus present several challenges. We describe in this article several meta-analytic techniques for examining publication bias when both published and grey literature are available, as well as alternative techniques for when the grey literature is inaccessible. Although the majority of these methods have primarily been applied to meta-analyses of group design studies, our aim is to provide preliminary guidance for behavior scientists who might use or adapt these techniques for evaluating publication bias. We provide sample data sets and R scripts to follow along with the statistical analyses, in the hope that an increased understanding of publication bias and the respective techniques will help researchers understand the extent to which it is a problem in behavior science research. PubDate: 2022-03-01
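One widely used technique of the kind this article surveys is Egger's regression test for funnel-plot asymmetry. The authors provide their examples as R scripts; the minimal sketch below illustrates the same idea in Python on hypothetical effect sizes and standard errors.

```python
import numpy as np
import statsmodels.api as sm

effect = np.array([0.8, 0.6, 0.9, 0.4, 1.1, 0.7, 0.5, 1.0])   # study effect sizes
se     = np.array([0.30, 0.15, 0.35, 0.10, 0.40, 0.20, 0.12, 0.38])

z = effect / se          # standardized effects
precision = 1.0 / se
fit = sm.OLS(z, sm.add_constant(precision)).fit()

# An intercept reliably different from zero suggests funnel-plot asymmetry
# (small-study effects), one signature of possible publication bias.
print(f"Egger intercept = {fit.params[0]:.2f}, p = {fit.pvalues[0]:.3f}")
```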
Abstract: Multiple quantitative methods for single-case experimental design data have been applied to multiple-baseline, withdrawal, and reversal designs. The advanced data-analytic techniques historically applied to single-case design data are primarily applicable to designs with clear sequential phases, such as repeated measurement during baseline and treatment phases, but these techniques may not be valid for alternating treatments design (ATD) data, in which two or more treatments are rapidly alternated. Some recently proposed data-analytic techniques applicable to ATDs are reviewed. For ATDs with random assignment of condition ordering, Edgington's randomization test is one inferential statistical technique that can complement descriptive techniques for comparing data paths and for assessing the consistency of effects across blocks in which different conditions are compared. In addition, several recently developed graphical representations are presented alongside the commonly used time-series line graph. The quantitative and graphical data-analytic techniques are illustrated with two previously published data sets. Apart from discussing the potential advantages of each of these techniques, barriers to applying them are reduced by disseminating open-access software to quantify or graph data from ATDs. PubDate: 2022-03-01
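A minimal sketch of a randomization test in the spirit of Edgington's approach follows; the session values are hypothetical, and for simplicity the sketch enumerates all possible condition assignments, whereas a test faithful to a given ATD would enumerate only the orderings permitted by that design's randomization scheme.

```python
import numpy as np
from itertools import combinations

scores = np.array([3, 7, 4, 8, 2, 9, 3, 8, 4, 9])  # outcome per session
labels = np.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1])  # 0 = condition A, 1 = condition B

def mean_diff(vals, lab):
    return vals[lab == 1].mean() - vals[lab == 0].mean()

observed = mean_diff(scores, labels)

# Reference distribution: every assignment of 5 of the 10 sessions to condition B.
extreme = total = 0
for b_idx in combinations(range(len(scores)), int(labels.sum())):
    perm = np.zeros(len(scores), dtype=int)
    perm[list(b_idx)] = 1
    total += 1
    extreme += abs(mean_diff(scores, perm)) >= abs(observed)

print(f"observed diff = {observed:.2f}, exact p = {extreme / total:.4f}")
```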
Abstract: Stimulus overselectivity remains an ill-defined concept within behavior analysis because it can be difficult to distinguish truly restricted stimulus control from random variation. Quantitative models of bias are useful, though perhaps limited in application. Over the last 50 years, research on stimulus overselectivity has developed a pattern of assessment and intervention repeatedly marred by methodological flaws. Here we argue that the molecular view of overselectivity, under which restricted stimulus control has heretofore been examined, is fundamentally insufficient for analyzing the phenomenon. Instead, we propose using the term "overselectivity" to denote temporally extended patterns of restricted stimulus control that produce disproportionate distributions of responding that cannot be attributed to chance alone, and we highlight examples of overselectivity in the verbal behavior of children with autism spectrum disorder. Viewed this way, stimulus overselectivity lends itself to direct observation and measurement through statistical analysis of single-subject data. In particular, we demonstrate the use of Cochran's Q test as a means of precisely quantifying stimulus overselectivity. We provide a tutorial on its calculation, a model for interpretation, and a discussion of the implications of Cochran's Q for clinicians and researchers. PubDate: 2022-03-01
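Cochran's Q has a closed-form computation that is easy to hand-roll. The sketch below uses a hypothetical trials-by-components matrix of binary scores (which stimulus components controlled responding on each trial); the data arrangement is assumed here for illustration and is not drawn from the article.

```python
import numpy as np
from scipy.stats import chi2

# 8 trials x 3 stimulus components; 1 = the component controlled responding
X = np.array([[1, 0, 0],
              [1, 0, 0],
              [1, 1, 0],
              [1, 0, 0],
              [1, 0, 1],
              [1, 0, 0],
              [1, 1, 0],
              [1, 0, 0]])

k = X.shape[1]                 # number of components
col_totals = X.sum(axis=0)     # how often each component controlled responding
row_totals = X.sum(axis=1)
N = X.sum()

# Q = k(k-1) * sum_j (C_j - N/k)^2 / (k*N - sum_i R_i^2), df = k - 1
Q = k * (k - 1) * ((col_totals - N / k) ** 2).sum() \
    / (k * N - (row_totals ** 2).sum())
p = chi2.sf(Q, df=k - 1)
print(f"Q = {Q:.2f}, p = {p:.4f}")  # small p: control is disproportionate across components
```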
Abstract: Due to the complex nature of single-case experimental design data, numerous effect measures are available to quantify and evaluate the effectiveness of an intervention. An inappropriate choice of effect measure can misrepresent intervention effectiveness, with far-reaching implications for theory, practice, and policymaking. Because guidelines for justifying the selection of an effect measure are lacking, the first aim is to identify the relevant dimensions for effect-measure selection and justification prior to data gathering. The second aim is to use these dimensions to construct a user-friendly flowchart, or decision tree, to guide applied researchers through this process. The use of the flowchart is illustrated in the context of a preregistered protocol. This is the first study to propose reporting guidelines for justifying the choice of effect measure before collecting the data, so as to avoid selective reporting of the largest quantifications of an effect. A proper justification, less prone to confirmation bias, together with transparent and explicit reporting, can enhance the credibility of single-case design findings. PubDate: 2022-03-01
Abstract: The articles in this special section offer strategies to single-case experimental design (SCED) researchers for interpreting their outcomes, communicating their results, and comparing results using common quantitative metrics. Advancing quantitative methods applied to SCED data will facilitate communication with scientists and other professionals who do not typically interpret graphed data of the dependent variable. Horner and Ferron aptly note that innovative statistical procedures are improving the precision and credibility of SCED research as we disseminate our findings to an increasingly diverse audience. This special section promotes the translation of these quantitative methods to encourage their adoption in research using single-case experimental designs. PubDate: 2022-02-08 DOI: 10.1007/s40614-022-00327-0
Abstract: In this article, we outline an emerging role for applied behavior analysis in juvenile justice by summarizing recent publications from our lab and discussing our procedures through the lens of coercion proposed by Goltz (2020). In particular, we focus on individual and group interventions that target a range of behaviors emitted by adolescents in a residential treatment facility. In general, individual interventions involve teaching adolescents to (1) respond appropriately to staff, (2) tolerate nonpreferred environmental conditions, and (3) control problematic sexual arousal. Likewise, group interventions involve low-effort manipulations that decrease disruptive behavior and increase appropriate behavior in settings with numerous adolescents. Thereafter, we describe behavioral interventions for staff working in juvenile justice; these staff-focused interventions aim to increase staff-initiated positive interactions with students in order to change subsequent student behavior. In addition, we review our recent endeavors to assess and conceptualize other service providers' behavioral products (i.e., prescription practices) in a juvenile facility. Lastly, we discuss future directions for behavior-analytic intervention with juvenile justice-involved adolescents. PubDate: 2022-01-26 DOI: 10.1007/s40614-022-00325-2
Abstract: This special issue of Perspectives on Behavior Science is a productive contribution to current advances in the use and documentation of single-case research designs. In this article we focus on major themes emphasized by the articles in the issue and suggest directions for improving professional standards for the design, analysis, and dissemination of single-case research. PubDate: 2021-11-26 DOI: 10.1007/s40614-021-00322-x
Abstract: Selecting a quantitative measure to guide decision making in single-case experimental designs (SCEDs) is complicated: many measures exist, and all have been rightly criticized. The two general classes of measure are overlap-based (e.g., percentage of nonoverlapping data) and distance-based (e.g., Cohen's d). We compared several measures from each category on Type I error rate and power across a range of designs using equal numbers of observations (i.e., 3–10) in each phase. Results showed that Tau and the distance-based measures (i.e., RD and g) provided the highest decision accuracies, whereas other overlap-based measures (e.g., PND, the dual-criterion method) did not perform as well. We recommend that Tau be used to guide decisions about the presence or absence of a treatment effect, and that RD or g be used to quantify the magnitude of the treatment effect. PubDate: 2021-11-22 DOI: 10.1007/s40614-021-00317-8
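For concreteness, the sketch below computes one overlap-based measure (PND) and a simple all-pairs Tau (without trend correction) on hypothetical phase data; the article's simulations compare these and other measures across many such data sets.

```python
import numpy as np

baseline  = np.array([5, 6, 4, 5, 7])    # phase A observations
treatment = np.array([8, 9, 7, 10, 9])   # phase B observations

# PND: percentage of treatment points exceeding the best baseline point
pnd = 100 * np.mean(treatment > baseline.max())

# Tau: (improving pairs - deteriorating pairs) / all A-B pairwise comparisons
diffs = treatment[:, None] - baseline[None, :]
tau = (np.sum(diffs > 0) - np.sum(diffs < 0)) / diffs.size

print(f"PND = {pnd:.0f}%, Tau = {tau:.2f}")
```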