Authors:Andrew Mott, Catriona McDaid, Jamie J Kirkham, Catherine Hewitt, Luke Strachan, Helen Fulbright Abstract: Research Methods in Medicine & Health Sciences, Ahead of Print. Background: Research waste is a costly problem for scientific research, with poor design and conduct of research being key elements contributing to wastage. Interventions to address poor design and conduct may save time and money. The objective of the study was to map the interventions that have been evaluated for improving the design or conduct of scientific research, in order to identify any gaps in the evidence. Methods: We undertook a systematic scoping review. We searched MEDLINE, EMBASE, EconLit, ERIC, Social Policy and Practice, HMIC, ProQuest Dissertations and Theses Global and MetaArXiv from 1st January 2012 to 13th June 2022. Evaluated interventions that aimed to improve the design or conduct of scientific research by targeting researchers or research teams were included. Screening was completed by two reviewers and data charting by a single reviewer with another reviewer checking. Results: A total of 81 evaluated interventions were included. Most of the interventions targeted research conduct, primarily focussed on registration, publishing, and reporting. Most included studies used observational evaluation methods. Categorising the interventions by the behaviour change wheel framework, we found that most studies utilised restriction, coercion, and persuasion, and fewer used enablement, training, or incentivisation to achieve their aims. Conclusions: More evaluations of interventions aimed at how researchers design their research are needed; these should be developed appropriately and evaluated for effectiveness using experimental methods. Citation: Research Methods in Medicine & Health Sciences PubDate: 2024-07-31T06:33:45Z DOI: 10.1177/26320843241270517
Authors:Ella Tuohy, Alana Murphy-Dooley, Sarah Jane Flaherty, Catherine Duggan, Barbara Foley, Rachel Flynn Abstract: Research Methods in Medicine & Health Sciences, Ahead of Print. Background: Members of the public differ regarding their views on the use and sharing of personal health information. Objective: This paper describes the methodology employed to develop a telephone-based survey tool to capture views of the public on the collection, use and sharing of personal health information. Method: A rigorous methodology comprising multiple stages was undertaken to develop a vignette/scenario-based survey instrument. These steps included a review of instruments used in other jurisdictions, focus groups, engagement meetings with healthcare professionals, cognitive testing and piloting the final instrument. Informed by the findings of each survey development phase, draft scenarios and accompanying questions were developed. Results: The following scenarios were developed: ‘Circle of care,’ ‘Use of information beyond your direct care’ and ‘Digital records.’ Conclusion: The findings from this survey will inform national policy in relation to health information and will inform the development and implementation of eHealth initiatives. In turn, this should support the delivery of high-quality, effective health and social care. The learnings from the development of this survey will contribute to future health information policy and governance in countries or jurisdictions considering the development of a national electronic health record system. Moreover, this research will support public and population health management by encouraging public engagement to support successful implementation of new health information systems. Citation: Research Methods in Medicine & Health Sciences PubDate: 2024-07-22T11:33:31Z DOI: 10.1177/26320843241265957
Authors:Rosalind Way, Adwoa Parker, David J Torgerson Abstract: Research Methods in Medicine & Health Sciences, Ahead of Print. Background and Aims: Poor retention of trial participants is common and can result in significant methodological, statistical, ethical, and financial challenges. To improve trial efficiency, we aimed to assess the extent to which commonly used strategies to retain participants within trials are supported by evidence for their effectiveness. Method: A systematic methodological review was carried out to identify commonly used retention strategies in National Institute for Health and Care Research (NIHR) Health Technology Assessment (HTA) trials (January 2020–June 2022). Strategies were then mapped to evidence for their effectiveness from the most recent Cochrane retention review (published 2021), and a future Study Within A Trial (SWAT) priority list was created. Results: Amongst 80 trials, the most frequently reported retention strategies were: flexibility with data collection method/location (53%); participant diaries (38%); use of routine data (29%); PPI input (26%); telephone reminders for participants (26%); postal reminders for participants (25%); monitoring approaches (21%); offering flexibility with timing of data collection (20%); pre-paid return postage (18%); prioritising collection of key outcomes (15%); and participant newsletters (15%). Out of the 56 identified strategies, most had no, very low, or low evidence of effectiveness (64%, 14%, and 13%, respectively). Discussion and Conclusions: Commonly used retention strategies lack good-quality evidence of effectiveness. The findings support the need for more SWATs and help identify priority areas for future SWAT research. These priorities could be used with other priority lists to inform future SWAT conduct. Citation: Research Methods in Medicine & Health Sciences PubDate: 2024-03-21T11:45:06Z DOI: 10.1177/26320843241235580
Authors:Yitagesu Habtu, Wakgari Deressa Abstract: Research Methods in Medicine & Health Sciences, Ahead of Print. Introduction: The “Results section” is a vital part of a scientific paper and the main reason readers come to a paper to find new information. However, writing the “Results section” demands a rigorous process that discourages many researchers, leaving their work unpublished and uncommunicated in reputable journals. Therefore, this review aims to describe the content, structure, and key standards of writing the “Results section” and to suggest practical recommendations to reduce common errors in writing the “Results section” of manuscripts. Methods: We searched the literature using search terms in the PubMed database. We also traditionally searched for literature via Google Scholar, Google, and relevant websites. We narrated and summarized findings on the content, structure, writing and presentation of data, and logical flow, and advised on the writing of the “Results section” of a scientific paper. Results: The review suggests guidelines for writing the content and organization, and techniques for tabular and graphical presentation of data, in the “Results section”. The review also shares experience on how to effectively present data in numbers such as percentages, statistical measures such as p-values, and other advanced forms of statistics. Finally, the review recommends relevant points for keeping language brevity and logical flow in writing the “Results section” of a scientific paper. Conclusions: Writing the results section of a scientific paper requires practice; it must be concisely written, logically structured, and supported by a good journal-specific standard to be published. Citation: Research Methods in Medicine & Health Sciences PubDate: 2024-03-02T06:51:53Z DOI: 10.1177/26320843241237444
Authors:Trevor Lopatin, Michael Ko, Elise Brown, Daniel Goble, Joshua Haworth Abstract: Research Methods in Medicine & Health Sciences, Ahead of Print. Background: Type 2 Diabetes (T2D) is associated with a higher magnitude of static postural sway. This investigation compared three statistical methods to explore the contributions of sensory modalities to posture in T2D. Research design and methods: Two groups were evaluated in this study (n = 20): a T2D group of 10 participants with T2D (age 54.6 ± 11.09 years) and a comparison group of 10 age/sex-matched healthy participants (age 53.18 ± 9.89 years). Postural sway data were collected using the modified Clinical Test of Sensory Integration in Balance (mCTSIB), consisting of four 20-s trials on a balance plate with manipulations of vision and support surface to target the contributions of proprioceptive, visual, and vestibular senses. Scores were assessed by group-wise analysis of path length, group-wise analysis of percentile rank, and distribution of percentile rank. Results: The two-way ANOVA used for the group-wise analyses of path length and percentile rank showed significant differences between group scores (p < .05), but no significant interactions between group and condition. The frequency distribution of percentile rank in the T2D group revealed unimodal distributions for all conditions except vestibular, which was found to have the highest and lowest percentile ranks of any condition. Conclusion: The results show that the individualized normative analysis revealed aspects of individual impairments that would otherwise have been missed using a group-wise method. Though limited, our findings also suggest that impairments to the vestibular system may be more pronounced but less frequent compared to proprioceptive and visual impairments. Citation: Research Methods in Medicine & Health Sciences PubDate: 2024-02-17T02:30:33Z DOI: 10.1177/26320843241235582
Authors:Jian Gao Abstract: Research Methods in Medicine & Health Sciences, Ahead of Print. Background and aims: The findings on the relationship between sodium intake and health outcomes such as cardiovascular disease and all-cause mortality have been controversial. Some studies found the relationship between sodium intake and all-cause mortality was linear while others found a U-shaped or J-shaped relationship. This study aimed to identify the methodological issues contributing to the conflicting findings. Methods and results: The present study investigated methodological gaps in assessing the relationship between sodium intake and health outcomes (hypertension, cardiovascular disease, and all-cause mortality). The contradictory findings appear to stem from flawed methods used in the published studies: (1) Both spot and 24-h urinary sodium collection methods underestimate the adverse effects of low sodium intake and overestimate the harmful effects of high sodium intake, (2) the linear relationship between sodium intake and all-cause mortality appears to be a result of random chance due to small sample sizes, and (3) the divergent temporal trends of sodium consumption and hypertension prevalence indicate sodium intake was not the primary cause of the worldwide hypertension epidemic. Conclusion: Considering that (1) sodium is an essential nutrient, (2) the adverse effects of low and high sodium intake appear to be under- and over-estimated, respectively, (3) large studies have found a U-shaped or J-shaped relationship between sodium intake and all-cause mortality, and (4) sodium consumption is unlikely to be the major driver behind the worldwide hypertension epidemic and has little effect on the blood pressure of most normotensive individuals, the recommendation for population-wide low sodium intake merits further evaluation. Citation: Research Methods in Medicine & Health Sciences PubDate: 2024-02-17T02:25:04Z DOI: 10.1177/26320843241235586
Authors:Matthew J Smith, Matteo Quartagno, Aurelien Belot, Bernard Rachet, Edmund Njeru Njagi Abstract: Research Methods in Medicine & Health Sciences, Ahead of Print. Background: Multiple imputation is often used to reduce bias and gain efficiency when there is missing data. The most appropriate imputation method depends on the model the analyst is interested in fitting. We consolidate and compare the performance and ease of use for several commonly implemented imputation approaches. Methods: Using 1000 simulations, each with 10,000 observations, under six data-generating mechanisms (DGMs), we investigate the performance of four methods: (i) ‘passive imputation’, (ii) ‘just another variable’ (JAV), (iii) ‘stratify-impute-append’ (SIA), and (iv) ‘substantive model compatible fully conditional specification’ (SMCFCS). The application of each method is shown in an empirical example using England-based cancer registry data. Results: SMCFCS and SIA showed the least biased estimates of the coefficients for the fully and partially observed variables and the interaction term. SMCFCS and SIA showed good coverage and low relative error for all DGMs. SMCFCS had a large bias when there was a low prevalence of the fully observed variable in the interaction. SIA performed poorly when the fully observed variable in the interaction had a continuous underlying form. Conclusion: SMCFCS and SIA give consistent estimation and either can be used in most analyses. SMCFCS performed better than SIA when the fully observed variable in the interaction had an underlying continuous form. Researchers should be cautious when using SMCFCS when there is a low prevalence of the fully observed variable in the interaction. Citation: Research Methods in Medicine & Health Sciences PubDate: 2024-02-16T02:45:31Z DOI: 10.1177/26320843231224809
Authors:Ellen Kingsley, Katie Biggs, Kiera Solaiman, Anna Packham, Roshanak Nekooi, Matthew Bursnall, Kirsty McKendrick, Cindy L Cooper, Barry Wright Abstract: Research Methods in Medicine & Health Sciences, Ahead of Print. Background: The UK has seen a recent shift towards children’s mental health being supported and treated in school settings. Several current school-based interventions focus on autism and social skills, with education professional involvement in their delivery increasing. The study of these interventions poses specific implementation challenges. This paper discusses implementation successes and learnings from the I-SOCIALISE research study, which delivered and evaluated the efficacy of LEGO® based therapy (now Play Brick Therapy) for autistic children and young people in schools. Detailed methods and results of the trial are reported elsewhere. Methods: The I-SOCIALISE study was a pragmatic large-scale NIHR-funded cluster randomised controlled trial. Children and young people, their parents/guardians, and schoolteachers or teaching assistants were recruited from mainstream schools in the UK. They completed outcome measures and were randomised to receive either 12 weeks of LEGO® based therapy and usual support or usual support only. Various methods to achieve successful recruitment and retention were used, and learnings were documented. Results: The study recruited to time and target with successful delivery of this complex intervention in schools. Several lessons were learnt about recruitment methods, data collection, participant burden and retention, blinding, and the importance of relationships with key school contacts. Main recommendations based on these learnings are provided. Conclusions: This study demonstrated that it is possible to undertake large-scale, robust evaluation of pragmatically delivered complex school-based interventions.
Recommendations are made to address the logistical challenges of undertaking research in this setting, with the aim of facilitating future research. Citation: Research Methods in Medicine & Health Sciences PubDate: 2024-02-08T07:39:21Z DOI: 10.1177/26320843231224804
Authors:Louisa Anne Peters Abstract: Research Methods in Medicine & Health Sciences, Ahead of Print. Introduction: A realist literature review involves iterative processes, with searches, appraisal and synthesis occurring simultaneously. Whilst this lends itself to theory development, the synthesis process often lacks transparency. This has led to unanticipated challenges for novice realist researchers, particularly PhD candidates. Methods: The aim of this paper is to contribute to the realist methodological knowledge base by outlining the analytical tools of coding, consolidating and conceptual mapping used within a realist review, and specifically how these techniques aid the synthesis process and demonstrate the development of valid and evidence-informed programme theories. Results: A worked example is provided to illustrate how: (i) coding techniques using realist logic can evidence a rigorous synthesis process; (ii) the use of consolidating techniques facilitates data management and aids theory development; and (iii) conceptual mapping demonstrates programme theory development. Conclusions: Recommendations for novice realist researchers include defining and documenting the analytical tools for conducting a rigorous realist synthesis to provide transparency about how valid programme theories were developed. In addition, theory development checks can be built into the structure of the review process. Citation: Research Methods in Medicine & Health Sciences PubDate: 2024-02-07T11:36:46Z DOI: 10.1177/26320843231224807
Authors:Catherine Arundel, Charlie Welch, Puvanendran Tharmanathan, Joseph Dias Abstract: Research Methods in Medicine & Health Sciences, Ahead of Print. Background: With attrition common in randomised trials, strategies are needed to minimise it. Many retention strategies include ‘thanks’ elements; however, there is currently no evidence of the effectiveness of a ‘thank you’ intervention separate from other trial activity or information. This Study Within A Trial (SWAT) sought to assess whether a thank you card increases completion of the host trial primary outcome. Methods: A two-arm SWAT, using a 1:1 (intervention:control) allocation ratio, embedded within the DISC trial. The primary outcome was the difference in retention rate at 1 year post-treatment. Secondary outcomes were outcome data completeness, cost, and retention at 2 years post-treatment. Analyses were conducted using logistic regression adjusting for SWAT and host trial allocation. Results: A total of 358 participants were randomised and included in the SWAT analyses. Completion of the 1-year outcome visit was 89.7% (n = 157) in the intervention group and 90.2% (n = 165) in the control group (adjusted odds ratio (OR) 0.95, 95% CI 0.48 to 1.90, p = .89). There was no evidence of a difference in completeness of key outcome data (adjusted OR 1.84, 95% CI 0.71 to 4.73, p = .20) or retention at 2 years post-treatment (adjusted OR 1.13, 95% CI 0.59 to 2.17, p = .72). Conclusion: It remains unclear whether thank you cards increased the rate of primary outcome follow-up completion within the DISC trial. However, as the first evaluation of a distinct ‘thank you’ intervention for improving retention rates, further replications are required to determine effectiveness, ideally in populations other than older, male Caucasians. Citation: Research Methods in Medicine & Health Sciences PubDate: 2024-02-01T10:49:32Z DOI: 10.1177/26320843241229934
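The headline estimate above can be roughly reconstructed from the figures quoted in the abstract. A minimal Python sketch, assuming group sizes of 175 and 183 (back-calculated from the reported percentages; these sizes are an assumption, and the published analysis was a logistic regression adjusted for SWAT and host trial allocation, so this unadjusted calculation is only an approximation):

```python
import math

# Back-calculated counts (assumption): 89.7% of 175 intervention and
# 90.2% of 183 control participants completed the 1-year visit
# (175 + 183 = 358 randomised in total, matching the abstract).
a, b = 157, 175 - 157  # intervention: completed, not completed
c, d = 165, 183 - 165  # control: completed, not completed

# Unadjusted odds ratio with a 95% Wald confidence interval
or_unadj = (a * d) / (b * c)
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lower = math.exp(math.log(or_unadj) - 1.96 * se)
upper = math.exp(math.log(or_unadj) + 1.96 * se)

print(round(or_unadj, 2), round(lower, 2), round(upper, 2))
```

This reproduces the reported OR of 0.95 and closely matches the reported adjusted 95% CI of 0.48 to 1.90, suggesting the covariate adjustment moved the estimate very little.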
Authors:Chrysostomos Kalyvas, Katerina Papadimitropoulou, William Malbecq, Loukia M. Spineli Pages: 64 - 75 Abstract: Research Methods in Medicine & Health Sciences, Volume 5, Issue 3, Page 64-75, July 2024. Background: Health Technology Assessment agencies typically require an economic evaluation considering a lifetime horizon for interventions affecting survival. However, survival data are often censored and are typically analyzed assuming the censoring mechanism is independent of the event process. This assumption may lead to biased results when the censoring mechanism is informative. Methods: We propose a flexible approach to jointly model the participants experiencing an event and censored participants by incorporating the pattern-mixture (PM) model in the fractional polynomial (FP) model within the network meta-analysis (NMA) framework. We introduce the informative censoring hazard ratio parameter, which quantifies the departure from the censored-at-random assumption. The FP-PM model is exemplified in an NMA of overall survival from non-small cell lung carcinoma studies using Bayesian methods. Results: The results on hazard ratio and survival from the FP-PM model are similar to those from the FP model. However, the posterior standard deviation of the hazard ratio is slightly greater when censored data are modeled, because the uncertainty induced by censoring is naturally accounted for in the FP-PM model. The between-study standard deviation is almost identical in both models due to the low censoring rate across the studies. At the end of the corresponding studies, the informative censoring hazard ratio demonstrated a possible departure from the censored-at-random assumption for gefitinib and best supportive care. Conclusions: The proposed method offers a comprehensive sensitivity analysis framework to examine the robustness of the NMA results to clinically plausible censoring scenarios.
Citation: Research Methods in Medicine & Health Sciences PubDate: 2023-07-14T10:15:36Z DOI: 10.1177/26320843231190026 Issue No:Vol. 5, No. 3 (2023)
Authors:Tad T Brunyé, Catherine E Konold, Jason Wang, Kathleen F Kerr, Trafton Drew, Hannah Shucard, Kim Soroka, Donald L Weaver, Joann G Elmore Pages: 76 - 82 Abstract: Research Methods in Medicine & Health Sciences, Volume 5, Issue 3, Page 76-82, July 2024. Background: In pathology and other specialties of diagnostic medicine, longitudinal studies and competency assessments often involve physicians interpreting the same images multiple times. In these designs, a washout period is used to reduce the chances that later interpretations are influenced by prior exposure. Objectives: The present study examines whether a washout period between 9 and 39 months is sufficient to prevent three effects of prior exposure when pathologists review digital breast tissue biopsies and render diagnostic decisions: faster case review durations, higher confidence, and lower perceived difficulty. Methods: In a longitudinal breast pathology study, 48 resident pathologists reviewed a mix of five novel and five repeated digital whole slide images during Phase 2, occurring 9–39 months after an initial Phase 1 review. Importantly, cases that were repeated for some participants in Phase 2 were novel for other participants in Phase 2. We statistically tested for differences in participants’ case review duration, self-reported confidence, and self-reported difficulty in Phase 2 based on whether the case was novel or repeated. Results: There was no statistically significant difference in review time, confidence, or difficulty as a function of whether the case was repeated or novel in a Phase 2 review occurring 9–39 months after initial viewing; the same result was found in a subset of participants with a shorter (9–14 month) washout. Conclusion: These results provide evidence to support the efficacy of at least a 9-month washout period in the design of longitudinal medical imaging and informatics studies to ensure no detectable effect of initial exposure on participants’ subsequent case reviews.
Citation: Research Methods in Medicine & Health Sciences PubDate: 2023-08-29T06:41:47Z DOI: 10.1177/26320843231199453 Issue No:Vol. 5, No. 3 (2023)
Authors:Diana González-Bermejo, Belén Castillo-Cano, Alfonso Rodríguez-Pascual, Pilar Rayón-Iglesias, Dolores Montero-Corominas, Consuelo Huerta-Álvarez Pages: 83 - 92 Abstract: Research Methods in Medicine & Health Sciences, Volume 5, Issue 3, Page 83-92, July 2024. Background: A substantial increase in the incidence of immediate release fentanyl (IRF) use was reported in Spain from 2012 to 2017. Purpose: This study aimed to investigate the relationship with cancer incidence dynamically, in order to provide empirical evidence of inappropriate use of IRF with respect to the pathology. Research design: A vector autoregressive (VAR) model was constructed using data from a nationwide electronic healthcare record database in primary care in Spain (BIFAP), following a stepwise procedure: (1) splitting data into training data for modelling and test data for validation; (2) assessing time series stationarity; (3) selecting the lag length; (4) building the VAR model; (5) assessing residual autocorrelation; (6) checking stability of the VAR system; (7) evaluating Granger causality; (8) impulse response analysis and forecast error variance decomposition; and (9) assessing prediction performance with validation data. Results: The analysis showed a strong, linear correlation between IRF use and cancer incidence (Pearson correlation coefficient: 0.594 (95% CI: 0.420–0.726)). Two VAR models, VAR(2) and VAR(11), were selected and compared. All tests performed for both models satisfied assumptions of stability, predictability and accuracy. Granger causality revealed that cancer incidence is a good predictor of IRF use. VAR(2) seemed slightly more accurate, according to the RMSE on the test data. Conclusions: This study demonstrates that a robust and structured VAR modelling approach can estimate dynamic associations between IRF use and cancer incidence. Citation: Research Methods in Medicine & Health Sciences PubDate: 2023-10-18T12:17:55Z DOI: 10.1177/26320843231206357 Issue No:Vol. 5, No. 3 (2023)
Authors:Lianne K Siegel, Milena Silva, Lifeng Lin, Yong Chen, Yu-Lun Liu, Haitao Chu Abstract: Research Methods in Medicine & Health Sciences, Ahead of Print. Two-step approaches for synthesizing proportions in a meta-analysis require first transforming the proportions to a scale where their distribution across studies can be approximated by a normal distribution. Commonly used transformations include the log, logit, arcsine, and Freeman-Tukey double-arcsine transformations. Alternatively, a generalized linear mixed model (GLMM) can be fit directly on the data using the exact binomial likelihood. Unlike popular two-step methods, this accounts for uncertainty in the within-study variances without a normal approximation and does not require an ad hoc correction for zero counts. However, GLMMs require choosing a link function; we illustrate how the AIC can be used to choose the best fitting link when different link functions give different results. We also highlight how misspecification of the link function can introduce bias; using an empirical sandwich estimator for the standard error may not sufficiently avoid undercoverage due to link function misspecification. We demonstrate the application of GLMMs and choice of link function using data from a systematic review on the prevalence of fever in children with COVID-19. Citation: Research Methods in Medicine & Health Sciences PubDate: 2023-12-27T09:02:34Z DOI: 10.1177/26320843231224808
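As a concrete illustration of the two-step transformations named in the abstract above, here is a minimal Python sketch for an observed count x of n trials. The Freeman-Tukey form shown is the usual double-arcsine definition; treat the exact constants as an assumption rather than the paper's notation:

```python
import math

def log_t(x, n):
    # log of the proportion (undefined when x = 0)
    return math.log(x / n)

def logit(x, n):
    # log-odds of the proportion (undefined when x = 0 or x = n)
    p = x / n
    return math.log(p / (1 - p))

def arcsine(x, n):
    # variance-stabilising arcsine square-root transform
    return math.asin(math.sqrt(x / n))

def freeman_tukey(x, n):
    # double-arcsine transform; finite even when x = 0 or x = n,
    # which is why it needs no ad hoc zero-count correction
    return math.asin(math.sqrt(x / (n + 1))) + math.asin(math.sqrt((x + 1) / (n + 1)))

# A zero count breaks the log and logit transforms but not Freeman-Tukey:
print(freeman_tukey(0, 20))
```

This makes the abstract's point tangible: the log and logit scales require an ad hoc correction for zero counts, whereas a GLMM on the exact binomial likelihood (or the double-arcsine scale) does not.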
Authors:Svetlana Cherlin, Theophile Bigirumurame, Michael J Grayling, Jérémie Nsengimana, Luke Ouma, Aida Santaolalla, Fang Wan, S Faye Williamson, James MS Wason Abstract: Research Methods in Medicine & Health Sciences, Ahead of Print. Introduction: Even in effectively conducted randomised trials, the probability of a successful study remains relatively low. With recent advances in next-generation sequencing technologies, rapidly growing volumes of high-dimensional data, including genetic, molecular and phenotypic information, have improved our understanding of driver genes, drug targets, and drug mechanisms of action. Leveraging high-dimensional data holds promise for increased success of clinical trials. Methods: We provide an overview of methods for utilising high-dimensional data in clinical trials. We also investigate the use of these methods in practice through a review of recently published randomised clinical trials that utilise high-dimensional genetic data. The review includes articles that were published between 2019 and 2021, identified through the PubMed database. Results: Out of 174 screened articles, 100 (57.5%) were randomised clinical trials that collected high-dimensional data. The most common clinical area was oncology (30%), followed by chronic diseases (28%), nutrition and ageing (18%) and cardiovascular diseases (7%). The most common types of data analysed were gene expression data (70%), followed by DNA data (21%). The most common method of analysis (36.3%) was univariable analysis. Articles that described multivariable analyses used standard statistical methods. Most of the clinical trials had two arms. Discussion: New methodological approaches are required for more efficient analysis of the increasing amount of high-dimensional data collected in randomised clinical trials.
We highlight the limitations and barriers to the current use of high-dimensional data in trials, and suggest potential avenues for improvement and future work. Citation: Research Methods in Medicine & Health Sciences PubDate: 2023-12-26T05:45:55Z DOI: 10.1177/26320843231186399
Authors:Chang Xu, Lifeng Lin Abstract: Research Methods in Medicine & Health Sciences, Ahead of Print. Objective: The common approach to meta-analysis with double-zero studies is to remove such studies. Our previous work has confirmed that exclusion of these studies may impact the results. In this study, we undertook extensive simulations to investigate how the results of meta-analyses would be impacted in relation to the proportion of such studies. Methods: Two standard generalized linear mixed models (GLMMs) were employed for the meta-analysis. The statistical properties of the two GLMMs were first examined in terms of percentage bias, mean squared error, and coverage. We then repeated all the meta-analyses after excluding double-zero studies. The direction of estimated effects and p-values for including versus excluding double-zero studies were compared across nine ascending groups classified by the proportion of double-zero studies within a meta-analysis. Results: Based on 50,000 simulated meta-analyses, the two GLMMs achieved almost unbiased estimation and reasonable coverage in most situations. When double-zero studies were excluded, 0.00%–4.47% of the meta-analyses changed the direction of the effect size, and 0.61%–8.78% changed the significance of the p-value. As the proportion of double-zero studies in a meta-analysis increased, the probability that the effect size changed direction increased; when the proportion was about 40%–60%, exclusion had the largest impact on changes in p-values. Conclusion: Double-zero studies can impact the results of meta-analysis and excluding them may be problematic. The impact of such studies on meta-analysis varies by the proportion of such studies within a meta-analysis. Citation: Research Methods in Medicine & Health Sciences PubDate: 2023-12-20T11:09:47Z DOI: 10.1177/26320843231176661
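The mechanics behind the conventional exclusion can be seen with the Mantel-Haenszel pooled odds ratio, a common two-step alternative to the GLMMs studied above: a double-zero study contributes zero to both sums, so conventional pooling silently ignores it. A minimal sketch with made-up counts (not data from the paper):

```python
# Each study is (events_trt, no_events_trt, events_ctl, no_events_ctl)
studies = [
    (5, 95, 10, 90),
    (2, 48, 4, 46),
    (0, 30, 0, 30),  # double-zero study: no events in either arm
]

def mh_pooled_or(tables):
    # Mantel-Haenszel pooled odds ratio
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# The double-zero study adds 0 to both numerator and denominator,
# so the pooled OR is identical with or without it:
print(mh_pooled_or(studies) == mh_pooled_or(studies[:2]))  # prints True
```

A GLMM fitted on the exact binomial likelihood does use the information in such studies (their sample sizes still constrain the baseline risk), which is why including versus excluding them can change results in the way the simulations describe.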
Authors:Jian Gao Abstract: Research Methods in Medicine & Health Sciences, Ahead of Print. Linear regression is a simple yet powerful tool that has been extensively used in all fields where the relationships among variables are of interest. When linear regression is applied, the coefficient of determination or R-squared (R2) is commonly reported as a metric gauging the model’s goodness of fit. Despite its wide usage, however, R2 has been commonly misinterpreted as the proportion or percent of variation in the dependent variable that is explained by the independent variables (PVE -- percent of variation explained). This study demonstrates that R2 substantially overstates the true PVE. When the assumptions of linear regression are met, R2 overstates PVE by up to 100%. For instance, when R2 is 0.99, 0.80, 0.50, or 0.10, the true PVE is 0.9, 0.55, 0.29, or 0.05, respectively. The misinterpretation of R2, which greatly exaggerates the effect of interventions or causes on outcomes, could exert undue influence on clinical decisions in medicine and policy decisions in other fields such as environmental protection and climate change research. Therefore, when linear regression is applied, reporting the true PVE is warranted. Citation: Research Methods in Medicine & Health Sciences PubDate: 2023-12-02T09:09:52Z DOI: 10.1177/26320843231186398
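The paired values quoted in the abstract above are consistent with a PVE defined on the standard-deviation scale rather than the variance scale, i.e. PVE = 1 - sqrt(1 - R2). Treat this formula as an inference from the quoted numbers, not the paper's exact definition. A quick check in Python:

```python
import math

def true_pve(r_squared):
    # PVE on the standard-deviation scale: 1 - sqrt(1 - R^2).
    # (Formula inferred from the abstract's numerical examples.)
    return 1 - math.sqrt(1 - r_squared)

for r2 in (0.99, 0.80, 0.50, 0.10):
    print(r2, round(true_pve(r2), 2))  # 0.9, 0.55, 0.29, 0.05 as quoted
```

Each of the four R2 values maps exactly to the corresponding "true PVE" figure given in the abstract, which also illustrates the "overstates by up to 100%" claim (e.g. 0.10 versus 0.05).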
Authors:Kelsey L Schertz, Megan Petrik, Mariah Branson, Steven S Fu, Alexander J Rothman, Abbie Begnaud, Anne M Joseph Abstract: Research Methods in Medicine & Health Sciences, Ahead of Print. Background: Clinical trials involving pharmacologic or behavioral treatments often assess depression and suicidal ideation for purposes of screening, baseline assessment of potential moderators or mediators of treatment, or as a study outcome, even if the primary condition under study is not a mental health disorder. Suicide risk management in the context of clinical research poses significant clinical, ethical, and practical challenges, and the literature provides little guidance with respect to outcomes of suicide risk management protocols (SRMPs) or suicide risk assessment instruments deployed in the clinical research setting. Methods: We report our experience using a novel SRMP in the Program for Lung Cancer Screening and Tobacco Cessation (PLUTO) trial through in-person and remote interactions. Results: An SRMP was developed for non-clinical research staff to assess and respond to participants who express suicidal ideation. Between September 2016 and April 2021, the SRMP was used 61 times for 59 individuals. The SRMP was activated by explicit probing of suicidal ideation in 46 of 61 uses (75%). Subject risk was categorized as high-risk in 6 of 61 SRMP uses (10%). Conclusion: Our findings demonstrate a useful tool for the management of suicidal ideation and behavior in a clinical trial. Suicidal ideation may be endorsed by only a small number of study participants; however, participant safety dictates the need to develop and implement a practical SRMP. These findings may be of relevance to researchers collecting patient reported outcomes remotely. Researchers should consider available resources for SRMPs during design and start-up phases of research. Citation: Research Methods in Medicine & Health Sciences PubDate: 2023-11-04T07:26:26Z DOI: 10.1177/26320843231212427