Neuropsychology
Journal Prestige (SJR): 1.472
Citation Impact (CiteScore): 3
Number of Followers: 32

Full-text available via subscription
ISSN (Print) 0894-4105 - ISSN (Online) 1931-1559
Published by the American Psychological Association (APA)
  • Harmonization of neuropsychological and other clinical endpoints: Pitfalls and possibilities.

      Abstract: This special issue brings together different methods for improving harmonization of existing (i.e., legacy) and future research data. We expect that when these methods are fully deployed, they will benefit research on various clinical conditions by allowing researchers to explore more nuanced questions using larger and more ethnically, socially, and economically diverse samples than previously available. (PsycInfo Database Record (c) 2023 APA, all rights reserved)
      PubDate: Mon, 03 Apr 2023 00:00:00 GMT
      DOI: 10.1037/neu0000895
       
  • Concurrent validity and reliability of suicide risk assessment instruments: A meta-analysis of 20 instruments across 27 international cohorts.

      Abstract: Objective: A major limitation of current suicide research is the lack of power to identify robust correlates of suicidal thoughts or behavior. Variation in suicide risk assessment instruments used across cohorts may represent a limitation to pooling data in international consortia. Method: Here, we examine this issue through two approaches: (a) an extensive literature search on the reliability and concurrent validity of the most commonly used instruments and (b) by pooling data (N ∼ 6,000 participants) from cohorts from the Enhancing NeuroImaging Genetics Through Meta-Analysis (ENIGMA) Major Depressive Disorder and ENIGMA–Suicidal Thoughts and Behaviour working groups, to assess the concurrent validity of instruments currently used for assessing suicidal thoughts or behavior. Results: We observed moderate-to-high correlations between measures, consistent with the wide range (κ range: 0.15–0.97; r range: 0.21–0.94) reported in the literature. Two common multi-item instruments, the Columbia Suicide Severity Rating Scale and the Beck Scale for Suicidal Ideation were highly correlated with each other (r = 0.83). Sensitivity analyses identified sources of heterogeneity such as the time frame of the instrument and whether it relies on self-report or a clinical interview. Finally, construct-specific analyses suggest that suicide ideation items from common psychiatric questionnaires are most concordant with the suicide ideation construct of multi-item instruments. Conclusions: Our findings suggest that multi-item instruments provide valuable information on different aspects of suicidal thoughts or behavior but share a modest core factor with single suicidal ideation items. Retrospective, multisite collaborations including distinct instruments should be feasible provided they harmonize across instruments or focus on specific constructs of suicidality. (PsycInfo Database Record (c) 2023 APA, all rights reserved)
      PubDate: Mon, 03 Apr 2023 00:00:00 GMT
      DOI: 10.1037/neu0000850
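
The two statistics this abstract leans on are simple to compute: Pearson's r for continuous severity scores and Cohen's κ for binary classifications. A minimal sketch on simulated paired instrument scores; the variable names, cutoff, and noise levels are illustrative assumptions, not the ENIGMA pipeline:

```python
# Sketch: concurrent validity between two suicide risk instruments.
# Pearson's r for continuous scores, Cohen's kappa for binary screens.
# All data and parameters are simulated for illustration.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
true_severity = rng.normal(size=500)
cssrs = true_severity + rng.normal(scale=0.6, size=500)  # C-SSRS-like score
bss = true_severity + rng.normal(scale=0.6, size=500)    # BSS-like score

r, p = pearsonr(cssrs, bss)                        # concurrent validity (continuous)
kappa = cohen_kappa_score(cssrs > 1.0, bss > 1.0)  # agreement at an arbitrary cutoff
print(f"r = {r:.2f}, kappa = {kappa:.2f}")
```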
       
  • Measurement fidelity of clinical assessment methods in a global study on identifying reproducible brain signatures of obsessive–compulsive disorder.

      Abstract: Objective: To describe the steps of ensuring measurement fidelity of core clinical measures in a five-country study on brain signatures of obsessive–compulsive disorder (OCD). Method: We collected data using standardized instruments, which included the Yale–Brown Obsessive–Compulsive Scale (YBOCS), the Dimensional YBOCS (DYBOCS), the Brown Assessment of Beliefs Scale (BABS), the 17-item Hamilton Depression Scale (HAM-D), the Hamilton Anxiety Scale (HAM-A), and the Structured Clinical Interview for DSM-5 (SCID). Steps to ensure measurement fidelity included translating instruments, developing a clinical decision manual, and continued reliability training with 11–13 transcripts of each instrument by 13 independent evaluators across sites over 4 years. We used multigroup confirmatory factor analysis (MGCFA) to report interrater reliability (IRR) among the evaluators and the factor structure of each scale in 206 participants with OCD. Results: The overall IRR for most scales was high (ICC > 0.94) and remained good to excellent throughout the study. Consistent factor structures (configural invariance) were found for all instruments across the sites, while similarity in the factor loadings for the items (metric invariance) could be established only for the DYBOCS and the BABS. Conclusions: It is feasible to achieve measurement fidelity of clinical measures in multisite, multilinguistic global studies, despite the challenges inherent to such endeavors. Future studies should not only report IRR but also consider reporting methods of standardization of data collection and measurement invariance to identify factor structures of core clinical measures. (PsycInfo Database Record (c) 2023 APA, all rights reserved)
      PubDate: Mon, 28 Nov 2022 00:00:00 GMT
      DOI: 10.1037/neu0000849
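
The interrater reliability figure above (ICC > 0.94) is an intraclass correlation. A minimal numpy sketch of one common variant, ICC(2,1) from the Shrout and Fleiss two-way mean-square decomposition, on a hypothetical targets-by-raters matrix; the abstract does not state which ICC variant the study used:

```python
# Sketch: ICC(2,1) (two-way random effects, single rater, absolute agreement)
# via the Shrout & Fleiss mean-square decomposition. Simulated ratings.
import numpy as np

def icc_2_1(x):
    """x: (n targets, k raters) matrix of ratings."""
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)  # targets
    ms_cols = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)  # raters
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

rng = np.random.default_rng(1)
true_scores = rng.normal(size=(40, 1))
ratings = true_scores + rng.normal(scale=0.3, size=(40, 13))  # 13 raters, as in the study
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```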
       
  • Cross-national harmonization of neurocognitive assessment across five sites in a global study.

      Abstract: Objective: Cross-national work on neurocognitive testing has been characterized by inconsistent findings, suggesting the need for improved harmonization. Here, we describe a prospective harmonization approach in an ongoing global collaborative study. Method: Visuospatial N-Back, Tower of London (ToL), Stop Signal task (SST), Risk Aversion (RA), and Intertemporal Choice (ITC) tasks were administered to 221 individuals from Brazil, India, the Netherlands, South Africa, and the USA. Prospective harmonization methods were employed to ensure procedural similarity of task implementation and processing of derived task measures across sites. Generalized linear models tested for between-site differences controlling for sex, age, education, and socioeconomic status (SES). Associations with these covariates were also examined and tested for differences by site with site-by-covariate interactions. Results: The Netherlands site performed more accurately on the N-Back and ToL than the other sites, except for the USA site on the N-Back. The Netherlands and USA sites responded faster than the other three sites during the go events in the SST. Finally, the Netherlands site also exhibited a higher tolerance for delay discounting than other sites on the ITC, and the India site showed more risk aversion than other sites on the RA task. However, effect size differences across sites on the five tasks were generally small (i.e., partial eta-squared < 0.05) after dropping the Netherlands (on the ToL, N-Back, ITC, and SST tasks) and India (on the RA task). Across tasks, regardless of site, the N-Back (sex, age, education, and SES), ToL (sex, age, and SES), SST (age), and ITC (SES) showed associations with covariates. Conclusions: Four out of the five sites showed only small between-site differences for each task. Nevertheless, despite our extensive prospective harmonization steps, task performance at the Netherlands site (on four tasks) and the India site (on one task) deviated from the other sites. Because the procedural methods were standardized across sites, and our analyses were adjusted for covariates, the differences found in cognitive performance may indicate selection sampling bias due to unmeasured confounders. Future studies should follow similar cross-site prospective harmonization procedures when assessing neurocognition and consider measuring other possible confounding variables for additional statistical control. (PsycInfo Database Record (c) 2023 APA, all rights reserved)
      PubDate: Mon, 04 Jul 2022 00:00:00 GMT
      DOI: 10.1037/neu0000838
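
The analysis described above (generalized linear models with covariate adjustment, site-by-covariate interactions, and partial eta-squared effect sizes) can be sketched with statsmodels. The column names, simulated data, and single interaction term shown are assumptions for illustration, not the study's code:

```python
# Sketch: covariate-adjusted between-site comparison with a site-by-age
# interaction; partial eta-squared from a Type II ANOVA table. Simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
n = 221
df = pd.DataFrame({
    "site": rng.choice(["BR", "IN", "NL", "ZA", "US"], size=n),
    "sex": rng.choice(["F", "M"], size=n),
    "age": rng.uniform(18, 50, size=n),
    "education": rng.integers(8, 20, size=n).astype(float),
    "ses": rng.normal(size=n),
})
df["accuracy"] = 0.7 + 0.002 * df["age"] + rng.normal(scale=0.05, size=n)

model = smf.ols("accuracy ~ C(site) * age + C(sex) + education + ses", data=df).fit()
table = anova_lm(model, typ=2)

# Partial eta-squared per effect: SS_effect / (SS_effect + SS_residual).
ss_resid = table.loc["Residual", "sum_sq"]
table["partial_eta_sq"] = table["sum_sq"] / (table["sum_sq"] + ss_resid)
print(table[["sum_sq", "F", "PR(>F)", "partial_eta_sq"]])
```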
       
  • Binary classification threatens the validity of cognitive impairment detection.

      Abstract: Objective: Neuropsychological literature reports varying prevalence of cognitive impairment within patient populations, despite assessment with standardized neuropsychological tests. Within the domain of oncology, the International Cognition and Cancer Task Force (ICCTF) proposed standard cutoff points to harmonize the operationalization of cognitive impairment. We evaluated how this binary classification affects agreement between two highly comparable test batteries. Method: Two hundred non-central nervous system (non-CNS) cancer patients who had finished treatment (56% female; median age 53 years) completed traditional tests and their online equivalents in a counterbalanced design. Following ICCTF standards, impairment was defined as a score of ≥ 1.5 standard deviations (SDs) below normative means on two tests and/or ≥ 2 SDs below normative means on one test. Agreement of classification between traditional and online assessment was evaluated using Cohen’s κ. Additional Monte Carlo simulations were conducted to demonstrate how different cutoff points and test characteristics affect agreement. Results: The correlation between total scores of traditional and online assessment was .78. Proportions of impaired patients did not differ between assessment methods: 40% using traditional tests and 38% using online equivalents, χ²(1) = .17, p = .68. Nevertheless, within-person agreement in impairment classification between traditional and online assessment was merely fair (κ = .35). Monte Carlo simulations showed similarly low agreement scores (κ = .41 for the 1.5 SD criterion; κ = .33 for the 2 SD criterion). Conclusions: Our results show that binary classification can lead to a situation where two highly similar batteries fail to identify the same individuals as impaired. Additional simulations suggest that within-person agreement between assessment methods using binary classification is inherently low. Modern statistical tools may help improve the validity of impairment detection. (PsycInfo Database Record (c) 2023 APA, all rights reserved)
      PubDate: Mon, 04 Jul 2022 00:00:00 GMT
      DOI: 10.1037/neu0000831
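
The Monte Carlo logic reported above can be reproduced in a few lines: draw a latent ability, generate two battery scores correlated at roughly the observed .78, dichotomize each at an ICCTF-style cutoff, and compute within-person agreement. This is a simplified single-score version of the criterion (the actual rule combines two tests at 1.5 SD or one at 2 SD), with parameters assumed for illustration:

```python
# Sketch: why binary impairment classification yields low within-person
# agreement even between highly correlated batteries. Simplified simulation.
import numpy as np

rng = np.random.default_rng(3)
n, rho, cutoff = 100_000, 0.78, -1.5  # cross-battery correlation ~.78; 1.5 SD rule

true = rng.normal(size=n)
trad = np.sqrt(rho) * true + np.sqrt(1 - rho) * rng.normal(size=n)    # traditional
online = np.sqrt(rho) * true + np.sqrt(1 - rho) * rng.normal(size=n)  # online

imp_t, imp_o = trad < cutoff, online < cutoff

po = np.mean(imp_t == imp_o)                      # observed agreement
pe = (imp_t.mean() * imp_o.mean()
      + (1 - imp_t.mean()) * (1 - imp_o.mean()))  # agreement expected by chance
kappa = (po - pe) / (1 - pe)
print(f"impaired: {imp_t.mean():.1%} vs. {imp_o.mean():.1%}; kappa = {kappa:.2f}")
```

Despite the .78 correlation between scores, κ lands well below it, mirroring the abstract's point that the dichotomization itself, not the instruments, caps within-person agreement.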
       
  • Harmonization of the English and Spanish versions of the NIH Toolbox Cognition Battery crystallized and fluid composite scores.

      Abstract: Objective: The National Institutes of Health Toolbox Cognition Battery (NIHTB-CB) has both English- and Spanish-language versions producing crystallized and fluid cognition composite scores. This study examined measurement invariance between languages of administration. If established, measurement invariance would indicate that the composite scores measure the same construct across languages and provide scores that can be meaningfully compared and harmonized in future analyses. Method: Participants from the NIHTB-CB normative sample included adults tested in English (n = 1,038; M = 49.1 years old, SD = 18.6) or Spanish (n = 408; M = 44.1 years old, SD = 16.7). Participants completed seven NIHTB-CB tests: two measuring crystallized cognition and five measuring fluid cognition. Each test score was converted to an age-adjusted standard score or demographic-adjusted T score. A two-factor model (i.e., crystallized cognition and fluid cognition factors) was evaluated using confirmatory factor analysis. Measurement invariance was evaluated by fitting the two-factor model for each language of administration and constraining model parameters to be equivalent across languages, testing configural, weak, strong, and strict models. Results: For age-adjusted and demographic-adjusted scores, the two-factor model fit adequately well, and each factor had adequate reliability among English- and Spanish-speaking participants. Strict invariance was established across languages of administration for both age-adjusted and demographic-adjusted scores. Conclusions: These findings support the harmonization of the English- and Spanish-language NIHTB-CB crystallized and fluid composite scores, indicating that the composite scores measure the same constructs on the same scale. The results support future studies merging data from participants evaluated in both languages. (PsycInfo Database Record (c) 2023 APA, all rights reserved)
      PubDate: Thu, 02 Jun 2022 00:00:00 GMT
      DOI: 10.1037/neu0000822
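
Invariance testing of the kind described above proceeds by fitting increasingly constrained multigroup models (configural, weak, strong, strict) and comparing nested fits. One conventional comparison is the chi-square difference test; the abstract does not report the study's fit statistics, so the numbers below are invented for illustration:

```python
# Sketch: chi-square difference (likelihood-ratio) test between nested
# multigroup CFA models. Fit statistics are made up for illustration.
from scipy.stats import chi2

def delta_chi2(chi2_constrained, df_constrained, chi2_free, df_free):
    """Compare a constrained invariance model against a freer one."""
    d = chi2_constrained - chi2_free
    ddf = df_constrained - df_free
    return d, ddf, chi2.sf(d, ddf)

# e.g., configural (free) vs. weak/metric (loadings equated across languages)
d, ddf, p = delta_chi2(chi2_constrained=310.2, df_constrained=96,
                       chi2_free=301.5, df_free=91)
print(f"delta-chi2({ddf}) = {d:.1f}, p = {p:.3f}")  # nonsignificant -> constraints hold
```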
       
  • Challenges and opportunities for harmonization of cross-cultural neuropsychological data.

      Abstract: Objective: In this position article, we highlight the importance of considering cultural and linguistic variables that influence neuropsychological test performance and the possible moderating impact on our understanding of brain/behavior relationships. Increasingly, neuropsychologists are realizing that cultural and language differences between countries, regions, and ethnic groups influence neuropsychological outcomes, as test scores may not have the same interpretative meaning across cultures. Furthermore, attempts to apply the same norms across diverse populations without accounting for culture and language variations will result in serious ethical problems, such as misdiagnosis of clinical conditions and inaccurate interpretations of research outcomes. Given the lack of normative data for ethnically and linguistically diverse communities, it is often challenging to merge data across diverse populations to investigate research questions of global significance. Methodological Considerations: We highlight some of the inherent challenges, limitations, and opportunities for efforts to harmonize cross-cultural neuropsychological data. We also explore some of the cultural factors that should be considered when attempting to harmonize cross-cultural neuropsychological data, sources of variance that should be accounted for in data analyses, and the need to identify evaluative criteria for interpreting data outcomes of cross-cultural harmonization approaches. Conclusion: In the future, it will be important to further solidify principles for aggregating data across diverse cultural and linguistic cohorts, to validate whether assumptions about the relationship between neuropsychological measures and the brain and/or behavior hold for individuals from diverse cultural and linguistic backgrounds, and to develop methods for evaluating the relative success of data harmonization efforts. (PsycInfo Database Record (c) 2023 APA, all rights reserved)
      PubDate: Thu, 12 May 2022 00:00:00 GMT
      DOI: 10.1037/neu0000818
       
  • A cultural neuropsychological approach to harmonization of cognitive data across culturally and linguistically diverse older adult populations.

      Abstract: Objective: To describe a cultural neuropsychological approach to prestatistical harmonization of cognitive data across the United States (U.S.) and Mexico with the Harmonized Cognitive Assessment Protocol (HCAP). Method: We performed a comprehensive review of the administration, scoring, and coding procedures for each cognitive test item administered across the English and Spanish versions of the HCAP in the Health and Retirement Study (HRS) in the U.S. and the Ancillary Study on Cognitive Aging in Mexico (Mex-Cog). For items that were potentially equivalent across studies, we compared each cognitive test item for linguistic and cultural equivalence and classified items as confident or tentative linking items, based on the degree of confidence in their comparability across cohorts and language groups. We evaluated these classifications using differential item functioning techniques. Results: We evaluated 132 test items among 21 cognitive instruments in the HCAP across the HRS and Mex-Cog. We identified 72 confident linking items, 46 tentative linking items, and 14 items that were not comparable across cohorts. Measurement invariance analysis revealed that 64% of the confident linking items and 83% of the tentative linking items showed statistical evidence of measurement differences across cohorts. Conclusions: Prestatistical harmonization of cognitive data, performed by a multidisciplinary and multilingual team including cultural neuropsychologists, can identify differences in cognitive construct measurement across languages and cultures that may not be identified by statistical procedures alone. (PsycInfo Database Record (c) 2023 APA, all rights reserved)
      PubDate: Thu, 28 Apr 2022 00:00:00 GMT
      DOI: 10.1037/neu0000816
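
One standard differential item functioning technique of the kind the abstract invokes is the logistic-regression approach of Swaminathan and Rogers: regress item correctness on a matching total score, cohort, and their interaction; a significant cohort main effect flags uniform DIF, and a significant interaction flags nonuniform DIF. A hypothetical sketch (simulated data, not the HCAP analysis):

```python
# Sketch: logistic-regression DIF screen for one linking item across two
# cohorts (labels echo the study: HRS vs. Mex-Cog). Simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 2000
df = pd.DataFrame({
    "cohort": rng.choice(["HRS", "MexCog"], size=n),
    "total": rng.normal(size=n),  # matching variable: overall cognitive score
})
# Simulate uniform DIF: the item is easier in one cohort at equal total score.
logits = 0.2 + 1.2 * df["total"] + 0.5 * (df["cohort"] == "MexCog")
df["item"] = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

fit = smf.logit("item ~ total + C(cohort) + total:C(cohort)", data=df).fit(disp=0)
print(fit.summary2().tables[1])  # cohort row: uniform DIF; interaction: nonuniform
```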
       
  • Impact of word properties on list learning: An explanatory item analysis.

      Abstract: Objective: A variety of factors affect list learning performance, and relatively few studies have examined the impact of word selection on these tests. This study examines the effect of both language and memory processing of individual words on list learning. Method: Item-response data from 1,219 participants in the Harmonized Cognitive Assessment Protocol were used (mean age = 74.41 years, SD = 7.13; mean education = 13.30 years, SD = 2.72). A Bayesian generalized (non)linear multilevel modeling framework was used to specify the measurement and explanatory item-response theory models. Explanatory effects on items due to learning over trials, serial position of words, and six word properties obtained through the English Lexicon Project were modeled. Results: A two-parameter logistic (2PL) model with trial-specific learning effects produced the best measurement fit. Evidence of the serial position effect on word learning was observed. Robust positive effects on word learning were observed for body-object integration, while robust negative effects were observed for word frequency, concreteness, and semantic diversity. A weak negative effect of average age of acquisition and a weak positive effect for the number of phonemes in the word were also observed. Conclusions: Results demonstrate that list learning performance depends on factors beyond the repetition of words. Identification of item factors that predict learning could extend to a range of test development problems including translation, form equating, item revision, and item bias. In data harmonization efforts, these methods can also be used to help link tests via shared item features and testing of whether these features are equally explanatory across samples. (PsycInfo Database Record (c) 2023 APA, all rights reserved)
      PubDate: Thu, 21 Apr 2022 00:00:00 GMT
      DOI: 10.1037/neu0000810
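
The measurement model named above is the two-parameter logistic, and in the explanatory extension the item difficulty is itself regressed on word properties. A small numpy sketch of that idea; the coefficients are invented, with signs chosen only to mirror the reported directions of effect:

```python
# Sketch: 2PL response function with item difficulty modeled as a linear
# function of word properties (explanatory IRT). Coefficients are invented.
import numpy as np

def p_recall(theta, a, b):
    """2PL: P(recall word) given ability theta, discrimination a, difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def difficulty(boi, freq, concreteness, sem_div):
    # Body-object integration eased recall in the study (lower difficulty);
    # frequency, concreteness, and semantic diversity hindered it (higher).
    return -0.4 * boi + 0.3 * freq + 0.2 * concreteness + 0.2 * sem_div

theta = np.linspace(-3, 3, 7)  # ability grid
b = difficulty(boi=1.0, freq=-0.5, concreteness=0.0, sem_div=0.5)
print(np.round(p_recall(theta, a=1.3, b=b), 2))
```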
       
  • Development and application of the International Classification of Cognitive Disorders in Epilepsy (IC-CoDE): Initial results from a multi-center study of adults with temporal lobe epilepsy.

      Abstract: [Correction Notice: An Erratum for this article was reported online in Neuropsychology on Sep 15 2022 (see record 2023-01997-001). In the original article, there was an error in Figure 2. In the box at the top left of the figure, the fourth explanation incorrectly stated, “Generalized impairment = At least one test < −1.0 or −1.5SD in three or more domains.” The correct wording is “Generalized impairment = At least two tests < −1.0 or −1.5SD in each of three or more domains.” All versions of this article have been corrected.] Objective: To describe the development and application of a consensus-based, empirically driven approach to cognitive diagnostics in epilepsy research—The International Classification of Cognitive Disorders in Epilepsy (IC-CoDE) and to assess the ability of the IC-CoDE to produce definable and stable cognitive phenotypes in a large, multi-center temporal lobe epilepsy (TLE) patient sample. Method: Neuropsychological data were available for a diverse cohort of 2,485 patients with TLE across seven epilepsy centers. Patterns of impairment were determined based on commonly used tests within five cognitive domains (language, memory, executive functioning, attention/processing speed, and visuospatial ability) using two impairment thresholds (≤1.0 and ≤1.5 standard deviations below the normative mean). Cognitive phenotypes were derived across samples using the IC-CoDE and compared to distributions of phenotypes reported in existing studies. Results: Impairment rates were highest on tests of language, followed by memory, executive functioning, attention/processing speed, and visuospatial ability. Application of the IC-CoDE using varying operational definitions of impairment (≤ 1.0 and ≤ 1.5 SD) produced cognitive phenotypes with the following distribution: cognitively intact (30%–50%), single-domain (26%–29%), bi-domain (14%–19%), and generalized (10%–22%) impairment. Application of the ≤ 1.5 cutoff produced a distribution of phenotypes that was consistent across cohorts and approximated the distribution produced using data-driven approaches in prior studies. Conclusions: The IC-CoDE is the first iteration of a classification system for harmonizing cognitive diagnostics in epilepsy research that can be applied across neuropsychological tests and TLE cohorts. This proof-of-principle study in TLE offers a promising path for enhancing research collaborations globally and accelerating scientific discoveries in epilepsy. (PsycInfo Database Record (c) 2023 APA, all rights reserved)
      PubDate: Thu, 27 Jan 2022 00:00:00 GMT
      DOI: 10.1037/neu0000792
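
With the corrected Figure 2 wording, the IC-CoDE decision rule is mechanical enough to sketch: a domain counts as impaired when at least two of its tests fall below the chosen cutoff, and the phenotype follows from the number of impaired domains. This is one reading of the abstract, not the official implementation:

```python
# Sketch: IC-CoDE-style phenotyping per the abstract's (corrected) rule:
# a domain is impaired if >= 2 of its tests fall at or below the cutoff;
# 3+ impaired domains = generalized impairment. Illustrative, not official.
from typing import Dict, List

def iccode_phenotype(domain_z: Dict[str, List[float]], cutoff: float = -1.5) -> str:
    """domain_z maps each cognitive domain to its tests' z scores."""
    impaired = sum(
        1 for scores in domain_z.values()
        if sum(z <= cutoff for z in scores) >= 2
    )
    if impaired == 0:
        return "cognitively intact"
    if impaired == 1:
        return "single-domain impairment"
    if impaired == 2:
        return "bi-domain impairment"
    return "generalized impairment"

patient = {
    "language": [-1.8, -1.6, -0.4],
    "memory": [-1.7, -2.1],
    "executive functioning": [-0.9, -1.1],
    "attention/processing speed": [-0.2, -1.0],
    "visuospatial": [0.3, -0.5],
}
print(iccode_phenotype(patient))  # -> bi-domain impairment
```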
       
 