
Publisher: Hogrefe and Huber Publishing Group   (Total: 32 journals)

Aviation Psychology and Applied Human Factors     Hybrid Journal   (Followers: 5)
Crisis: The J. of Crisis Intervention and Suicide Prevention     Hybrid Journal   (Followers: 15, SJR: 0.58, h-index: 27)
Diagnostica     Hybrid Journal   (Followers: 1, SJR: 0.373, h-index: 27)
European J. of Psychological Assessment     Hybrid Journal   (Followers: 2, SJR: 0.755, h-index: 30)
European Psychologist     Hybrid Journal   (Followers: 6, SJR: 0.543, h-index: 22)
Experimental Psychology     Hybrid Journal   (Followers: 16, SJR: 1.134, h-index: 33)
Forum Psychotherapeutische Praxis     Hybrid Journal   (Followers: 1)
Frühe Bildung     Hybrid Journal   (Followers: 1)
GeroPsych: The J. of Gerontopsychology and Geriatric Psychiatry     Hybrid Journal   (Followers: 3, SJR: 0.155, h-index: 9)
J. of Individual Differences     Hybrid Journal   (Followers: 11, SJR: 0.566, h-index: 14)
J. of Media Psychology     Hybrid Journal   (Followers: 10, SJR: 0.927, h-index: 7)
J. of Personnel Psychology     Hybrid Journal   (Followers: 7, SJR: 0.823, h-index: 5)
J. of Psychophysiology     Hybrid Journal   (Followers: 1)
Kindheit und Entwicklung     Hybrid Journal   (SJR: 0.799, h-index: 25)
Lernen und Lernstörungen     Hybrid Journal  
Methodology: European J. of Research Methods for the Behavioral and Social Sciences     Hybrid Journal   (Followers: 12, SJR: 0.334, h-index: 13)
Musik-, Tanz- und Kunsttherapie     Hybrid Journal  
Psychologische Rundschau     Hybrid Journal   (Followers: 3, SJR: 0.236, h-index: 13)
Rorschachiana     Hybrid Journal   (SJR: 0.144, h-index: 3)
Social Psychology     Hybrid Journal   (Followers: 8, SJR: 0.868, h-index: 14)
SUCHT - Zeitschrift für Wissenschaft und Praxis / J. of Addiction Research and Practice     Hybrid Journal  
Zeitschrift für Psychologie / J. of Psychology     Hybrid Journal   (Followers: 1)
Zeitschrift für Arbeits- und Organisationspsychologie A&O     Hybrid Journal   (Followers: 2, SJR: 0.164, h-index: 10)
Zeitschrift für Entwicklungspsychologie und Pädagogische Psychologie     Hybrid Journal   (SJR: 0.261, h-index: 13)
Zeitschrift für Gesundheitspsychologie     Hybrid Journal   (SJR: 0.2, h-index: 4)
Zeitschrift für Kinder- und Jugendpsychiatrie und Psychotherapie     Hybrid Journal   (Followers: 1)
Zeitschrift für Klinische Psychologie und Psychotherapie     Hybrid Journal   (SJR: 0.231, h-index: 19)
Zeitschrift für Neuropsychologie     Hybrid Journal  
Zeitschrift für Pädagogische Psychologie     Full-text available via subscription  
Zeitschrift für Psychiatrie, Psychologie und Psychotherapie     Full-text available via subscription  
Zeitschrift für Psychologie     Hybrid Journal   (Followers: 2)
Zeitschrift für Sportpsychologie     Hybrid Journal   (Followers: 1, SJR: 0.173, h-index: 4)
European Journal of Psychological Assessment
   [4 followers]
   Hybrid Journal (may contain Open Access articles)
     ISSN (Print) 1015-5759 - ISSN (Online) 2151-2426
     Published by Hogrefe and Huber Publishing Group  [32 journals]   [SJR: 0.755]   [H-I: 30]
  • How Test Takers See Test Examiners
    • Abstract: We addressed potential test takers’ preferences for women or men as examiners, as well as how examiners were perceived depending on their gender. We employed an online design with 375 students who provided preferences for and ratings of examiners based on short video clips. The clips showed four out of 15 psychologists who differed in age (young vs. middle-aged) and gender giving an introduction to a fictional intelligence test session. Employing multivariate multilevel analyses, we found that female examiners were perceived as more socially competent and middle-aged examiners as more competent. Data analyses also revealed a significant preference for choosing women as examiners. Results are discussed with reference to test performance and fairness.
      Content Type Journal Article
      Category Original Article
      Pages 1-9

      DOI 10.1027/1015-5759/a000232

      Authors
      Isabella Vormittag, Department of Education and Psychology, Division for Psychological Assessment, Free University Berlin, Germany
      Tuulia M. Ortner, Department of Psychology, Division for Psychological Assessment, University of Salzburg, Austria
      Tobias Koch, Department of Education and Psychology, Division for Methods and Evaluation, Free University Berlin, Germany
      Journal European Journal of Psychological Assessment
      Online ISSN 2151-2426
      Print ISSN 1015-5759
      PubDate: Wed, 10 Dec 2014 16:21:15 GMT
       
  • The Dutch Symptom Checklist-90-Revised
    • Abstract: The Symptom Checklist-90-Revised (SCL-90-R; Derogatis, 1977, 1994) was constructed to measure both general psychological distress and specific primary symptoms of distress. In this study, we evaluated to what extent the scale scores of the Dutch SCL-90-R reflect general and/or specific aspects of psychological distress in a psychiatric outpatient sample (N = 1,842), using a hierarchical factor model. The results revealed that the total scale score measures general psychological distress with high reliability. The subscale scores Sleep Difficulties, Agoraphobia, Hostility, and Somatization reflect their specific primary symptoms reasonably well, with high reliability. The Depression subscale score hardly measures specific symptoms of distress; instead, it largely reflects the same general construct as the total scale of the SCL-90-R. The use of the Depression subscale score beyond the total scale score therefore appears to be of limited value in clinical practice.
      Content Type Journal Article
      Category Original Article
      Pages 1-9

      DOI 10.1027/1015-5759/a000233

      Authors
      Iris A. M. Smits, University of Groningen, Faculty of Behavioural and Social Sciences, The Netherlands
      Marieke E. Timmerman, University of Groningen, Faculty of Behavioural and Social Sciences, The Netherlands
      Dick P. H. Barelds, University of Groningen, Faculty of Behavioural and Social Sciences, The Netherlands
      Rob R. Meijer, University of Groningen, Faculty of Behavioural and Social Sciences, The Netherlands
      Journal European Journal of Psychological Assessment
      Online ISSN 2151-2426
      Print ISSN 1015-5759
      PubDate: Tue, 09 Dec 2014 13:26:42 GMT
       
  • Detection of Differential Item Functioning in the Cornell Critical Thinking Test Between Turkish and United States Students
    • Abstract: Critical thinking is a broad term that includes core elements such as reasoning, evaluating, and metacognition that should be transferred to students in educational systems. The integration of such skills into models of student success is increasing on an international scale. The Cornell Critical Thinking Test (CCTT) is an internationally used tool to assess critical thinking skills. However, limited validity evidence exists for the translated versions of the instrument to support inferences based on CCTT scores. This study examined the Turkish version of the CCTT. Specifically, translated items were examined for measurement equivalence by determining whether items function differently across students from the United States and Turkey. Differential Item Functioning (DIF) analysis via logistic regression was employed. Results demonstrated that each subtest contained DIF items, and 10% of the items in the instrument were identified as showing DIF. Mean differences between students in each country were not influenced by these items. A critical content review of the translated items gave insight into why items may function differently.
      Content Type Journal Article
      Category Original Article
      Pages 1-9

      DOI 10.1027/1015-5759/a000230

      Authors
      Hafize Sahin, Washington State University, Pullman, WA, USA
      Brian F. French, Washington State University, Pullman, WA, USA
      Brian Hand, University of Iowa, Iowa City, IA, USA
      Murat Gunel, TED University, Ankara, Turkey
      Journal European Journal of Psychological Assessment
      Online ISSN 2151-2426
      Print ISSN 1015-5759
      PubDate: Tue, 09 Dec 2014 13:26:42 GMT
       
  • Brief Form of the Interpersonal Competence Questionnaire (ICQ-15)
    • Abstract: The Interpersonal Competence Questionnaire (ICQ) developed by Buhrmester and colleagues (1988) in the US assesses the multidimensional construct of social competence via five distinct, but related subscales. Two versions comprising 40 and 30 items, respectively, are available in German. The purpose of the current study is to develop and validate a brief version of the ICQ among a large adult sample that is representative of the German general population. Data were collected from 2,009 participants. Three confirmatory factor analyses (CFA) were conducted in order to develop and validate the ICQ-15. Cronbach’s alpha coefficients were computed for the ICQ-15. An initial CFA with the ICQ-30 formed the basis for the selection of the items to be included in the ICQ-15. Two subsequent CFAs with the ICQ-15 revealed an excellent fit of the hypothesized five-factor model to the observed data. Internal consistency coefficients were in the adequate range. This preliminary evaluation shows that the ICQ-15 is a structurally valid measure of interpersonal competence recommended for research contexts with limited assessment time and for psychotherapy progress tracking in clinical settings.
      Content Type Journal Article
      Category Original Article
      Pages 1-8

      DOI 10.1027/1015-5759/a000234

      Authors
      Adina Coroiu, Department of Educational and Counselling Psychology, McGill University, Montreal, Quebec, Canada
      Alexandra Meyer, Department of Psychosomatic Medicine and Psychotherapy, University Medical Center, Mainz, Germany
      Carlos A. Gomez-Garibello, Department of Educational and Counselling Psychology, McGill University, Montreal, Quebec, Canada
      Elmar Brähler, Department of Psychosomatic Medicine and Psychotherapy, University Medical Center, Mainz, Germany
      Aike Hessel, Pension Insurance Oldenburg-Bremen, Coordination Management – Social Medicine, Bremen, Germany
      Annett Körner, Department of Educational and Counselling Psychology, McGill University, Montreal, Quebec, Canada
      Journal European Journal of Psychological Assessment
      Online ISSN 2151-2426
      Print ISSN 1015-5759
      PubDate: Tue, 09 Dec 2014 13:26:42 GMT
       
  • The Answer Is Blowing in the Wind
    • Abstract: This study examined the effects of weather on personality self-ratings. Single-assessment data were derived from the German General Social Survey conducted in 2008. For a subset of the participants (N = 478), official weather station data for the day a personality inventory was completed could be determined. Among these respondents, 140 (29%) completed the personality inventory on an unambiguously sunny day, 59 (12%) completed the measure on an unambiguously rainy day, and 279 (59%) completed the questionnaire on a day characterized by mixed weather conditions. Results revealed that self-ratings for some personality domains differed depending on the weather conditions on the day the inventory was completed. When compared with corresponding self-ratings collected under mixed weather conditions, ratings for the Big Five dimension of Openness to Experience were significantly lower on rainy days and ratings for Conscientiousness were significantly lower on sunny days. These results are suggestive of some limitations on the assumed situational independence of trait ratings.
      Content Type Journal Article
      Category Original Article
      Pages 1-7

      DOI 10.1027/1015-5759/a000236

      Authors
      Beatrice Rammstedt, GESIS – Leibniz Institute for the Social Sciences, Mannheim, Germany
      Michael Mutz, Georg-August-University Göttingen, Germany
      Richard F. Farmer, Oregon Research Institute, Eugene, OR, USA
      Journal European Journal of Psychological Assessment
      Online ISSN 2151-2426
      Print ISSN 1015-5759
      PubDate: Tue, 09 Dec 2014 13:26:41 GMT
       
  • Experience and Diagnostic Anchors in Referral Letters
    • Abstract: The present study investigated whether diagnostic anchors, that is, diagnoses suggested in referral letters, influence judgments made by clinical psychologists with different levels of experience. Moderately experienced clinicians (n = 98) and very experienced clinicians (n = 126) were randomly assigned to read a referral letter suggesting either depression or anxiety, or no referral letter. They then read a psychiatric report about a depressed patient and gave a preliminary and a final diagnosis. Results showed that the correctness of the diagnoses by very experienced clinicians was unaffected by the referral diagnosis. Moderately experienced clinicians did use the suggested diagnosis as an anchor: when they had read a referral letter suggesting depressive complaints, they were more inclined to classify the patient with a depressive disorder. In conclusion, the diagnosis in a referral letter influences the diagnostic decision made by moderately experienced clinicians.
      Content Type Journal Article
      Category Original Article
      Pages 1-7

      DOI 10.1027/1015-5759/a000235

      Authors
      Nanon L. Spaanjaars, Diagnostic Decision Making, Behavioural Science Institute, Radboud University Nijmegen, The Netherlands
      Marleen Groenier, Instructional Technology, University of Twente, Enschede, The Netherlands
      Monique O. M. van de Ven, Diagnostic Decision Making, Behavioural Science Institute, Radboud University Nijmegen, The Netherlands
      Cilia L. M. Witteman, Diagnostic Decision Making, Behavioural Science Institute, Radboud University Nijmegen, The Netherlands
      Journal European Journal of Psychological Assessment
      Online ISSN 2151-2426
      Print ISSN 1015-5759
      PubDate: Tue, 09 Dec 2014 13:26:41 GMT
       
  • Factor Structure of the Ruminative Responses Scale
    • Abstract: The 10-item Ruminative Responses Scale is used to measure two facets of rumination: brooding and reflection. These subscales are used to seek differential correlations with other variables of interest (e.g., depression). The validity of these facets, however, is questionable because brooding and reflection were distinguished based on factor analyses, but subsequent analyses have been inconsistent. We investigated these facets using factor analyses in a large community-based sample (N = 625). Other measures of rumination and depression were used as criteria for validity analyses. Only the brooding items formed a robust scale. A consistent reflection factor did not emerge. Brooding showed convergent validity with other measures of rumination as well as depression, all rs > .4. Brooding was also higher among participants with a history of depression compared with never-depressed participants. Implications for the interpretation of past research and for conducting future research are discussed.
      Content Type Journal Article
      Category Original Article
      Pages 1-7

      DOI 10.1027/1015-5759/a000231

      Authors
      James W. Griffith, Department of Medical Social Sciences, Northwestern University, Chicago, IL, USA
      Filip Raes, Centre for Learning and Experimental Psychopathology, KU Leuven, Belgium
      Journal European Journal of Psychological Assessment
      Online ISSN 2151-2426
      Print ISSN 1015-5759
      PubDate: Tue, 09 Dec 2014 13:26:41 GMT
       
  • Stop and State Your Intentions!
    • Abstract: Stop and State Your Intentions!
      Content Type Journal Article
      Category Editorial
      Pages 239-242

      DOI 10.1027/1015-5759/a000228

      Authors
      Matthias Ziegler, Humboldt-Universität zu Berlin, Germany
      Journal European Journal of Psychological Assessment
      Online ISSN 2151-2426
      Print ISSN 1015-5759
      Journal Volume Volume 30
      Journal Issue Volume 30, Number 4 / 2014
      PubDate: Fri, 07 Nov 2014 13:58:12 GMT
       
  • Ad Hoc Reviewers 2014
    • Abstract: Ad Hoc Reviewers 2014
      Content Type Journal Article
      Category Volume Information
      Pages 315-316

      DOI 10.1027/1015-5759/a000229
      Journal European Journal of Psychological Assessment
      Online ISSN 2151-2426
      Print ISSN 1015-5759
      Journal Volume Volume 30
      Journal Issue Volume 30, Number 4 / 2014
      PubDate: Fri, 07 Nov 2014 13:58:12 GMT
       
  • Substance and Artifact in Interest Self-Reports
    • Abstract: Although self-reports are often distorted by response biases, nearly all knowledge about interests relies on self-reports. This multiple-rater twin study investigated the degree to which interest self-reports reflect substance. Specifically, we examined whether genetic variance in interest self-reports reflects substance in terms of genetically based motivational attributes or artifact in terms of genetically influenced self-rater biases. We compared normative and ipsatized self- and peer reports on interests from 844 individuals (incl. 225 monozygotic and 113 dizygotic twin pairs) regarding psychometric qualities and regarding the estimates of genetic and environmental components in self-other agreement and self-rater specificity. Ipsatized interest scores showed lower internal consistency but higher consensus and self-other agreement. Self-other agreement showed a large genetic component, whereas variance specific to self-reports was not significantly attributable to genetic influences. The results provide strong support that genetic variance in interest self-reports reflects substance rather than artifact.
      Content Type Journal Article
      Category Original Article
      Pages 1-8

      DOI 10.1027/1015-5759/a000222

      Authors
      Annika Nelling, Department of Psychology, Bielefeld University, Germany
      Christian Kandler, Department of Psychology, Bielefeld University, Germany
      Rainer Riemann, Department of Psychology, Bielefeld University, Germany
      Journal European Journal of Psychological Assessment
      Online ISSN 2151-2426
      Print ISSN 1015-5759
      PubDate: Fri, 17 Oct 2014 17:56:24 GMT
       
  • Hungarian Validation of the Penn State Worry Questionnaire (PSWQ)
    • Abstract: The Hungarian version of the Penn State Worry Questionnaire (PSWQ) was validated in two studies, using five different samples. Study 1 tested the factor structure and internal consistency of the PSWQ in two undergraduate student samples, comparing the psychometric properties of the paper-pencil and the online versions of the scale. Study 2 assessed construct validity in two undergraduate student samples and in a sample of patients diagnosed with Generalized Anxiety Disorder (GAD) and matched control participants. Our results suggest that the Hungarian PSWQ demonstrates good psychometric properties. We found no difference between the online and the paper-pencil versions of the scale. A factor structure with one general worry factor and two method factors representing wording effects showed the best fit to the data.
      Content Type Journal Article
      Category Original Article
      Pages 1-7

      DOI 10.1027/1015-5759/a000221

      Authors
      Péter Pajkossy, Department of Cognitive Science, Budapest University of Technology and Economics, Budapest, Hungary
      Péter Simor, Department of Cognitive Science, Budapest University of Technology and Economics, Budapest, Hungary
      István Szendi, Department of Psychiatry, University of Szeged, Hungary
      Mihály Racsmány, Department of Cognitive Science, Budapest University of Technology and Economics, Budapest, Hungary
      Journal European Journal of Psychological Assessment
      Online ISSN 2151-2426
      Print ISSN 1015-5759
      PubDate: Fri, 17 Oct 2014 17:56:24 GMT
       
  • Extending the Assessment of Complex Problem Solving to Finite State Automata
    • Abstract: Recent advancements in the assessment of Complex Problem Solving (CPS) build on the use of homogeneous tasks that enable the reliable estimation of CPS skills. The range of problems featured in established instruments such as MicroDYN is consequently limited to a specific subset of homogeneous complex problems. This restriction is problematic when looking at domain-specific examples of complex problems, which feature characteristics absent from current assessment instruments (e.g., threshold states). We propose to utilize the formal framework of Finite State Automata (FSA) to extend the range of problems included in CPS assessment. An approach based on FSA, called MicroFIN, is presented, translated into specific tasks, and empirically investigated. We conducted an empirical study (N = 576) (1) inspecting the psychometric features of MicroFIN, (2) relating it to MicroDYN, and (3) investigating the relations to a measure of reasoning (i.e., the CogAT). MicroFIN (1) exhibited adequate measurement characteristics, (2) multitrait-multimethod models indicated convergence with the latent dimensions measured by MicroDYN, and (3) relations to reasoning were moderate and comparable to those previously found for MicroDYN. Empirical results and corresponding explanations are discussed. More importantly, MicroFIN highlights the feasibility of expanding CPS assessment to a larger spectrum of complex problems.
      Content Type Journal Article
      Category Original Article
      Pages 1-14

      DOI 10.1027/1015-5759/a000224

      Authors
      Jonas C. Neubert, University of Luxembourg, Luxembourg
      André Kretzschmar, University of Luxembourg, Luxembourg
      Sascha Wüstenberg, University of Luxembourg, Luxembourg
      Samuel Greiff, University of Luxembourg, Luxembourg
      Journal European Journal of Psychological Assessment
      Online ISSN 2151-2426
      Print ISSN 1015-5759
      PubDate: Fri, 17 Oct 2014 17:56:23 GMT
       
  • Impact of Verb Tense on Response to the Four-Dimensional Symptom Questionnaire (4DSQ)
    • Abstract: The verb tense of a questionnaire hypothetically might influence the way people respond to its items. We examined the effects of the verb tense on the response to the Four-Dimensional Symptom Questionnaire (4DSQ) in a population-based sample (N = 4,959; present tense N = 605; past tense N = 4,354). We determined whether the verb tense impacted the overall response rate, the scale structure, differential item functioning, reliability, proportions of respondents scoring above a range of cut-offs, and mean scale scores. The verb tense did not influence response rate, scale structure, item functioning, and reliability. The present tense increased the number of respondents scoring above very low cut-offs for distress and somatization. The effect on mean scale scores was limited and of little clinical importance.
      Content Type Journal Article
      Category Original Article
      Pages 1-9

      DOI 10.1027/1015-5759/a000226

      Authors
      Berend Terluin, VU University Medical Center, Amsterdam, The Netherlands
      Miquelle A. G. Marchand, CentERdata Institute for Data Collection and Research, Tilburg University, The Netherlands
      Henrica C. W. de Vet, Department of Epidemiology and Biostatistics, EMGO Institute for Health and Care Research, VU University Medical Center, Amsterdam, The Netherlands
      Evelien P. M. Brouwers, Scientific Center for Care and Welfare (Tranzo), Tilburg University, The Netherlands
      Journal European Journal of Psychological Assessment
      Online ISSN 2151-2426
      Print ISSN 1015-5759
      PubDate: Fri, 17 Oct 2014 17:56:21 GMT
       
  • Exploring Outcome and Validity of the GAF in Psychiatric Inpatient Care
    • Abstract: A small number of studies have investigated predictive factors in relation to the Global Assessment of Functioning (GAF) scale. This study aimed to explore the influence of clinical and socio-demographic factors in a psychiatric inpatient setting in relation to treatment outcome measured by the GAF. The studied psychiatric inpatient sample consisted of 816 episodes of care, with GAF ratings made at both admission and discharge. Multiple linear regressions were performed to analyze which variables predicted GAF scores at admission and at discharge. Significant predictors of GAF scores at admission were age, schizophrenia, other psychotic disorders, and no registered diagnosis. GAF scores at admission, patients’ diagnoses, and ward affiliation significantly predicted GAF at discharge. Specialized wards did not necessarily deliver the best treatment results in spite of their diagnostic specialization. This study provides support for the construct validity of the GAF when used as a measure of outcome.
      Content Type Journal Article
      Category Original Article
      Pages 1-7

      DOI 10.1027/1015-5759/a000225

      Authors
      Ove Sonesson, Department of Psychology, University of Gothenburg, Sweden
      Hans Arvidsson, Department of Psychology, University of Gothenburg, Sweden
      Tomas Tjus, Department of Psychology, University of Gothenburg, Sweden
      Journal European Journal of Psychological Assessment
      Online ISSN 2151-2426
      Print ISSN 1015-5759
      PubDate: Fri, 17 Oct 2014 17:56:21 GMT
       
  • The Digit Span Backwards Task
    • Abstract: The “digit span backwards” (DSB) is the most commonly used test in clinical neuropsychology to assess working memory capacity. Yet, it remains unclear how the task is solved cognitively. The present study was conducted to examine the use of visual and verbal cognitive strategies in the DSB. Further, the relationship between the DSB and a complex span task, based on the Simultaneous Storage and Processing task (Oberauer et al., 2003), was investigated. Visualizers performed better than verbalizers in the dual task condition (rPB = .23) only when the relevant digits were presented visually. Performance in the DSB correlated only weakly with the complex span task in all conditions (all τ ≤ .21). The results indicate that the processing modality is determined by the preference for a cognitive strategy rather than by the presentation modality, and they suggest that the DSB measures different aspects of working memory than commonly used experimental working memory tasks.
      Content Type Journal Article
      Category Original Article
      Pages 1-7

      DOI 10.1027/1015-5759/a000223

      Authors
      Sven Hilbert, Ludwig-Maximilians-University, Munich, Germany
      Tristan T. Nakagawa, Pompeu Fabra University, Barcelona, Spain
      Patricia Puci, University of Graz, Austria
      Alexandra Zech, Ludwig-Maximilians-University, Munich, Germany
      Markus Bühner, Ludwig-Maximilians-University, Munich, Germany
      Journal European Journal of Psychological Assessment
      Online ISSN 2151-2426
      Print ISSN 1015-5759
      PubDate: Fri, 17 Oct 2014 17:56:20 GMT
       
  • Item Response Model Investigation of the (German) Interpersonal Reactivity Index Empathy Questionnaire
    • Abstract: The Interpersonal Reactivity Index (IRI) is a widely used personality questionnaire for measuring empathy. We investigated the psychometric properties of the German version using the partial credit model. If this model fits the data, the raw scores are fair measures of the latent construct; only in that case are further analyses based on the raw scores accurate and valid. The results showed model fit only for the subscale empathic concern. The subscales perspective taking and fantasy consisted of two theoretically explainable sub-dimensions. For the subscale personal distress, no model fit could be achieved. Our study provides important information on the psychometric qualities of the IRI, which has been repeatedly used to assess, for example, group differences. It demonstrates that these analyses were not warranted by the psychometric quality of the questionnaire. Our results provide direct suggestions (e.g., theoretically explainable sub-dimensions) for further developments of the IRI to overcome this limitation.
      Content Type Journal Article
      Category Original Article
      Pages 1-11

      DOI 10.1027/1015-5759/a000227

      Authors
      Ingrid Koller, Department of Basic Psychological Research and Research Methods, University of Vienna, Austria
      Claus Lamm, Department of Basic Psychological Research and Research Methods, University of Vienna, Austria
      Journal European Journal of Psychological Assessment
      Online ISSN 2151-2426
      Print ISSN 1015-5759
      PubDate: Fri, 17 Oct 2014 17:56:14 GMT
       
  • Factorial Invariance of the DASS-21 Among Adolescents in Four Countries
    • Abstract: The use of scales to measure constructs in populations or contexts other than that in which they were established is highly controversial. Despite this, the use of scales without reference to “local” psychometric data is still widespread. In this study we examined the factor structure of the short 21-item form of the Depression, Anxiety, and Stress Scales (DASS-21), when it was applied to adolescent samples recruited from high schools in Australia (N = 371), Chile (N = 448), China (N = 558), and Malaysia (N = 388). Multigroup confirmatory factor analyses revealed that the purported three-factor structure of the DASS-21 was supported in each location with structural invariance across locations. While convergent and divergent validity studies are required to support this finding, the DASS-21 appears to be suitable for use with adolescents in these locations.
      Content Type Journal Article
      Category Original Article
      Pages 1-5

      DOI 10.1027/1015-5759/a000218

      Authors
      David Mellor, Deakin University, Melbourne, Australia
      Eugenia V. Vinet, Universidad de La Frontera, Temuco, Chile
      Xiaoyan Xu, Sichuan Normal University, Chengdu, PR China
      Norul Hidayah Bt Mamat, UCSI University, Kuala Lumpur, Malaysia
      Ben Richardson, Deakin University, Melbourne, Australia
      Francisca Román, Universidad de La Frontera, Temuco, Chile
      Journal European Journal of Psychological Assessment
      Online ISSN 2151-2426
      Print ISSN 1015-5759
      PubDate: Thu, 24 Jul 2014 13:18:52 GMT
       
  • Measuring Decision-Making Regret Among French Populations
    • Abstract: Most studies examining decision-making processes are conducted in English. As a result, the majority of scales that are used to measure relevant constructs are unavailable in other languages. The Regret Scale (Schwartz, Ward, Monterosso, Lyubomirsky, White, & Lehman, 2002) consists of five items that assess an individual’s tendency to experience regret. The purpose of this study was to translate this scale into French and validate it. Psychometric properties of the newly created Échelle de Regret were verified with a sample of native French-speaking participants. The properties of the translated scale were then compared to those of the original scale derived from a sample of native English-speaking participants. Results of measurement invariance analyses indicate that the measure functions similarly across both linguistic groups. Thus, the Échelle de Regret can be used with confidence to assess regret proneness in French-speaking populations.
      Content Type Journal Article
      Category Original Article
      Pages 1-7

      DOI 10.1027/1015-5759/a000219

      Authors
      Silvia Bonaccio, University of Ottawa, Ontario, Canada
      Annie J. Girard, University of Ottawa, Ontario, Canada
      Journal European Journal of Psychological Assessment
      Online ISSN 2151-2426
      Print ISSN 1015-5759
      PubDate: Thu, 24 Jul 2014 13:18:50 GMT
       
  • Alternative Models of the Outcome Questionnaire-45
    • Abstract: The Outcome Questionnaire-45 (OQ) reliably quantifies level of psychological functioning and change during treatment. The three subscales, however, are not well validated. Could alternative scales, based on personality dimensions or other groupings of psychological problems better explain patterns of response? In Study 1, the intended structure and four alternative models were compared using EFA and CFA in random thirds of a community clinic intake sample (N = 1,822). Oblique and bi-level models were compared. Preferred models were tested for stability in samples from later time points. In Study 2, the models were compared in a nonclinical sample (N = 589). Most bi-level models provided adequate fit per standards previously established for the Outcome Questionnaire-45. A seven-factor model of psychological problems provided better fit than any yet reported for this inventory.
      Content Type Journal Article
      Category Original Article
      Pages 1-11

      DOI 10.1027/1015-5759/a000216

      Authors
      Amber Gayle Thalmayer, Department of Psychology, University of Oregon, Eugene, OR, USA
      Journal European Journal of Psychological Assessment
      Online ISSN 2151-2426
      Print ISSN 1015-5759
      PubDate: Thu, 24 Jul 2014 13:18:49 GMT
       
  • Psychometric Properties of a Revised Version of the Ten Item Personality Inventory
    • Abstract: Gosling, Rentfrow, and Swann (2003) developed the Ten-Item Personality Inventory (TIPI) to meet the need for very short measures of the Big Five in time-limited contexts or large survey questionnaires. In this paper we show the inadequacy of the Italian version downloadable from Gosling’s website and report the results of four studies in which the psychometric properties of a revised version (I-TIPI-R) were investigated in student and general population samples. This new version showed adequate factor structure, test-retest reliability, self-observer agreement, and convergent and discriminant validity with the Big Five Inventory (BFI). Moreover, I-TIPI-R and BFI scores did not differ in their correlations with measures of affect, self-esteem, optimism, emotion regulation, and social desirability. Overall, the results suggest that the I-TIPI-R can be considered a valid and reliable alternative to the BFI for the assessment of basic personality traits when very short measures are needed.
      Content Type Journal Article
      Category Original Article
      Pages 1-11

      DOI 10.1027/1015-5759/a000215

      Authors
      Carlo Chiorri, Department of Educational Sciences, Psychology Unit, University of Genova, Italy
      Fabrizio Bracco, Department of Educational Sciences, Psychology Unit, University of Genova, Italy
      Tommaso Piccinno, Department of Educational Sciences, Psychology Unit, University of Genova, Italy
      Cinzia Modafferi, Department of Educational Sciences, Psychology Unit, University of Genova, Italy
      Valeria Battini, Department of Educational Sciences, Psychology Unit, University of Genova, Italy
      Journal European Journal of Psychological Assessment
      Online ISSN 2151-2426
      Print ISSN 1015-5759
      PubDate: Thu, 24 Jul 2014 13:18:46 GMT
       
 
 
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
Fax: +00 44 (0)131 4513327
 

JournalTOCs © 2009-2014