  Subjects -> EDUCATION (Total: 1675 journals)
    - ADULT EDUCATION (24 journals)
    - COLLEGE AND ALUMNI (9 journals)
    - E-LEARNING (21 journals)
    - EDUCATION (1395 journals)
    - HIGHER EDUCATION (113 journals)
    - INTERNATIONAL EDUCATION PROGRAMS (3 journals)
    - ONLINE EDUCATION (27 journals)
    - SCHOOL ORGANIZATION (12 journals)
    - SPECIAL EDUCATION AND REHABILITATION (34 journals)
    - TEACHING METHODS AND CURRICULUM (37 journals)

EDUCATION (1395 journals)

Showing 1 - 200 of 857 Journals sorted alphabetically
@tic. revista d'innovació educativa     Open Access  
Abant İzzet Baysal Üniversitesi Eğitim Fakültesi Dergisi     Open Access  
About Campus     Hybrid Journal   (Followers: 6)
Academic Medicine     Full-text available via subscription   (Followers: 58)
Academic Psychiatry     Full-text available via subscription   (Followers: 22)
Academic Questions     Hybrid Journal   (Followers: 7)
Academy of Educational Leadership Journal     Full-text available via subscription   (Followers: 52)
Academy of Management Learning and Education     Full-text available via subscription   (Followers: 47)
Accounting & Finance     Hybrid Journal   (Followers: 41)
Accounting Education: An International Journal     Hybrid Journal   (Followers: 13)
ACM Transactions on Computing Education (TOCE)     Hybrid Journal   (Followers: 3)
Across the Disciplines     Open Access   (Followers: 7)
Acta Didactica Norge     Open Access  
Acta Scientiarum. Education     Open Access  
Acta Technologica Dubnicae     Open Access  
Action in Teacher Education     Hybrid Journal   (Followers: 54)
Action Learning: Research and Practice     Hybrid Journal   (Followers: 38)
Action Research     Hybrid Journal   (Followers: 38)
Active Learning in Higher Education     Hybrid Journal   (Followers: 229)
Actualidades Pedagógicas     Open Access  
Administration & Society     Hybrid Journal   (Followers: 11)
Administrative Science Quarterly     Full-text available via subscription   (Followers: 135)
Adult Education Quarterly     Hybrid Journal   (Followers: 134)
Advanced Education     Open Access   (Followers: 4)
Advances in Health Sciences Education     Hybrid Journal   (Followers: 23)
Advances in High Energy Physics     Open Access   (Followers: 21)
Advances in School Mental Health Promotion     Partially Free   (Followers: 9)
AERA Open     Open Access   (Followers: 2)
Africa Education Review     Partially Free   (Followers: 24)
African Journal of Chemical Education     Open Access   (Followers: 2)
African Journal of Educational Studies in Mathematics and Sciences     Full-text available via subscription   (Followers: 5)
African Journal of Health Professions Education     Open Access   (Followers: 4)
African Journal of Research in Mathematics, Science and Technology Education     Full-text available via subscription   (Followers: 8)
Agora     Full-text available via subscription   (Followers: 3)
AGORA Magazine     Open Access  
Ahmad Dahlan Journal of English Studies     Open Access  
AIDS Education and Prevention     Full-text available via subscription   (Followers: 7)
Akadémiai Értesítő     Full-text available via subscription  
AKSIOMA Journal of Mathematics Education     Open Access   (Followers: 1)
Al Ibtida : Jurnal Pendidikan Guru MI     Open Access  
Alexandria : Revista de Educação em Ciência e Tecnologia     Open Access  
Alsic     Open Access   (Followers: 18)
Alteridad     Open Access  
Amasya Universitesi Egitim Fakültesi Dergisi     Open Access  
American Annals of the Deaf     Full-text available via subscription   (Followers: 11)
American Biology Teacher     Full-text available via subscription   (Followers: 12)
American Educational Research Journal     Hybrid Journal   (Followers: 131)
American Journal of Business Education     Open Access   (Followers: 10)
American Journal of Distance Education     Hybrid Journal   (Followers: 28)
American Journal of Education     Full-text available via subscription   (Followers: 153)
American Journal of Educational Research     Open Access   (Followers: 53)
American Journal of Health Education     Hybrid Journal   (Followers: 25)
American Journal of Physics     Full-text available via subscription   (Followers: 55)
ANALES de la Universidad Central del Ecuador     Open Access   (Followers: 1)
Annali dell'Universita di Ferrara     Hybrid Journal  
Annals of Dyslexia     Hybrid Journal   (Followers: 9)
Annals of Modern Education     Full-text available via subscription   (Followers: 3)
Annual Review of Economics     Full-text available via subscription   (Followers: 29)
Apertura. Revista de innovación educativa     Open Access   (Followers: 2)
Applied Environmental Education & Communication     Hybrid Journal   (Followers: 13)
Applied Measurement in Education     Hybrid Journal   (Followers: 9)
Art Design & Communication in Higher Education     Hybrid Journal   (Followers: 20)
Arts and Humanities in Higher Education     Hybrid Journal   (Followers: 29)
Arts Education Policy Review     Hybrid Journal   (Followers: 4)
ASHE Higher Education Reports     Hybrid Journal   (Followers: 14)
Asia Pacific Education Review     Hybrid Journal   (Followers: 9)
Asia Pacific Journal of Education     Hybrid Journal   (Followers: 17)
Asia-Pacific Education Researcher     Hybrid Journal   (Followers: 11)
Asia-Pacific Journal of Health, Sport and Physical Education     Hybrid Journal   (Followers: 9)
Asia-Pacific Journal of Teacher Education     Hybrid Journal   (Followers: 18)
Asian Association of Open Universities Journal     Open Access  
Asian Education and Development Studies     Hybrid Journal   (Followers: 5)
Asian Journal of English Language Teaching     Full-text available via subscription   (Followers: 11)
Asian Journal of Legal Education     Full-text available via subscription   (Followers: 6)
ASp     Open Access   (Followers: 1)
Assessing Writing     Hybrid Journal   (Followers: 10)
Assessment & Evaluation in Higher Education     Hybrid Journal   (Followers: 117)
Assessment for Effective Intervention     Hybrid Journal   (Followers: 14)
Assessment in Education: Principles, Policy & Practice     Hybrid Journal   (Followers: 35)
Assessment Update     Hybrid Journal   (Followers: 5)
AStA Wirtschafts- und Sozialstatistisches Archiv     Hybrid Journal   (Followers: 5)
At-Ta'dib Jurnal Kependidikan Islam     Open Access  
At-Tajdid : Jurnal Ilmu Tarbiyah     Open Access   (Followers: 2)
At-Turats     Open Access  
Athenea Digital     Open Access  
Aula Abierta     Open Access   (Followers: 1)
Australasian Journal of Educational Technology     Open Access   (Followers: 11)
Australasian Journal of Gifted Education     Full-text available via subscription   (Followers: 4)
Australasian Marketing Journal (AMJ)     Hybrid Journal   (Followers: 8)
Australian Art Education     Full-text available via subscription   (Followers: 6)
Australian Educational and Developmental Psychologist, The     Full-text available via subscription   (Followers: 6)
Australian Educational Computing     Open Access  
Australian Educational Researcher     Hybrid Journal   (Followers: 18)
Australian Journal of Adult Learning     Full-text available via subscription   (Followers: 12)
Australian Journal of Career Development     Hybrid Journal   (Followers: 2)
Australian Journal of Dyslexia and Learning Difficulties     Full-text available via subscription   (Followers: 8)
Australian Journal of Education     Hybrid Journal   (Followers: 28)
Australian Journal of Learning Difficulties     Hybrid Journal   (Followers: 4)
Australian Journal of Music Education     Full-text available via subscription   (Followers: 3)
Australian Journal of Public Administration     Hybrid Journal   (Followers: 375)
Australian Journal of Teacher Education     Open Access   (Followers: 21)
Australian Mathematics Teacher, The     Full-text available via subscription   (Followers: 7)
Australian Primary Mathematics Classroom     Full-text available via subscription   (Followers: 2)
Australian Screen Education Online     Full-text available via subscription   (Followers: 2)
Australian TAFE Teacher     Full-text available via subscription   (Followers: 2)
Australian Universities' Review, The     Full-text available via subscription   (Followers: 3)
Autism     Hybrid Journal   (Followers: 176)
Avaliação : Revista da Avaliação da Educação Superior (Campinas)     Open Access  
Azalea: Journal of Korean Literature & Culture     Full-text available via subscription   (Followers: 4)
Balkan Region Conference on Engineering and Business Education     Open Access   (Followers: 1)
BELIA : Early Childhood Education Papers     Open Access   (Followers: 4)
BELT - Brazilian English Language Teaching Journal     Open Access   (Followers: 5)
Berkeley Review of Education     Open Access   (Followers: 4)
Biblioteka i Edukacja     Open Access   (Followers: 4)
Bildung und Erziehung     Hybrid Journal   (Followers: 2)
Biosaintifika : Journal of Biology & Biology Education     Open Access   (Followers: 6)
BMC Medical Education     Open Access   (Followers: 40)
BMJ Simulation & Technology Enhanced Learning     Full-text available via subscription   (Followers: 7)
BoEM - Boletim online de Educação Matemática     Open Access  
Boletim Cearense de Educação e História da Matemática     Open Access  
Boletim de Educação Matemática     Open Access  
British Educational Research Journal     Hybrid Journal   (Followers: 155)
British Journal of Educational Studies     Hybrid Journal   (Followers: 129)
British Journal of Educational Technology     Hybrid Journal   (Followers: 120)
British Journal of Religious Education     Hybrid Journal   (Followers: 8)
British Journal of Sociology of Education     Hybrid Journal   (Followers: 45)
British Journal of Special Education     Hybrid Journal   (Followers: 36)
British Journal of Visual Impairment     Hybrid Journal   (Followers: 10)
Brookings Trade Forum     Full-text available via subscription   (Followers: 3)
Business, Management and Education     Open Access   (Followers: 17)
Caderno Brasileiro de Ensino de Física     Open Access  
Caderno Intersabares     Open Access  
Cadernos CEDES     Open Access   (Followers: 1)
Cadernos de Educação, Tecnologia e Sociedade     Open Access  
Cadernos de Pesquisa     Open Access  
Cadernos de Pesquisa     Open Access   (Followers: 2)
Cadernos de Pesquisa em Educação     Open Access  
Cadmo     Full-text available via subscription  
Cahiers de la recherche sur l'éducation et les savoirs     Open Access   (Followers: 4)
Calidad en la educación     Open Access   (Followers: 1)
Cambridge Journal of Education     Hybrid Journal   (Followers: 97)
Campus Legal Advisor     Hybrid Journal   (Followers: 2)
Campus Security Report     Hybrid Journal  
Canadian and International Education     Open Access   (Followers: 8)
Canadian Journal for New Scholars in Education/ Revue canadienne des jeunes chercheures et chercheurs en éducation     Open Access   (Followers: 7)
Canadian Journal for the Scholarship of Teaching and Learning     Open Access   (Followers: 13)
Canadian Journal of Education : Revue canadienne de l'éducation     Open Access   (Followers: 5)
Canadian Journal of Higher Education     Open Access   (Followers: 22)
Canadian Journal of Learning and Technology / La revue canadienne de l’apprentissage et de la technologie     Open Access   (Followers: 12)
Canadian Journal of School Psychology     Hybrid Journal   (Followers: 9)
Canadian Journal of Science, Mathematics and Technology Education     Hybrid Journal   (Followers: 18)
Career Development and Transition for Exceptional Individuals     Hybrid Journal   (Followers: 7)
Catalejos. Revista sobre lectura, formación de lectores y literatura para niños     Open Access  
Catharsis : Journal of Arts Education     Open Access  
CELE Exchange, Centre for Effective Learning Environments     Full-text available via subscription   (Followers: 1)
Cendekia : Jurnal Kependidikan dan Kemasyarakatan     Open Access  
Change: The Magazine of Higher Learning     Hybrid Journal   (Followers: 14)
Changing English: Studies in Culture and Education     Hybrid Journal   (Followers: 6)
Charrette     Open Access  
Chemical Engineering Education     Full-text available via subscription  
Chemistry Education Research and Practice     Free   (Followers: 5)
Chemistry in Education     Open Access   (Followers: 8)
Chi'e : Journal of Japanese Learning and Teaching     Open Access   (Followers: 1)
Child Language Teaching and Therapy     Hybrid Journal   (Followers: 25)
Child Psychiatry & Human Development     Hybrid Journal   (Followers: 9)
Childhood Education     Hybrid Journal   (Followers: 15)
Children's Literature in Education     Hybrid Journal   (Followers: 8)
Chinese Education & Society     Full-text available via subscription   (Followers: 2)
Christian Higher Education     Hybrid Journal   (Followers: 2)
Christian Perspectives in Education     Open Access   (Followers: 6)
Ciência & Educação (Bauru)     Open Access  
Ciência & Saúde Coletiva     Open Access   (Followers: 2)
Ciencia en Desarrollo     Open Access  
Ciencias Sociales y Educación     Open Access   (Followers: 2)
Citizenship, Social and Economics Education     Full-text available via subscription   (Followers: 5)
Classroom Discourse     Hybrid Journal   (Followers: 8)
Clinical Child and Family Psychology Review     Hybrid Journal   (Followers: 7)
Clio y Asociados     Open Access  
CME     Hybrid Journal   (Followers: 1)
Coaching: An International Journal of Theory, Research and Practice     Hybrid Journal   (Followers: 9)
Cogent Education     Open Access   (Followers: 1)
College Athletics and The Law     Hybrid Journal   (Followers: 1)
College Teaching     Hybrid Journal   (Followers: 12)
Colóquio Internacional de Educação e Seminário de Estratégias e Ações Multidisciplinares     Open Access  
Communication Disorders Quarterly     Hybrid Journal   (Followers: 14)
Communication Education     Hybrid Journal   (Followers: 19)
Communication Methods and Measures     Hybrid Journal   (Followers: 11)
Community College Journal of Research and Practice     Hybrid Journal   (Followers: 8)
Community College Review     Hybrid Journal   (Followers: 7)
Community Development     Hybrid Journal   (Followers: 16)
Community Literacy Journal     Partially Free   (Followers: 2)
Comparative Education     Hybrid Journal   (Followers: 27)
Comparative Education Review     Full-text available via subscription   (Followers: 32)
Comparative Professional Pedagogy     Open Access   (Followers: 2)
Compare: A journal of comparative education     Hybrid Journal   (Followers: 19)
Computer Applications in Engineering Education     Hybrid Journal   (Followers: 6)
Computer Science Education     Hybrid Journal   (Followers: 12)
Computers & Education     Hybrid Journal   (Followers: 123)
Computers in the Schools     Hybrid Journal   (Followers: 7)
Conhecimento & Diversidade     Open Access  


Assessing Writing
  [SJR: 0.962]   [H-I: 20]   [10 followers]
   Hybrid Journal (can contain Open Access articles)
   ISSN (Print) 1075-2935
   Published by Elsevier  [3030 journals]
  • The TOEFL iBT writing: Korean students’ perceptions of the TOEFL iBT
           writing test
    • Authors: Eun-Young Julia Kim
      Pages: 1 - 11
      Abstract: Publication date: July 2017
      Source:Assessing Writing, Volume 33
      Author(s): Eun-Young Julia Kim
      The TOEFL is one of the most widely recognized language proficiency tests developed to measure international students’ level of readiness for degree study. Whereas there exist a number of correlational studies conducted by various affiliates of ETS based on large-scale quantitative data, there is a dearth of studies that explore test-takers’ perceptions and experiences concerning the TOEFL iBT. Writing skills are of paramount importance for academic success, and high-stakes tests such as the TOEFL tend to influence test-takers’ perceptions of what defines good academic writing. To date, no research has specifically focused on test-takers’ perceptions of the writing section of the TOEFL iBT. To fill this gap, this study explores Korean students’ perceptions of effective strategies for preparing for the TOEFL iBT writing test, the challenges they face in the test-taking and test-preparation processes, and the implications such findings have for various stakeholders, by analyzing online forum data. Findings indicate that scores on the writing section of the TOEFL iBT, albeit helpful as an initial benchmarking tool, may conceal more than they reveal about Korean students’ academic writing ability. The study suggests that the format, questions, and scoring of the TOEFL iBT writing test be critically examined from test-takers’ perspectives.

      PubDate: 2017-03-05T12:58:57Z
      DOI: 10.1016/j.asw.2017.02.001
      Issue No: Vol. 33 (2017)
       
  • Assessing peer and instructor response to writing: A corpus analysis from
           an expert survey
    • Authors: Ian G. Anson; Chris M. Anson
      Pages: 12 - 24
      Abstract: Publication date: July 2017
      Source:Assessing Writing, Volume 33
      Author(s): Ian G. Anson, Chris M. Anson
      Over the past 30 years, considerable scholarship has critically examined the nature of instructor response on written assignments in the context of higher education (see Straub, 2006). However, as Haswell (2008) has noted, less is currently known about the nature of peer response, especially as it compares with instructor response. In this study, we critically examine some of the properties of instructor and peer response to student writing. Using the results of an expert survey that provided a lexically-based index of high-quality response, we evaluate a corpus of nearly 50,000 peer responses produced at a four-year public university. Combined with the results of this survey, a large-scale automated content analysis shows first that instructors have adopted some of the field's lexical estimation of high-quality response, and second that student peer response reflects the early acquisition of this lexical estimation, although at further remove from their instructors. The results suggest promising directions for the parallel improvement of both instructor and peer response.

      PubDate: 2017-03-18T08:19:07Z
      DOI: 10.1016/j.asw.2017.03.001
      Issue No: Vol. 33 (2017)
       
  • Understanding university students’ peer feedback practices in EFL
           writing: Insights from a case study
    • Authors: Shulin Yu; Guangwei Hu
      Pages: 25 - 35
      Abstract: Publication date: July 2017
      Source:Assessing Writing, Volume 33
      Author(s): Shulin Yu, Guangwei Hu
      While research on peer feedback in the L2 writing classroom has proliferated over the past three decades, only limited attention has been paid to how students respond to their peers’ writing in specific contexts and why they respond in the ways they do. As a result, much remains to be known about how individual differences and contextual influences shape L2 students’ peer feedback practices. To bridge the research gap, this case study examines two Chinese EFL university students’ peer feedback practices and the factors influencing their feedback practices. Analyses of multiple sources of data including interviews, video recordings of peer feedback sessions, stimulated recalls, and texts reveal that the students took markedly different approaches when responding to their peers’ writing. The findings also indicate that their peer feedback practices were situated in their own distinct sociocultural context and mediated by a myriad of factors including beliefs and values, motives and goals, secondary school learning and feedback experience, teacher feedback practices, feedback training, feedback group dynamics, as well as learning and assessment culture.

      PubDate: 2017-04-08T21:03:50Z
      DOI: 10.1016/j.asw.2017.03.004
      Issue No: Vol. 33 (2017)
       
  • To make a long story short: A rubric for assessing graduate students’
           academic and popular science writing skills
    • Authors: Tzipora Rakedzon; Ayelet Baram-Tsabari
      Pages: 28 - 42
      Abstract: Publication date: April 2017
      Source:Assessing Writing, Volume 32
      Author(s): Tzipora Rakedzon, Ayelet Baram-Tsabari
      Graduate students are future scientists, and as such, being able to communicate science is imperative for their integration into the scientific community. This is primarily achieved through scientific papers, mostly published in English; however, interactions outside of academia are also beneficial for future scientists. Therefore, academic writing courses are prevalent and popular science communication courses are on the rise. Nevertheless, no rubrics exist for assessing students' writing in academic and science communication courses. This article describes the development and testing of a rubric for assessing advanced L2 STEM graduate students’ writing in academic (abstract) and popular science (press release) genres. The rubric was developed as part of a longstanding academic writing course, but was modified to include a module on science communication with the lay public. Analysis of student needs and the literature inspired a pre-pilot that assessed 16 descriptors on 60 student works. A subsequent, adjusted pilot version on 30 students resulted in adaptations to fit each genre and course goals. In the third round, a modified final rubric was created and tested on 177 graduate students; it can be used both for assessment and for comparison across the genres. This rubric can assess scientific genres at the graduate level and can be adapted for other genres and levels.

      PubDate: 2017-01-07T04:12:57Z
      DOI: 10.1016/j.asw.2016.12.004
      Issue No: Vol. 32 (2017)
       
  • Checking assumed proficiency: Comparing L1 and L2 performance on a
           university entrance test
    • Authors: Bart Deygers; Kris Van den Branden; Elke Peters
      Pages: 43 - 56
      Abstract: Publication date: April 2017
      Source:Assessing Writing, Volume 32
      Author(s): Bart Deygers, Kris Van den Branden, Elke Peters
      This study compares the results of three groups of participants on the writing component of a centralised L2 university entrance test at the B2 level in Flanders, Belgium. The study investigates whether all Flemish candidates have a B2-level in Dutch upon university entrance, and whether L1 test takers outperform L2 candidates who learned Dutch at home or in Flanders. The results show that, even though the Flemish group outperformed both groups of L2 candidates, not all Flemish candidates reached the B2 level. Additionally, the study compares the results of two groups of L2 users on the same test and shows that candidates who studied Dutch in a Dutch-speaking context do not necessarily outscore candidates who did not. The primary methods of analysis include non-parametric regression and Multi-Faceted Rasch. The results are interpreted in terms of Hulstijn’s conceptualisation of Higher Language Competence, and the study abroad literature. Implications for the university entrance policy are discussed at the end of the paper.

      PubDate: 2017-01-07T04:12:57Z
      DOI: 10.1016/j.asw.2016.12.005
      Issue No: Vol. 32 (2017)
       
  • The effectiveness of instructor feedback for learning-oriented language
           assessment: Using an integrated reading-to-write task for English for
           academic purposes
    • Authors: Ah-Young (Alicia) Kim; Hyun Jung Kim
      Pages: 57 - 71
      Abstract: Publication date: April 2017
      Source:Assessing Writing, Volume 32
      Author(s): Ah-Young (Alicia) Kim, Hyun Jung Kim
      Learning-oriented language assessment (LOLA) can be effective in promoting learning through assessment by creating a link between the two. Although previous studies have examined the effectiveness of feedback – a major element of LOLA – in L2 writing, few have examined how LOLA could be implemented using an integrated reading-to-write task in English for academic purposes (EAP) contexts, which was the objective of this study. Participants were ten Korean TESOL graduate students taking a research methods course and their professor. During a seven-week period, each student completed a weekly integrated reading-to-write task as part of their classroom assessment – they read an academic research paper on a topic of their choice and wrote a review on it. After receiving feedback from the instructor, students revised their work and resubmitted it the following week. Students and the instructor also participated in a semi-structured interview to discuss the effectiveness of learning-oriented feedback on academic reading-to-write tasks. Learners displayed varying developmental patterns, with some students showing more improvement than others. The findings highlighted two participants’ progress in the content domain. Qualitative analysis results suggest that the students reacted differently to the instructor feedback, leading to varying degrees of writing enhancement. The results provide pedagogical implications for using integrated academic reading-to-write tasks and sustained feedback for LOLA.

      PubDate: 2017-01-22T04:23:23Z
      DOI: 10.1016/j.asw.2016.12.001
      Issue No: Vol. 32 (2017)
       
  • Textual voice elements and voice strength in EFL argumentative writing
    • Authors: Hyung-Jo Yoon
      Pages: 72 - 84
      Abstract: Publication date: April 2017
      Source:Assessing Writing, Volume 32
      Author(s): Hyung-Jo Yoon
      This study examined how the quantity and diversity of textual voice elements contribute to holistic voice strength and essay quality. For the quantification of voice elements, this study used an automated processing tool, the Authorial Voice Analyzer (AVA), which was developed based on categories from Hyland’s voice model (i.e., hedges, boosters, attitude markers, self-mentions, reader pronouns, and directives). To explore the relationship between textual voice elements and holistic voice strength, as well as between voice elements and essay quality, this study analyzed 219 argumentative essays written by L1 Greek-speaking EFL students. The results suggested positive, but weak to moderate, correlations between textual voice and holistic voice strength; a regression model with three textual voice features explained 26% of the variance in voice strength scores. The results also indicated weak correlations between textual voice and essay quality. Interestingly, the textual voice features contributing to voice strength (boosters, attitude markers, and self-mentions) were different from those contributing to essay quality (hedges). Interpreting these findings in relation to the context (timed argumentative writing in an EFL context), this study suggests implications for L2 writing assessment and pedagogy.

      PubDate: 2017-03-05T12:58:57Z
      DOI: 10.1016/j.asw.2017.02.002
      Issue No: Vol. 32 (2017)
       
  • Writing Assessment and the Revolution in Digital Texts and Technologies,
           M. Neal, Teachers College Press, New York (2010). 152 pp., ISBN
           978-0-8077-5140-4
    • Authors: Les Perelman
      Pages: 126 - 129
      Abstract: Publication date: January 2017
      Source:Assessing Writing, Volume 31
      Author(s): Les Perelman


      PubDate: 2017-01-07T04:12:57Z
      DOI: 10.1016/j.asw.2016.12.003
      Issue No: Vol. 31 (2017)
       
  • Evaluating rater accuracy and perception for integrated writing
           assessments using a mixed-methods approach
    • Authors: Jue Wang; George Engelhard; Kevin Raczynski; Tian Song; Edward W. Wolfe
      Abstract: Publication date: July 2017
      Source:Assessing Writing, Volume 33
      Author(s): Jue Wang, George Engelhard, Kevin Raczynski, Tian Song, Edward W. Wolfe
      Integrated writing (IW) assessments underscore the connections between reading comprehension and writing skills. These assessments typically include rater-mediated components. Our study identified IW-type essays that are difficult to score accurately, and then investigated the reasons based on rater perceptions and judgments. Our data come from IW assessments used as formative assessments designed to provide information on the developing literacy of students. We used a mixed-methods approach, with rater accuracy defined quantitatively based on Rasch measurement theory, and a survey-based qualitative method designed to investigate rater perceptions and judgments toward student essays within the context of IW assessments. The quantitative analyses suggest that the essays and raters vary along a continuum designed to represent rating accuracy. The qualitative analyses suggest that raters had inconsistent perceptions, compared to the experts, toward certain features of essays, such as the amount of textual borrowing, the development of ideas, and the consistency of the focus. The implications of this study for research and practice of IW assessments are discussed.

      PubDate: 2017-04-15T21:25:34Z
       
  • Ed.Board/Aims and scope
    • Abstract: Publication date: January 2017
      Source:Assessing Writing, Volume 31


      PubDate: 2017-01-07T04:12:57Z
       
  • Thank you to reviewers, 2016
    • Abstract: Publication date: January 2017
      Source:Assessing Writing, Volume 31


      PubDate: 2017-01-07T04:12:57Z
       
  • Placement of multilingual writers: Is there a role for student voices?
    • Authors: Dana R. Ferris; Katherine Evans; Kendon Kurzer
      Pages: 1 - 11
      Abstract: Publication date: April 2017
      Source:Assessing Writing, Volume 32
      Author(s): Dana R. Ferris, Katherine Evans, Kendon Kurzer
      Directed Self-Placement (DSP) is one placement model that has been implemented in various composition programs in the U.S. but has yet to be investigated thoroughly in second language writing settings. Central to DSP is the belief that, if students are given agency to help determine their educational trajectory, they will be empowered and more motivated to succeed (Crusan, 2011; Royer & Gilles, 1998). In this study, 1067 university L2 students completed both a voluntary self-assessment survey and the locally administered placement examination. We statistically compared the students’ placement exam scores and their responses to the final question as to which level of a four-course writing program they thought would best meet their needs. We also examined a stratified random sample of 100 students’ standardized test scores to see if there was a statistical relationship between those tests, our locally designed and administered placement test, and students’ own self-placement scores. We conclude that student self-assessment might have a legitimate role in our placement process, but it probably cannot be used by itself to accurately place large numbers of multilingual students into a four-level sequence.

      PubDate: 2016-11-14T00:08:36Z
      DOI: 10.1016/j.asw.2016.10.001
      Issue No: Vol. 32 (2016)
       
  • Improvement of writing skills during college: A multi-year cross-sectional
           and longitudinal study of undergraduate writing performance
    • Authors: Daniel Oppenheimer; Franklin Zaromb; James R. Pomerantz; Jean C. Williams; Yoon Soo Park
      Pages: 12 - 27
      Abstract: Publication date: April 2017
      Source:Assessing Writing, Volume 32
      Author(s): Daniel Oppenheimer, Franklin Zaromb, James R. Pomerantz, Jean C. Williams, Yoon Soo Park
      We examined persuasive and expository writing samples collected from more than 300 college students as part of a nine-year cross-sectional and longitudinal study of undergraduate writing performance, conducted between 2000 and 2008. Using newly developed scoring rubrics, longitudinal analyses of writing scores revealed statistically significant growth in writing performance over time. These findings held for both persuasive and expository writing. Although writing performance was better among women than men, and better among students majoring in the humanities and social sciences than in natural sciences and engineering, neither women nor humanities and social science majors showed differential improvement over time from freshman to senior year. Our findings showed reliable increases in writing performance during a student’s college years, and moreover demonstrated that such longitudinal changes can be effectively measured. We call for more such outcome assessment in higher education as an essential tool to enhance student learning.

      PubDate: 2016-12-05T02:20:19Z
      DOI: 10.1016/j.asw.2016.11.001
      Issue No: Vol. 32 (2016)
       
  • Innovation in rubric use: Exploring different dimensions
    • Authors: Martin East; Sara Cushing
      Pages: 1 - 2
      Abstract: Publication date: October 2016
      Source:Assessing Writing, Volume 30
      Author(s): Martin East, Sara Cushing


      PubDate: 2016-09-21T12:04:53Z
      DOI: 10.1016/j.asw.2016.09.001
      Issue No: Vol. 30 (2016)
       
  • Farewell to holistic scoring. Part Two: Why build a house with only one
           brick?
    • Authors: Liz Hamp-Lyons
      Abstract: Publication date: July 2016
      Source:Assessing Writing, Volume 29
      Author(s): Liz Hamp-Lyons


      PubDate: 2016-07-23T12:14:37Z
      DOI: 10.1016/j.asw.2016.06.006
      Issue No: Vol. 29 (2016)
       
  • Searching for differences and discovering similarities: Why international
           and resident second-language learners’ grammatical errors cannot serve
           as a proxy for placement into writing courses
    • Authors: Kristen di Gennaro
      Pages: 1 - 14
      Abstract: Publication date: July 2016
      Source:Assessing Writing, Volume 29
      Author(s): Kristen di Gennaro
      Recent research has drawn attention to differences in the writing produced by international second-language writers and U.S. resident second-language writers, with implications for placement into college writing courses. Initially designed to complement the literature through the discovery of different types of grammatical errors in the writing produced by these two groups of learners, the current study instead challenges previous research by noting how similar the two groups’ grammatical errors are when examined in detail. Findings suggest that when groups are controlled for writing proficiency and first language, noticeable differences across the groups diminish. The study's findings call into question placement decisions for resident second-language writers, as well as the value of relying on differences in grammatical errors to distinguish the two groups. Findings can assist writing program administrators, placement test administrators, and writing instructors who need to accommodate both international and resident second-language learners.

      PubDate: 2016-06-16T18:05:25Z
      DOI: 10.1016/j.asw.2016.05.001
      Issue No: Vol. 29 (2016)
       
  • Student-generated scoring rubrics: Examining their formative value for
           improving ESL students’ writing performance
    • Authors: Anthony Becker
      Pages: 15 - 24
      Abstract: Publication date: July 2016
      Source:Assessing Writing, Volume 29
      Author(s): Anthony Becker
      Rubrics are valued for their potential to clarify teachers’ expectations, identify strengths and weaknesses, and direct students toward self-evaluation (Panadero & Jonsson, 2013). While many instructors use rubrics to assess second language writing, students are rarely involved in their development and application. This can leave students without knowledge of how rubrics are created and/or used, inhibiting their motivation and writing performance (Skillings & Ferrell, 2000). This classroom-based study investigated the effect of developing and/or applying a rubric on the writing performance of adult English as a second language learners studying at an intensive English program in the US. Using a pretest-posttest, control group design, four groups completed two summary writing tasks. One class created a rubric, another class practiced scoring with the rubric, while the third class only saw the rubric, and the fourth class served as a control group. Scores were then compared for both summary writing tasks. It was found that the holistic scores on the post-test summary writing task were significantly higher for those students who participated in the development and/or application of the scoring rubric. The findings can raise awareness for including students in the assessment process, resulting in improved writing performance.

      PubDate: 2016-06-16T18:05:25Z
      DOI: 10.1016/j.asw.2016.05.002
      Issue No: Vol. 29 (2016)
       
  • Awaiting a new wave: The status of state writing assessment in the United
           States
    • Authors: Nadia Behizadeh; Myoung Eun Pang
      Pages: 25 - 41
      Abstract: Publication date: July 2016
      Source:Assessing Writing, Volume 29
      Author(s): Nadia Behizadeh, Myoung Eun Pang
      Large-scale state assessment is in a time of flux in the United States. The Common Core State Standards have been widely adopted, resulting in most states developing or adopting new writing assessments. This article presents results from document analysis of websites across all 50 states conducted in 2015 to determine writing assessment formats and scoring practices. Drawing on the dichotomy of psychometric and sociocultural assessment approaches, three major classifications for writing assessments are used to categorize assessments: indirect psychometric, direct psychometric, and direct sociocultural, with these aligning with multiple choice tests, traditional “direct writing assessment” or on-demand essay assessment, and portfolio assessment, respectively. Findings indicated that 46 out of 50 states (92%) were primarily using on-demand essay assessment, often in conjunction with multiple choice and short answer items, and no state was utilizing portfolios for writing assessment. Regarding scoring, 98% of state writing assessment was scored externally with no involvement of the classroom teacher. Overall, there was no evidence that forms of direct sociocultural assessment were occurring at the state level. The current study offers a snapshot of recent state writing assessment in order to inform the next wave of writing assessment in the United States.

      PubDate: 2016-06-21T22:25:34Z
      DOI: 10.1016/j.asw.2016.05.003
      Issue No: Vol. 29 (2016)
       
  • Developing an individual and collective self-efficacy scale for the
           teaching of writing in high schools
    • Authors: Terry Locke; Michael Johnston
      Pages: 1 - 14
      Abstract: Publication date: April 2016
      Source:Assessing Writing, Volume 28
      Author(s): Terry Locke, Michael Johnston
      The study reported here focuses on self-efficacy in relation to high-school teachers’ teaching of writing. One hundred and forty New Zealand teachers from four schools completed a teacher-of-writing self-efficacy scale (TWSES) based on a rhetorical model of the writing process and incorporating five hypothesized dimensions. An initial principal components analysis was undertaken on 25 individual self-efficacy items to investigate the dimensionality of the data and the extent to which it reflected the dimensions hypothesized. A two-component solution emerged, termed “pre-writing instructional strategies” (accounting for 52% of total variance) and “compositional strategy demonstration” (7% of variance). Further principal components analyses, conducted on groups of thematically coherent items that loaded on each component, confirmed that the data set for each group, treated separately from any other items, was approximately uni-dimensional. Measurement scales were calibrated to each group of items, and served as the dependent variables for comparisons of teachers’ self-efficacy in different subjects. Statistically significant variations occurred in the resultant scale locations for teachers of English, the humanities, science and mathematics. The study findings have implications for the teaching of writing as conceptualized in the secondary school, and indicate a value in viewing disciplinary literacies in rhetorical terms.

      PubDate: 2016-01-31T12:10:38Z
      DOI: 10.1016/j.asw.2016.01.001
      Issue No: Vol. 28 (2016)
       
  • Comparing writing performance in TOEFL-iBT and academic assignments: An
           exploration of textual features
    • Authors: A. Mehdi Riazi
      Pages: 15 - 27
      Abstract: Publication date: April 2016
      Source:Assessing Writing, Volume 28
      Author(s): A. Mehdi Riazi
      This paper reports an exploratory study in which the written texts produced by postgraduate students in test and real-life academic situations are compared in terms of their linguistic and discoursal features. Data were collected from 20 international English as a second language (ESL) postgraduate students from different first language backgrounds and three general disciplines of science and engineering, arts and humanities, and business and economics. The participants were studying in postgraduate programs in five universities in New South Wales, Australia. These participants completed two writing test tasks of the TOEFL-iBT (integrated and independent tasks) and an academic assignment for one of the university courses they were enrolled in. Textual features of the test and academic assignment texts were compared on 20 linguistic and discoursal features. These textual features are related to syntactic complexity (five variables), lexical sophistication (nine variables) and cohesion (six variables). Results of a series of repeated measures Analysis of Covariance (ANCOVA) indicated similarities and differences in the linguistic and discoursal features of the three writing task texts. Findings are reported and discussed, and implications are drawn for the extrapolation inference claim in the validity argument of the Writing section of the TOEFL-iBT.

      PubDate: 2016-02-25T22:08:43Z
      DOI: 10.1016/j.asw.2016.02.001
      Issue No: Vol. 28 (2016)
       
  • Describing and interpreting graphs: The relationships between
           undergraduate writer characteristics and academic graph writing
           performance
    • Authors: Hui-Chun Yang
      Pages: 28 - 42
      Abstract: Publication date: April 2016
      Source:Assessing Writing, Volume 28
      Author(s): Hui-Chun Yang
      Although graph-based writing is common in tests of academic English due to its correspondence with real-world academic writing, a concern has been raised regarding the role of graphic prompts in writing and the proper interpretation of performances on such tests. This study investigates the relationships between writer characteristics (graph familiarity, English writing ability, and content knowledge) and performance on a graph-writing test with two tasks: a GD task and a GI task. The participants were 234 English as a foreign language (EFL) health science and medical major undergraduate students. Quantitative data from multiple sources were collected, including the graph familiarity questionnaire, the content knowledge test, the English writing test, and the graph-writing test. The findings from structural equation modeling analyses showed that these graph tasks elicit writers’ content knowledge and academic writing ability. Overall, graph familiarity had no significant impact on writers’ performance on either of the graph tasks, while content knowledge and writing ability had significant and positive effects on test performance. Content knowledge thus introduced a potential source of construct-irrelevant variance. The study has implications for the development and use of graph-based writing as a measure of academic writing.

      PubDate: 2016-03-13T23:08:00Z
      DOI: 10.1016/j.asw.2016.02.002
      Issue No: Vol. 28 (2016)
       
  • Writing assessment literacy: Surveying second language teachers’
           knowledge, beliefs, and practices
    • Authors: Deborah Crusan; Lia Plakans; Atta Gebril
      Pages: 43 - 56
      Abstract: Publication date: April 2016
      Source:Assessing Writing, Volume 28
      Author(s): Deborah Crusan, Lia Plakans, Atta Gebril
      Assessing student writing constitutes the major portion of second language writing teachers’ workloads; however, studies assessing and quantifying teachers’ writing assessment literacy (knowledge, beliefs, practices) are comparatively rare. In the present study, second language writing instructors from tertiary institutions (N=702) were surveyed. Data were collected with a 54-item survey instrument administered through SurveyMonkey®. Items were formulated to ascertain writing teachers’ backgrounds and perspectives on assessment using multiple choice, Likert-scale, and open-ended response items. Analysis focuses on four research questions: (1) How have second language writing teachers obtained assessment knowledge? (2) What do second language writing teachers believe about writing assessment? (3) What are the assessment practices of second language writing teachers? (4) What is the impact of linguistic background and teaching experience on writing assessment knowledge, beliefs, and practices? Teachers reported training in writing assessment through graduate courses, workshops, and conference presentations; however, nearly 26% of teachers in this survey had little or no training. The results also showed relative effects of linguistic background and teaching experience on teachers’ writing assessment knowledge, beliefs, and practices.

      PubDate: 2016-04-17T07:53:18Z
      DOI: 10.1016/j.asw.2016.03.001
      Issue No: Vol. 28 (2016)
       
  • Farewell to Holistic Scoring?
    • Authors: Liz Hamp-Lyons
      Abstract: Publication date: January 2016
      Source:Assessing Writing, Volume 27
      Author(s): Liz Hamp-Lyons


      PubDate: 2016-01-31T12:10:38Z
      DOI: 10.1016/j.asw.2015.12.002
      Issue No: Vol. 27 (2016)
       
  • Validation of a locally created and rated writing test used for placement
           in a higher education EFL program
    • Authors: Robert C. Johnson; A. Mehdi Riazi
      Abstract: Publication date: Available online 4 October 2016
      Source:Assessing Writing
      Author(s): Robert C. Johnson, A. Mehdi Riazi
      This paper reports a study conducted to validate a locally created and rated writing test. The test was used to inform a higher education institution’s decisions regarding placement of entering students into appropriate preparatory English program courses. An amalgam of two influential models – Kane’s (1992, 1994) interpretive model and Bachman’s (2005) and Bachman and Palmer’s (2010) assessment use argument – was used to build a validation framework. A mixed methods approach incorporating a diverse array of quantitative and qualitative data from various stakeholders, including examinees, students, instructors, staff, and administrators, guided the collection and analysis of evidence informing the validation. Results established serious doubts about the writing test, not only in terms of interpreted score meaning, but also the impact of its use on various stakeholders, and on teaching and learning. The study reinforces the importance of comprehensive validation efforts, particularly by test users, for all instruments informing decisions about test-takers, including writing tests and other types of direct performance assessments. Results informed a number of suggested changes regarding the rubric and rater training, among others, thus demonstrating the potential of validation studies as ‘road maps’ for immediate opportunities to improve both testing and decisions made based on testing.

      PubDate: 2016-10-10T08:05:10Z
      DOI: 10.1016/j.asw.2016.09.002
       
  • Ed.Board/Aims and scope
    • Abstract: Publication date: October 2016
      Source:Assessing Writing, Volume 30


      PubDate: 2016-09-21T12:04:53Z
       
  • Assessing students’ digital writing: Protocols for looking closely.
           Hicks, T. (Ed.) Teachers College Press: New York, NY (2015). 146 pp.,
           ISBN: 978-0-8077-5669-0.
    • Authors: Lin Sophie Teng
      Abstract: Publication date: Available online 31 August 2016
      Source:Assessing Writing
      Author(s): Lin Sophie Teng


      PubDate: 2016-09-02T12:15:27Z
       
  • Narrative and expository genre effects on students, raters, and
           performance criteria
    • Authors: Heejeong Jeong
      Abstract: Publication date: Available online 27 August 2016
      Source:Assessing Writing
      Author(s): Heejeong Jeong
      The effects of genre play an important role in the assessment of student writing. This study examines the effects of narrative and expository genres on student language proficiency, raters, and performance criteria. For this study, EFL students (n=180) from three proficiency levels (novice, intermediate, and advanced) wrote a narrative and an expository essay that were assessed by raters using four performance criteria: paragraph structure, content, form, and vocabulary. A multi-faceted Rasch measurement (MFRM) analysis showed that differences in the students’ scores between genres were not statistically significant overall, but scores did differ significantly by writing proficiency level. Novice students received significantly higher scores on narratives, while advanced students received significantly higher scores on expository essays, but there was no score difference for intermediate students. Raters showed greater variance when rating narratives than when rating expository texts. Narrative essays covered a wider range of student writing ability, while expository essays showed more centralization in writing scores. For the four performance criteria, vocabulary showed interactions with narrative and expository genres. Expository essays received significantly higher vocabulary scores than narrative essays. The results of this study have implications for the use of narrative and expository genres in writing assessment.

      PubDate: 2016-08-28T12:15:21Z
      DOI: 10.1016/j.asw.2016.08.006
       
  • Development and initial argument-based validation of a scoring rubric used
           in the assessment of L2 writing electronic portfolios
    • Authors: Sheila Parveen Lallmamode; Nuraihan Mat Daud; Noor Lide Abu Kassim
      Abstract: Publication date: Available online 27 August 2016
      Source:Assessing Writing
      Author(s): Sheila Parveen Lallmamode, Nuraihan Mat Daud, Noor Lide Abu Kassim
      Although writing electronic portfolios (ePortfolios) help learners communicate digitally and provide a platform for works to be better collected and presented, challenges are present in their assessment. This paper reports the development and validation of a writing ePortfolio scoring rubric for an action research course for L2 learners. Using Bachman’s (2005) ‘Assessment Use Argument’ as a basis, two main claims to support the validity of the rubric were examined: (1) the rubric is a reliable tool; and, (2) the rubric is relevant to the construct being measured. A mixed-method approach was used in the development and validation of the eight-criteria analytic and holistic evaluation scoring rubric. Thirteen raters evaluated thirty-eight ePortfolios in the study. The analyses of raters’ ratings using a many-facet Rasch measurement approach and raters’ individual standardised open-ended interviews indicated that overall the rubric’s analytic categories functioned appropriately to assess the intended construct. However, the criterion ‘Ease of Navigation’ was found to be misfitting due to differences in raters’ evaluation of the same ePortfolios. Overall, the argument-based validation indicated that the scoring rubric is a reliable and valid instrument for the purpose of assessing L2 writing ePortfolios in the context for which it was developed.

      PubDate: 2016-08-28T12:15:21Z
      DOI: 10.1016/j.asw.2016.06.001
       
  • Exploring the relationship of organization and connection with scores in
           integrated writing assessment
    • Authors: Lia Plakans; Atta Gebril
      Abstract: Publication date: Available online 25 August 2016
      Source:Assessing Writing
      Author(s): Lia Plakans, Atta Gebril
      Traditionally, second language writing assessment has employed writing tasks that require only a single skill; however, in many academic contexts, writing requires the integration of several abilities, including reading and listening. To improve authenticity, integrated tasks are increasingly used in the research and assessment of second language writing. Scholars have proposed discourse synthesis as an underlying construct for these tasks. This study investigated performances on integrated reading-listening-writing tasks to consider how organization and connection, subprocesses in discourse synthesis, are reflected in scores. Four hundred eighty responses to an integrated writing prompt were analyzed for organizational patterns, coherence, and cohesion in relation to test scores. Raters coded essays for type and appropriateness of organization and coherence quality, while computational analysis was used to look at cohesion features. The results indicate that organization and coherence were related to writing score, with quality improving as score increased. However, the cohesion markers analyzed in this study yielded no statistical differences across the score levels.

      PubDate: 2016-08-28T12:15:21Z
      DOI: 10.1016/j.asw.2016.08.005
       
  • “I can see that”: Developing shared rubric category interpretations
           through score negotiation
    • Authors: Jonathan Trace; Valerie Meier; Gerriet Janssen
      Abstract: Publication date: Available online 25 August 2016
      Source:Assessing Writing
      Author(s): Jonathan Trace, Valerie Meier, Gerriet Janssen
      Performance assessments using raters will always contain some subjectivity, and disagreement among raters necessitates reliable methods for resolving scores. Negotiation is one effective method to guide scoring decisions and reduce raters’ tendencies to be unexpectedly severe or lenient when scoring specific rubric categories or examinees. Beyond its utility for scoring, however, negotiation is also a resource for raters to co-construct interpretations about the language constructs being measured. This study uses quantitative and qualitative methods to trace how negotiation impacts raters’ scoring decisions and examine in detail how raters develop joint interpretations of rubric category criteria. Scores from the writing section of a high stakes English language placement exam (n =60) were analyzed using ANOVA and many-faceted Rasch measurement to determine which categories were frequently assigned discrepant scores and to estimate rater severity. Discourse analysis of six audiotaped negotiation sessions was then used to examine how raters’ understanding of rubric criteria converged over time. Our results indicate that through negotiation, raters used shared terminology and justifications to clarify ambiguous constructs and work to establish shared values. The results suggest that score negotiation influences scoring inferences and also creates affordances for raters to ground those inferences in shared constructions of meaning.

      PubDate: 2016-08-28T12:15:21Z
      DOI: 10.1016/j.asw.2016.08.001
       
  • Taking stock of portfolio assessment scholarship: From research to
           practice
    • Authors: Ricky Lam
      Abstract: Publication date: Available online 21 August 2016
      Source:Assessing Writing
      Author(s): Ricky Lam
      Portfolio assessment has been extensively investigated over the past two decades. Nonetheless, its broader applications in the first and second language writing classrooms remain inadequate. This paper emphasizes that theoretical and empirical research evidence is likely to inform the classroom-based implementation of portfolio assessment. The paper first introduces the origin, definitions, rationale, applications and characteristics of portfolio assessment, and then historicizes writing portfolio assessment scholarship according to the evolving trends of portfolio assessment development in both the first and second language writing contexts. Subsequently, a method section is included concerning how the theoretical and empirical scholarship was screened, selected and categorized for review in terms of three key themes: (1) research which supports classroom applications of portfolio assessment; (2) research which inhibits classroom-based portfolio assessment practices; and (3) research that needs future investigation on how to promulgate portfolio implementation. The review is followed by three pedagogical recommendations suggesting how teachers, administrators and programme directors can better develop learning-supportive portfolio assessment practices and have maximum exposure to pertinent professional learning. It is hoped that the paper advances portfolio assessment scholarship, predominantly with a view to using research evidence to inform classroom practices.

      PubDate: 2016-08-24T12:15:14Z
      DOI: 10.1016/j.asw.2016.08.003
       
  • Voice in timed L2 argumentative essay writing
    • Authors: Cecilia Guanfang Zhao
      Abstract: Publication date: Available online 18 August 2016
      Source: Assessing Writing
      Author(s): Cecilia Guanfang Zhao
      The concept of voice is included in various writing textbooks, learning standards, and assessment rubrics, indicating the importance of this element in writing instruction and assessment at both secondary and postsecondary levels. Researchers in second language (L2) writing, however, often debate the importance of voice in L2 writing. Due to the elusiveness of this concept, much of this debate remains at the theoretical level; few empirical studies exist that provide solid evidence to either support or refute the proposition that voice is an important concept to teach in L2 writing classrooms. To fill this gap, the present study empirically investigated the relationship between voice salience, as captured by an analytic rubric, and official TOEFL iBT argumentative essay scores in 200 timed L2 essays. Results showed that voice was a significant predictor of TOEFL essay scores, explaining about 25% of the score variance. Moreover, while each individual voice dimension was found to be strongly or moderately correlated with essay scores when examined in isolation, only the ideational dimension remained a significant predictor of text quality when the effects of the other dimensions were controlled for. Implications of these results for L2 writing instruction are discussed.

      PubDate: 2016-08-24T12:15:14Z
      DOI: 10.1016/j.asw.2016.08.004
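      As a rough illustration of how voice-dimension ratings can be related to essay scores, the sketch below runs a multiple regression on hypothetical data; the file and column names are assumptions, and this is not a reproduction of the author's analysis.

        # Minimal sketch (hypothetical data): regress essay scores on analytic
        # voice-dimension ratings to see which dimensions still predict text
        # quality once the other dimensions are controlled for.
        import pandas as pd
        import statsmodels.api as sm

        df = pd.read_csv("voice_ratings.csv")                 # hypothetical file
        dimensions = ["ideational", "affective", "presence"]  # assumed labels

        X = sm.add_constant(df[dimensions])                   # add intercept
        y = df["essay_score"]

        fit = sm.OLS(y, X).fit()
        print(fit.rsquared)   # share of score variance explained by voice
        print(fit.pvalues)    # dimensions significant with the others controlled

      With data shaped like this, fit.rsquared corresponds to the proportion of score variance attributable to the voice ratings, and fit.pvalues shows which individual dimensions survive once the others are held constant.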
       
  • “I feel disappointed”: EFL university students’ emotional responses towards teacher written feedback
    • Authors: Omer Hassan Ali Mahfoodh
      Abstract: Publication date: Available online 7 August 2016
      Source: Assessing Writing
      Author(s): Omer Hassan Ali Mahfoodh
      Studies on teacher written feedback in second language (L2) contexts have not given adequate attention to learners’ emotional responses towards teacher written feedback. Thus, this study examined the relationship between EFL university students’ emotional responses towards teacher written feedback and the success of their revisions. Data were collected using think-aloud protocols, students’ written texts, and semi-structured interviews. To obtain students’ emotional responses towards teacher written feedback, grounded theory was employed to analyse think-aloud protocols and semi-structured interviews. Teacher written feedback was tabulated and categorised using a coding scheme developed from Straub and Lunsford (1995) and Ferris (1997). The success of students’ revisions was analysed using an analytical scheme based on Conrad and Goldstein (1999). The results revealed that EFL university students’ emotional responses include acceptance of feedback, rejection of feedback, surprise, happiness, dissatisfaction, disappointment, frustration, and satisfaction. Some emotional responses could be attributed to harsh criticism, negative evaluation, and miscommunication between teachers and their students. The study also revealed that emotional responses can affect students’ understanding and utilisation of teacher written feedback.

      PubDate: 2016-08-08T12:14:54Z
      DOI: 10.1016/j.asw.2016.07.001
       
  • A Many-Facet Rasch analysis comparing essay rater behavior on an academic English reading/writing test used for two purposes
    • Authors: Sarah Goodwin
      Abstract: Publication date: Available online 4 August 2016
      Source: Assessing Writing
      Author(s): Sarah Goodwin
      Second language (L2) writing researchers have noted that various rater and scoring variables may affect ratings assigned by human raters (Cumming, 1990; Vaughan, 1991; Weigle, 1994, 1998, 2002; Cumming, Kantor, & Powers, 2001; Lumley, 2002; Barkaoui, 2010). Contrast effects (Daly & Dickson-Markman, 1982; Hales & Tokar, 1975; Hughes, Keeling, & Tuck, 1983), or how previous scores impact later ratings, may also color raters’ judgments of writing quality. However, little is known about how raters use the same rubric for different examinee groups. The present paper concerns an integrated reading and writing test of academic English used at a U.S. university for both admissions and placement purposes. Raters are trained to interpret the analytic scoring rubric similarly no matter which test type is scored. Using Many-Facet Rasch measurement (Linacre, 1989/1994), I analyzed scores over seven semesters, examining rater behavior on two test types (admissions or placement). Results indicated that, of 25 raters, five raters showed six instances of statistically significant bias on admissions or placement tests. The findings suggest that raters may be attributing scores to a wider range of writing ability levels on admissions than on placement tests. Implications for assessment, rater perceptions, and small-scale academic testing programs are discussed.

      PubDate: 2016-08-08T12:14:54Z
      DOI: 10.1016/j.asw.2016.07.004
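      The bias analysis above is carried out within Many-Facet Rasch measurement; the sketch below is only a rough screen for rater-by-test-type interactions on hypothetical data, not the Facets analysis used in the study.

        # Rough screen (not MFRM): compare each rater's score residuals on
        # admissions vs. placement tests; data layout is hypothetical.
        import pandas as pd
        from scipy import stats

        ratings = pd.read_csv("ratings.csv")  # columns: rater, test_type, score

        for rater, grp in ratings.groupby("rater"):
            resid = grp["score"] - grp["score"].mean()
            adm = resid[grp["test_type"] == "admissions"]
            plc = resid[grp["test_type"] == "placement"]
            if len(adm) > 1 and len(plc) > 1:
                t, p = stats.ttest_ind(adm, plc, equal_var=False)
                print(f"{rater}: t = {t:.2f}, p = {p:.3f}")

      A rater whose residuals differ systematically between the two test types would then be a candidate for the kind of bias interaction the study reports for five of its 25 raters.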
       
  • Responding to student writing online: Tracking student interactions with instructor feedback in a Learning Management System
    • Authors: Angela Laflen; Michelle Smith
      Abstract: Publication date: Available online 1 August 2016
      Source: Assessing Writing
      Author(s): Angela Laflen, Michelle Smith
      Instructor response to student writing increasingly takes place within Learning Management Systems (LMSs), which often make grades visible apart from instructor feedback by default. Previous studies indicate that students generally ascribe more value to grades than to instructor feedback, while instructors believe that feedback is most important. This study investigated how students interact with an LMS interface—an instance of Sakai—to access instructor feedback on their writing. Our blind study analyzed data from 334 students in 16 courses at a medium-sized, comprehensive private college to investigate the question: Does the rate at which students open attachments with instructor feedback differ if students can see their grades without opening the attachment? We compared two response methodologies: mode 1 made grades visible apart from feedback, and mode 2 required students to open attached feedback files to find their grades. The data for each mode were collected automatically by the LMS, retrieved, and retrospectively analyzed. The results show that making grades visible separately from feedback significantly reduced the rate at which students opened instructor feedback files and that timing also impacted students’ rate of access. These findings provide the basis for empirically informed best practices for grading and returning papers online.

      PubDate: 2016-08-03T12:14:49Z
      DOI: 10.1016/j.asw.2016.07.003
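      One straightforward way to compare attachment-open rates across the two response modes is a two-proportion test, sketched below with made-up counts; the study's own analysis of the Sakai logs is not reproduced here.

        # Illustrative only: compare the rate of opened feedback files under
        # mode 1 (grade visible separately) vs. mode 2 (grade inside the file).
        from statsmodels.stats.proportion import proportions_ztest

        opened = [120, 170]    # hypothetical counts of opened feedback files
        returned = [180, 190]  # hypothetical counts of files returned per mode

        print([o / n for o, n in zip(opened, returned)])  # open rate per mode
        stat, pval = proportions_ztest(count=opened, nobs=returned)
        print(f"z = {stat:.2f}, p = {pval:.4f}")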
       
  • K-12 multimodal assessment and interactive audiences: An exploratory analysis of existing frameworks
    • Authors: Ewa McGrail; Nadia Behizadeh
      Abstract: Publication date: Available online 30 July 2016
      Source: Assessing Writing
      Author(s): Ewa McGrail, Nadia Behizadeh
      Multimodal writing today often occurs through membership in an online, participatory culture; thus, based on the affordances of online compositions, the audience for student writers has shifted from imagined readers to actual, accessible readers and responders. Additionally, recent content and technology standards for students in US schools emphasize the importance of distributing multimodal compositions to wider audiences. In this article, we closely examine attention to interactive audience and collaboration in a purposive sample of kindergarten through 12th grade (K-12) assessment frameworks, as well as how these frameworks define multimodal composition. We found that multimodal composition is defined consistently across all frameworks as composition that includes multiple ways of communicating. However, many of the multimodal composition examples were non-interactive text types, even though many authors acknowledged the emergence of interactive online composition types that afford the writer the ability to communicate and collaborate with an audience. In addition, the frameworks reviewed tended to focus on the final product and less often on the process or on dynamic collaboration with the audience. In the discussion, implications for classroom teachers as well as considerations for researchers exploring the construct of online multimodal writing are offered.

      PubDate: 2016-08-03T12:14:49Z
      DOI: 10.1016/j.asw.2016.06.005
       
  • How students' ability levels influence the relevance and accuracy of their feedback to peers: A case study
    • Authors: Ivan Chong
      Abstract: Publication date: Available online 25 July 2016
      Source: Assessing Writing
      Author(s): Ivan Chong
      Traditionally, teachers play a central role in creating a learning environment that favors the implementation of peer assessment in writing. Nevertheless, students’ writing ability and how it factors into students’ provision of relevant (content-related) and accurate (language-related) written feedback are not considered. This is largely because most studies of peer assessment were conducted in tertiary settings, where researchers assume university students have attained a basic level of cognitive and linguistic development that empowers them to make judgments about their peers’ work. The present study, which was conducted in a Hong Kong secondary school, investigated this research gap by analyzing first drafts produced by a class of 16 Secondary 1 (Grade 7) students in a writing unit. The first section of the study reports students’ writing abilities in terms of content development and linguistic accuracy; findings in the subsequent section suggest that there is a strong, positive relationship between students’ writing abilities and the relevance and accuracy of their written feedback. This paper ends with two pedagogical implications for implementing peer assessment: alignment with pre-writing instruction and the development of marking foci based on students’ abilities.

      PubDate: 2016-07-28T12:14:43Z
      DOI: 10.1016/j.asw.2016.07.002
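      With a class of only 16 students, a rank-order correlation between writing ability and feedback quality is one plausible way to quantify the relationship described above; the sketch below uses hypothetical file and column names and is not the author's analysis.

        # Hypothetical sketch: relate each student's writing-ability score to
        # the relevance and accuracy of the written feedback they gave peers.
        import pandas as pd
        from scipy.stats import spearmanr

        df = pd.read_csv("peer_feedback.csv")  # assumed: one row per student

        for outcome in ["feedback_relevance", "feedback_accuracy"]:
            rho, p = spearmanr(df["writing_ability"], df[outcome])
            print(f"{outcome}: rho = {rho:.2f}, p = {p:.3f}")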
       
  • Are TOEFL iBT® writing test scores related to keyboard type? A survey of keyboard-related practices at testing centers
    • Authors: Guangming Ling
      Abstract: Publication date: Available online 18 July 2016
      Source: Assessing Writing
      Author(s): Guangming Ling
      The strength of a computer-based writing test, such as the TOEFL iBT® Writing Test, lies in its capability to assess all examinees under the same conditions so that scores reflect the targeted writing abilities rather than differences in testing conditions, such as types of keyboards. The familiarity and proficiency examinees have with a specific type of keyboard could affect their efficiency in writing essays and introduce construct-irrelevant variance, although little research on this question is available in the literature. To explore this, we surveyed 2214 TOEFL iBT testing centers in 134 countries on practices related to keyboard type and analyzed the centers’ responses and the TOEFL iBT scores of examinees from these centers. Results revealed that (a) most testing centers used the U.S. standard English keyboard (USKB) for the test, but a small proportion of centers used a country-specific keyboard (CSKB) converted to the USKB layout; and (b) TOEFL iBT Writing scores appear to be significantly associated with keyboard and overlay type in only 10 countries, with trivial or small score differences associated with keyboard type. These findings suggest that current practices related to keyboard type have little or no practical effect on examinees’ TOEFL iBT Writing scores.

      PubDate: 2016-07-23T12:14:37Z
      DOI: 10.1016/j.asw.2016.04.001
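      The per-country score comparison described above can be summarized with a standardized mean difference, as in the hypothetical sketch below; the data file and column names are assumptions, and this is not the analysis reported in the article.

        # Hypothetical sketch: Cohen's d for USKB vs. CSKB Writing scores,
        # computed separately within each country.
        import numpy as np
        import pandas as pd

        df = pd.read_csv("writing_scores.csv")  # columns: country, keyboard, score

        for country, grp in df.groupby("country"):
            us = grp.loc[grp["keyboard"] == "USKB", "score"]
            cs = grp.loc[grp["keyboard"] == "CSKB", "score"]
            if len(us) > 1 and len(cs) > 1:
                pooled_sd = np.sqrt((us.var(ddof=1) + cs.var(ddof=1)) / 2)
                d = (us.mean() - cs.mean()) / pooled_sd  # Cohen's d
                print(f"{country}: d = {d:.2f}")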
       
  • Rubrics and corrective feedback in ESL writing: A longitudinal case study of an L2 writer
    • Authors: Estela Ene; Virginia Kosobucki
      Abstract: Publication date: Available online 15 July 2016
      Source: Assessing Writing
      Author(s): Estela Ene, Virginia Kosobucki
      In teaching and assessing L2 writing, the ideal combination of Corrective Feedback (CF) and rubric use is yet to be determined. When rubrics are used with multiple drafts and assignments, teachers may wonder if other forms of CF are still necessary or useful. This longitudinal case study follows a learner’s progress over the course of one year in order to explore the relationship between CF and rubrics as complementary parts of a repertoire of pedagogical instruments that together support students’ development as language learners and writers. The study takes place in a context where rubrics are institutionally mandated and additional CF is optional. This classroom-based, teacher-led, action-research study finds that, when institutions require the use of form-focused CF and rubrics, it is possible that they discourage teacher written comments, thus depriving the student of personalized feedback. The learner improved her accuracy after receiving both form-focused CF and rubrics, but she valued marginal and end comments more, although she received these sparingly. It appears that institutionally mandated rubrics have some limiting effects on addressing aspects of writing other than form and can leave learners unsatisfied. We recommend supplementing rubrics with individualized comments when responding to and assessing L2 writing.

      PubDate: 2016-07-23T12:14:37Z
      DOI: 10.1016/j.asw.2016.06.003
       
  • Ed.Board/Aims and scope
    • Abstract: Publication date: July 2016
      Source:Assessing Writing, Volume 29


      PubDate: 2016-07-23T12:14:37Z
       
  • “Voice” in children’s science arguments: Aligning assessment criteria with genre and discipline
    • Authors: Catherine L. O’Hallaron; Mary J. Schleppegrell
      Abstract: Publication date: Available online 13 July 2016
      Source: Assessing Writing
      Author(s): Catherine L. O’Hallaron, Mary J. Schleppegrell
      Rubrics commonly used in the U.S. to assess elementary students’ writing often ask raters to score for “voice.” However, voice is not a unitary construct that can be applied across genres and disciplines. In this article, we draw on functional linguistics to describe features of voice in science writing. We then review national standards, state curriculum documents, assessments, and a popular commercial writing program, revealing that teachers get little guidance in understanding disciplinary and genre differences in the ways an authorial voice can be realized. We present a case study reporting on assessment of 2nd and 4th grade students’ science arguments after instruction in voice features. Analysis of raters’ scores and evaluative comments on that writing suggests a potential mismatch between teachers’ expectations for voice and the logical arguments emphasized in standards recently adopted by a majority of U.S. states. We call for more differentiated rubrics for assessing voice to inform robust instruction that prepares students to write in different ways across genres and subject areas.

      PubDate: 2016-07-15T12:07:06Z
      DOI: 10.1016/j.asw.2016.06.004
       
  • Assessment Myths: Applying Second Language Research to Classroom Teaching, L. Plakans, A. Gebril. University of Michigan Press (2015). 171 pp., ISBN: 978-0-472-03581-6
    • Authors: Diane Schmitt
      Abstract: Publication date: Available online 21 June 2016
      Source: Assessing Writing
      Author(s): Diane Schmitt


      PubDate: 2016-06-21T22:25:34Z
       
  • Ed.Board/Aims and scope
    • Abstract: Publication date: April 2016
      Source:Assessing Writing, Volume 28


      PubDate: 2016-04-22T08:47:11Z
       
  • Very Like a Whale: The Assessment of Writing Programs, E.M. White, N. Elliot, I. Peckham. Utah State University Press, Logan (2015), ISBN: 978-0-87421-985-2
    • Authors: David Slomp
      Abstract: Publication date: Available online 25 January 2016
      Source: Assessing Writing
      Author(s): David H. Slomp


      PubDate: 2016-01-31T12:10:38Z
       
  • Ed.Board/Aims and scope
    • Abstract: Publication date: January 2016
      Source:Assessing Writing, Volume 27


      PubDate: 2016-01-31T12:10:38Z
       
 
 