Authors:Amy Deschenes, Meg McMahon Pages: 2 - 22 Abstract: Objectives – To understand how many undergraduate and graduate students use generative AI as part of their academic work, how often they use it, and for what tasks they use it. We also sought to identify how trustworthy students find generative AI and how they would feel about a locally maintained generative AI tool. Finally, we explored student interest in trainings related to using generative AI in academic work. This survey will help librarians better understand the rate at which generative AI is being adopted by university students and the need for librarians to incorporate generative AI into their work. Methods – A team of three library staff members and one student intern created, executed, and analyzed a survey of 360 undergraduate and graduate students at Harvard University. The survey was distributed via email lists and at cafes and libraries throughout campus. Data were collected and analyzed using Qualtrics. Results – We found that nearly 65% of respondents have used or plan to use generative AI chatbots for academic work, even though most respondents (65%) do not find their outputs trustworthy enough for academic work. The findings show that students actively use these tools but desire guidance around effectively using them. Conclusion – This research shows students are engaging with generative AI for academic work but do not fully trust the information that it produces. Librarians must be at the forefront of understanding the significant impact this technology will have on information-seeking behaviors and research habits. To effectively support students, librarians must know how to use these tools to advise students on how to critically evaluate AI output and effectively incorporate it into their research. PubDate: 2024-06-14 DOI: 10.18438/eblip30512 Issue No:Vol. 19, No. 2 (2024)
Authors:Margaret A Hoogland, Gerald Natal, Robert Wilmott, Clare F. Keating, Daisy Caruso Pages: 23 - 50 Abstract: Objective – Beginning in Fiscal Year 2023, a university initiated a multi-year transition to an incentive-based budget model, under which the University Libraries budget would eventually be dependent upon yearly contributions from colleges. Such a change could result in the colleges having a more profound interest in library services and resources. In anticipation of any changes in thoughts and perceptions on existing University Libraries services, researchers crafted a survey for administrators, faculty, and staff focused on academic units related to the health sciences. The collected information would inform library budget decisions with the goal of optimizing support for research and educational interests. Methods – An acquisitions and collection management librarian, electronic resources librarian, two health science liaisons, and a staff member reviewed and considered distributing validated surveys to health science faculty, staff, and administrators. Ultimately, researchers concluded that a local survey would allow the University Libraries to address health science community needs and gauge use of library services. In late October 2022, the researchers obtained Institutional Review Board approval and distributed the online survey from mid-November to mid-December 2022. Results – This survey collected 112 responses from health science administrators, faculty, and staff. Many faculty and staff members had used University Libraries services for more than 16 years. By contrast, most administrators started using the library within the past six years. Cost-share agreements intrigued participants as mechanisms for maintaining existing subscriptions or paying for new databases and e-journals. Most participants supported improving immediate access to full-text articles instead of relying on interlibrary loans. 
Participants desired to build upon existing knowledge of Open Access publishing. Results revealed inefficiencies in how the library communicates changes in collections (e.g., journals, books) and services. Conclusion – A report of the study findings sent to library administration fulfilled the research aim to inform budget decision-making. With the possibility that the new internal budgeting model will reduce funds to both academic programs and the library, the study supports consideration of internal cost-sharing agreements. Findings exposed a lack of awareness of the library’s efforts at transparency in decision-making, which requires exploration of alternative communication methods. Research findings also revealed awareness of Open Educational Resources and Open Access publishing as areas that deserve heightened promotional efforts from librarians. Finally, this local survey and methodology provide a template for potential use at other institutions. PubDate: 2024-06-14 DOI: 10.18438/eblip30379 Issue No:Vol. 19, No. 2 (2024)
Authors:Sarah LeMire Pages: 51 - 62 Abstract: Objective – This study was designed to explore the potential academic impact of open textbooks in writing courses. Methods – The researcher used statistical analyses of course outcomes for over 1,000 sections to examine the impact of OER usage on course GPA in three writing courses at an R1 university. Results – Study results reveal that using an OER textbook is associated with an overall increase in class GPA. Conclusion – When advocating for the use of OER in campus writing courses, librarians can point to findings that suggest improved student outcomes after a switch to OER in those courses. PubDate: 2024-06-14 DOI: 10.18438/eblip30490 Issue No:Vol. 19, No. 2 (2024)
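The abstract does not specify which statistical analyses were used; as an illustration only, one plausible approach — a two-sample t-test comparing section-level GPAs before and after an OER switch, with invented numbers rather than the study's data — might be sketched as:

```python
# Hypothetical section-level GPA comparison; the study's actual dataset
# and statistical tests are not described in the abstract.
from statistics import mean
from scipy import stats

pre_oer_gpas = [2.9, 3.0, 3.1, 2.8, 3.0, 2.95]    # sections before OER adoption
post_oer_gpas = [3.1, 3.2, 3.0, 3.15, 3.25, 3.1]  # sections after OER adoption

t_stat, p_value = stats.ttest_ind(post_oer_gpas, pre_oer_gpas)
print(f"mean before: {mean(pre_oer_gpas):.2f}, after: {mean(post_oer_gpas):.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

A positive t statistic with a small p value would support an association between OER adoption and higher class GPA, though observational comparisons like this cannot establish causation.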
Authors:Johnson Mulongo Masinde, Frankline Mugambi, Daniel Muthee Wambiri Pages: 63 - 73 Abstract: Objective – The aim of this study is to examine the conceptualization and pedagogical approaches being used in Kenyan universities to teach and learn information literacy to determine if they are effective in addressing the information needs of the 21st century. The findings of this study will act as a guide to educational stakeholders in the design, review, and implementation of the information literacy curriculum. The findings will also create awareness among librarians of the diverse concepts in information literacy and hopefully inform their practice when delivering information literacy instruction. Additionally, future researchers can leverage the insights garnered from this study to advance their own work, thereby contributing to the ongoing growth of knowledge in this field. Methods – This study employed a descriptive research design to collect qualitative data from the webpages of seven purposively selected universities: three private and four public. The seven academic libraries had an active online presence and adequate documentation of information literacy. The data were analyzed using thematic analysis. Results – The research findings show a lack of consistency in the conceptualization of information literacy. In addition, the findings demonstrate a link between information literacy conceptualization and practice. Many of the online tutorials and information literacy documentation failed to address all the aspects of information literacy. Conclusion – In order to effectively address 21st century information needs, academic libraries should reevaluate their conceptualization of information literacy. This should be followed by a comprehensive evaluation of their information literacy instruction to ensure it covers all aspects of information literacy. 
It is essential for these libraries to provide information literacy instruction to students throughout their academic journey rather than just focusing on first-year students. Moreover, structured assessments of students should be implemented to gain feedback on the effectiveness of these instruction programs. PubDate: 2024-06-14 DOI: 10.18438/eblip30370 Issue No:Vol. 19, No. 2 (2024)
Authors:Anita Phul, Hélène Gorring, David Stokes Pages: 74 - 93 Abstract: Objective – This project sought to build upon a reader development tool, Many Roads to Wellbeing, developed by a health librarian in a mental health NHS Trust in Birmingham, England, by piloting reading group sessions in the main public library in the city using wellbeing-themed stories and poems. The aim was to establish whether a “wellbeing through reading” program can help reading group participants to experience key facets of wellbeing as defined by the Five Ways to Wellbeing. Methods – The program developers ran 15 monthly sessions at the Library of Birmingham. These were advertised using the Meetup social media tool to reach a wider client base than existing library users: members of the public who had self-prescribed to the group and were actively seeking wellbeing. A health librarian selected wellbeing-themed short stories and poems and facilitated read-aloud sessions. The Library of Birmingham provided facilities and a member of staff to help support each session. Results – A total of 131 participants attended the 15 sessions that were hosted. There was a 95% response rate to the questionnaire survey. Of the respondents, 91% felt that sessions had helped them to engage with all of the Five Ways to Wellbeing. The three elements of Five Ways to Wellbeing that participants particularly engaged with were Connect (n=125), Take Notice (n=123), and Keep Learning (n=124). Conclusion – The reading program proved to be successful in helping participants to experience multiple dimensions of wellbeing. This project presents a new way of evaluating a bibliotherapy scheme for impact on wellbeing, as well as being an example of effective partnership working between the healthcare sector and a public library. PubDate: 2024-06-14 DOI: 10.18438/eblip30475 Issue No:Vol. 19, No. 2 (2024)
Authors:Elizabeth Sterner Pages: 94 - 108 Abstract: Objective – The purpose of this research project was to examine the state of library research guides supporting systematic reviews in the United States as well as services offered by the libraries of these academic institutions. This paper highlights the informational background, internal and external educational resources, informational and educational tools, and support services offered throughout the stages of a systematic review. Methods – The methodology centered on a content analysis review of systematic review library research guides currently available in 2023. An incognito search in Google as well as hand searching were used to identify the relevant research guides. Keywords searched included: academic library systematic review research guide. Results – The analysis of 87 systematic review library research guides published in the United States showed that they vary in terms of resources and tools shared, depth of each stage, and support services provided. Results showed higher levels of information and informational tools shared compared to internal and external education and educational tools. Findings included high coverage of the introductory, planning, guidelines and reporting standards, conducting searches, and reference management stages. Support services offered fell into three potential categories: consultation and training; acknowledgement; and collaboration and co-authorship. The most referenced systematic review software tools and resources varied from subscription-based tools (e.g., Covidence and DistillerSR) to open access tools (e.g., Rayyan and abstrackr). Conclusion – A systematic review library research guide is not the type of research guide that you can create and forget about. Librarians should consider the resources, whether educational or informational, and the depth of coverage when developing or updating systematic review research guides or support services. 
Maintaining a systematic review research guide and support service requires continual training and sustained familiarity with all resources and tools linked in the guide. PubDate: 2024-06-14 DOI: 10.18438/eblip30405 Issue No:Vol. 19, No. 2 (2024)
Authors:Abbey Lewis Pages: 127 - 129 Abstract: A Review of: Moulaison-Sandy, H. (2023). What is a person? Emerging interpretations of AI authorship and attribution. Proceedings of the Association for Information Science & Technology, 60(1), 279–290. https://doi.org/10.1002/pra2.788 Objective – To examine how and which academic libraries are responding to emerging guidelines on citing ChatGPT in the American Psychological Association (APA) style through guidance published on the libraries’ websites. Design – Analysis of search results and webpage content. Setting – Websites of academic libraries in the United States. Subjects – Library webpages addressing how ChatGPT should be cited in APA format. Methods – Google search results for academic library webpages providing guidance on citing ChatGPT in APA format were retrieved on a weekly basis using the query “chatgpt apa citation site:.edu” over a six-week period that covered the weeks before and immediately after the APA issued official guidance for citing ChatGPT. The first three pages of relevant search results were coded in MAXQDA and analyzed to determine the type of institution, using the Carnegie Classification and membership in the Association of American Universities (AAU). As this was a period during which APA style recommendations for citing ChatGPT were shifting, the accuracy of the library webpage content was also assessed and tracked across the studied time period. Main Results – During the six-week period, the number of library webpages with guidance for citing ChatGPT in APA format increased. Although doctoral universities accounted for the largest number of webpages each week, baccalaureate colleges, baccalaureate/associate’s colleges, and associate’s colleges were also well-represented in the search results. Institutions belonging to the AAU were represented by a relatively small number throughout the study. 
Over half of the pages made some mention of APA’s recommendations being interim or evolving, though the exact number fluctuated throughout the period. Prior to the collection period, APA had revised its initial recommendations to cite ChatGPT as a webpage or as personal communication, but 40% to 60% of library webpages continued to offer this outdated guidance. Of the library webpages, 13% to 40% provided verbatim guidance from ChatGPT responses on how it should be cited. The final two weeks of the collection period occurred after April 7, 2023, when APA had published official recommendations for citing ChatGPT. In the week following this change, none of the webpages in the first three pages of results had been updated to fully capture the new recommendations. The study analyzed the nine webpages appearing in the first page of results for the second week after APA’s official recommendations were published, showing that three linked to the APA’s blog, zero provided further explanation on how to apply the recommendations, five included outdated guidance, and three gave guidance from ChatGPT’s responses to questions on how it should be cited. Conclusion – The author sees the results of the study as reflecting three interrelated components: a new technology, gaps in librarians’ knowledge related to large language models (LLMs) and how they are currently being discussed in terms of authorship, and Google’s inability to rank the results in a way that prioritizes correct information. The substantial presence of institutions serving undergraduates leads the author to conclude that this is the population most in need of guidance for citing ChatGPT and the responsiveness on the part of the librarians shows an understanding of this need, even if the guidance itself is inaccurate. PubDate: 2024-06-14 DOI: 10.18438/eblip30514 Issue No:Vol. 19, No. 2 (2024)
Authors:Mary-Kathleen Grams Pages: 130 - 132 Abstract: A Review of: Adetayo, A. J. (2023). ChatGPT and librarians for reference consultations. Internet Reference Services Quarterly, 27(3), 131–147. https://doi.org/10.1080/10875301.2023.2203681 Objective – To investigate students’ use of ChatGPT and its potential advantages and disadvantages compared to reference librarians at a university library. Design – Survey research. Setting – A university library in Nigeria. Subjects – Students familiar with ChatGPT (n=54) who were enrolled in a library users’ education course. Methods – A survey was conducted in a sample of undergraduate students enrolled in a library users’ education course, who had previously used ChatGPT. Participants were asked questions based on six categories that reflected frequency of use, types of inquiries, frequency of reference consultations, desire to consult reference librarians despite the availability of ChatGPT, and potential advantages and disadvantages of ChatGPT compared to reference librarians. A 4-point Likert scale was used to measure the responses from often to never, strongly agree to strongly disagree, and rarely to frequently. Main Results – The sample of students who participated (n=54) was a diverse group whose ages ranged from below 20 (35.2%) to above 30 years (31.5%) and represented a variety of fields of study, such as engineering, business and social sciences, arts, law, sciences, and basic and medical sciences. Regarding frequency of use, the author reported that 40.7% of participants occasionally used ChatGPT, and 26.1% and 16.7% used it frequently or very frequently, respectively. Of the five options that represented types of inquiries (religious, political, academic, entertainment, and work), academic and work-related inquiries were topics most often searched in ChatGPT. Participants indicated that they consulted reference librarians occasionally (40.8%), frequently (37%), or rarely (22.2%). 
Most students (87%) would continue to consult reference librarians despite the availability of ChatGPT. For questions that compared ChatGPT to reference librarians, four options were provided to describe potential advantages and four options were provided to describe potential disadvantages. Most students agreed or strongly agreed that ChatGPT is more user friendly (83.4%), that it includes a broad knowledge base (90.7%), is easily accessible (83.3%), and saves time by responding to questions quickly (98%) compared to reference librarians. Fewer than half of the students agreed or strongly agreed that ChatGPT’s knowledge base is not up to date (47.2%). Most agreed or strongly agreed that it cannot comprehend some questions (72.3%), that it cannot read emotions as a librarian would (74.1%), and that responses to questions may be incorrect (66.6%). The potential advantage with the strongest response score was that ChatGPT saves time by responding to questions quickly (mean 3.52). The potential disadvantage with the strongest response score was ChatGPT could not read emotions as a librarian would (mean 2.91). Conclusion – Students from an academic institution acknowledged the potential advantages and disadvantages of ChatGPT over reference librarians, yet the majority of students would continue to utilize reference librarian services. The author suggests that ChatGPT is a versatile and useful tool as a supplement rather than a replacement for knowledgeable and personable reference librarians. Based on the results of the study, the author emphasizes the importance of interpersonal skills and enhanced accessibility of reference librarians outside of typical work hours. PubDate: 2024-06-14 DOI: 10.18438/eblip30518 Issue No:Vol. 19, No. 2 (2024)
Authors:Kristy Hancock Pages: 133 - 135 Abstract: A Review of: Xiao, J., & Gao, W. (2020). Connecting the dots: reader ratings, bibliographic data, and machine-learning algorithms for monograph selection. The Serials Librarian, 78(1-4), 117-122. https://doi.org/10.1080/0361526X.2020.1707599 Objective – To illustrate how machine-learning book recommender systems can help librarians make collection development decisions. Design – Data analysis of publicly available book sales rankings and reader ratings. Setting – The internet. Subjects – 192 New York Times hardcover fiction best seller titles from 2018, and 1,367 Goodreads ratings posted in 2018. Methods – Data were collected using Application Programming Interfaces. The researchers retrieved weekly hardcover fiction best seller rankings published by the New York Times in 2018 in CSV file format. All 52 files, each containing bibliographic data for 15 hardcover fiction titles, were combined and duplicate titles removed, resulting in 192 unique best seller titles. The researchers retrieved reader ratings of the 192 best seller titles from Goodreads. The ratings were limited to those posted in 2018 by the top Goodreads reviewers. A Bayes estimator produced a list of the top ten highest rated New York Times best sellers. The researchers built the recommender system using Python and employed several content-based and collaborative filtering recommender techniques (e.g., cosine similarity, term frequency-inverse document frequency, and matrix factorization algorithms) to identify novels similar to the highest rated best sellers. Main Results – Each recommender technique generated a different list of novels. Conclusion – The main finding from this study is that recommender systems can simplify collection development for librarians and facilitate greater access to relevant library materials for users. 
Academic libraries can use the same recommender techniques employed in the study to identify titles similar to highly circulated monographs or frequently requested interlibrary loans. There are several limitations to using recommender systems in libraries, including privacy concerns when analyzing user behaviour data and potential biases in machine-learning algorithms. PubDate: 2024-06-14 DOI: 10.18438/eblip30521 Issue No:Vol. 19, No. 2 (2024)
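The content-based technique named in the study (TF-IDF vectors compared by cosine similarity) can be sketched in a few lines; the titles and descriptions below are invented placeholders, not the study's New York Times or Goodreads data:

```python
# Minimal content-based recommender sketch using techniques named in the
# study: TF-IDF document vectors compared by cosine similarity. The book
# descriptions are invented placeholders, not the study's dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

books = {
    "Book A": "detective investigates a murder in a small coastal town",
    "Book B": "a detective hunts a serial killer through city streets",
    "Book C": "two friends road trip across the country one summer",
}
titles = list(books)
tfidf = TfidfVectorizer(stop_words="english").fit_transform(books.values())
sim = cosine_similarity(tfidf)

# For a seed title, rank the other books by similarity score.
seed = titles.index("Book A")
ranked = sorted(
    ((titles[j], sim[seed, j]) for j in range(len(titles)) if j != seed),
    key=lambda pair: pair[1],
    reverse=True,
)
print(ranked)
```

Because "Book B" shares the term "detective" with the seed while "Book C" shares nothing, it ranks first — the same logic, scaled up, would surface novels resembling highly circulated monographs.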
Authors:Matthew Chase Pages: 136 - 138 Abstract: A Review of: Rodriguez, S., & Mune, C. (2022). Uncoding library chatbots: Deploying a new virtual reference tool at the San Jose State University Library. Reference Services Review, 50(3), 392-405. https://doi.org/10.1108/RSR-05-2022-0020 Objective – To describe the development of an artificial intelligence (AI) chatbot to support virtual reference services at an academic library. Design – Case study. Setting – A public university library in the United States. Subjects – 1,682 chatbot-user interactions. Methods – A university librarian and two graduate student interns researched and developed an AI chatbot to meet virtual reference needs. Developed using chatbot development software, Dialogflow, the chatbot was populated with questions, keywords, and other training phrases entered during user inquiries, text-based responses to inquiries, and intents (i.e., programmed mappings between user inquiries and chatbot responses). The chatbot utilized natural language processing and AI training for basic circulation and reference questions, and included interactive elements and embeddable widgets supported by Kommunicate (i.e., a bot support platform for chat widgets). The chatbot was enabled after live reference hours were over. User interactions with the chatbot were collected across 18 months since its launch. The authors used analytics from Kommunicate and Dialogflow to examine user interactions. Main Results – User interactions increased gradually since the launch of the chatbot. The chatbot logged approximately 44 monthly interactions during the spring 2021 term, which increased to approximately 137 monthly interactions during the spring 2022 term. The authors identified the most common reasons for users to engage the chatbot, using the chatbot’s triggered intents from user inquiries. 
These reasons included information about hours for the library building and live reference services, finding library resources (e.g., peer-reviewed articles, books), getting help from a librarian, locating databases and research guides, information about borrowing library items (e.g., laptops, books), and reporting issues with library resources. Conclusion – Libraries can successfully develop and train AI chatbots with minimal technical expertise and resources. The authors offered user experience considerations from their experience with the project, including editing library FAQs to be concise and easy to understand, testing and ensuring chatbot text and elements are accessible, and continuous maintenance of chatbot content. Kommunicate, Dialogflow, Google Analytics, and Crazy Egg (i.e., a web usage analytics tool) could not provide more in-depth user data (e.g., user clicks, scroll maps, heat maps); the authors plan to explore other usage analysis software to collect these data. The authors noted that only 10% of users engaged the chatbot beyond the initial welcome prompt, indicating that more research and user testing are needed on how to facilitate user engagement. PubDate: 2024-06-14 DOI: 10.18438/eblip30523 Issue No:Vol. 19, No. 2 (2024)
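Dialogflow's intents map user inquiries to canned responses via training phrases. As a toy illustration of that idea only — not Dialogflow's actual natural language processing, and with invented intents and responses — a simple keyword-overlap matcher might look like:

```python
# Toy intent matcher: each intent is defined by training keywords and a
# canned response; an inquiry is routed to the intent whose keywords
# overlap it most. Dialogflow's real NLP is far more sophisticated, and
# the intents and responses here are invented examples.
INTENTS = {
    "library_hours": {
        "keywords": {"hours", "open", "close", "closing"},
        "response": "The library is open 8am-10pm on weekdays.",
    },
    "find_articles": {
        "keywords": {"article", "articles", "peer-reviewed", "journal"},
        "response": "Try the databases page to search for articles.",
    },
    "ask_librarian": {
        "keywords": {"librarian", "help", "person", "chat"},
        "response": "You can reach a librarian via live chat during service hours.",
    },
}

def match_intent(inquiry: str) -> str:
    words = set(inquiry.lower().split())
    best = max(INTENTS, key=lambda name: len(words & INTENTS[name]["keywords"]))
    if not words & INTENTS[best]["keywords"]:
        # No keyword overlap at all: fall back to a default reply.
        return "Sorry, I didn't understand. Try asking a librarian."
    return INTENTS[best]["response"]

print(match_intent("what time does the library close"))
```

The 10% engagement figure above suggests most of the design effort in a real deployment goes into the welcome prompt and fallback behaviour rather than the matching itself.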
Authors:David Dettman Pages: 139 - 141 Abstract: A Review of: Subaveerapandiyan, A., Sunanthini, C., & Amees, M. (2023). A study on the knowledge and perception of artificial intelligence. IFLA Journal, 49(3), 503–513. https://doi.org/10.1177/03400352231180230 Objective – To assess the knowledge, perception, and skills of library and information science (LIS) professionals related to artificial intelligence (AI). Design – A structured questionnaire of 45 statements was distributed to 469 LIS professionals via Google Forms to collect primary data; 245 participants responded. Setting – University and college libraries in Zambia. Subjects – Zambian library and information science professionals.
Methods – A descriptive approach was employed for the study. Data were gathered via a questionnaire. “The objective was to assess the statistical relationship between the knowledge, perception, and skills of LIS professionals (the independent variables) and AI (the dependent variable)” (Subaveerapandiyan et al., 2023, p. 506). The survey used a 5-point Likert scale with (1) strongly disagree being the lowest score and (5) strongly agree the highest. Means and standard deviations are included in data display tables. Thematic analysis was employed to analyze the data, and SPSS was used for statistical analysis.
Main Results – Survey results are presented in three tables. Table 1, “Awareness of AI among LIS professionals,” contains 21 statements related to AI use in various library environments and services, including reference (finding articles and citations, content summarization, detecting misinformation), circulation of library materials, security and surveillance, character recognition and document preservation, research data management, language translation, and others. The authors note that 44.1 percent of the respondents agreed that “AI is essential for the effectiveness and efficiency of library service delivery, enabling libraries to enhance and offer dynamic services for their users” (Subaveerapandiyan et al., 2023, p. 506). Table 2, “Perception of AI among LIS professionals,” contains 10 statements. Over 85 percent of respondents either strongly agreed or agreed that AI “makes library staff lazy” while 58.1 percent either strongly agreed or agreed that AI is a “threat to librarians’ employment” (Subaveerapandiyan et al., 2023, p. 506). The authors note that the “respondents also indicated barriers to the adoption of AI in libraries, such as the lack of LIS professionals’ skills and budgetary constraints” (Subaveerapandiyan et al., 2023, p. 506). Table 3 lists 13 competencies required by library professionals in the AI era. The majority of the respondents (an average of 65 percent) were in strong agreement that “electronic communication, hardware and software, Internet applications, computing and networking, cyber security and network management, data quality control, data curation, database management … are necessary competencies required by LIS professionals for them to be proficient in AI” (Subaveerapandiyan et al., 2023, p. 506). PubDate: 2024-06-14 DOI: 10.18438/eblip30436 Issue No:Vol. 19, No. 2 (2024)
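The descriptive statistics reported in the study's tables — means and standard deviations of 5-point Likert ratings — can be reproduced in a few lines; the responses below are invented, not the study's data:

```python
# Descriptive statistics for one Likert statement: mean and standard
# deviation of 5-point ratings (1 = strongly disagree ... 5 = strongly
# agree). The responses are invented examples.
from statistics import mean, stdev

responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]  # one statement's ratings
print(f"mean = {mean(responses):.2f}, sd = {stdev(responses):.2f}")
```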
Authors:Samantha Kaplan Pages: 142 - 144 Abstract: A Review of: Wang, Y. (2022). Using machine learning and natural language processing to analyze library chat reference transcripts. Information Technology and Libraries, 41(3). https://doi.org/10.6017/ital.v41i3.14967 Objective – The study sought to develop a model to predict if library chat questions are reference or non-reference. Design – Supervised machine learning and natural language processing. Setting – College of New Jersey academic library. Subjects – 8,000 Springshare LibChat transactions collected from 2014 to 2021. Methods – The chat logs were downloaded into Excel, cleaned, and individual questions were labelled reference or non-reference by hand. Labelled data were preprocessed to remove nonmeaningful and stop words, and reformatted to lowercase. Data were then stemmed to group words with similar meaning. The feature of question length was then added and data were transformed from text to numeric for text vectorization. Data were then divided into training and testing sets. The Python packages Natural Language Toolkit (NLTK) and scikit-learn were used for analysis, building random forest and gradient boosting models which were evaluated via confusion matrix. Main Results – Both models performed very well in precision, recall and accuracy, with the random forest model having better overall results than the gradient boosting model, as well as a more efficient fit time, though slightly longer prediction time. Conclusion – High volume library chat services could benefit from utilizing machine learning to develop models that inform plugins or chat enhancements to filter chat queries quickly. PubDate: 2024-06-14 DOI: 10.18438/eblip30527 Issue No:Vol. 19, No. 2 (2024)
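The pipeline described in the review — lowercasing, stemming, vectorization, a train/test split, and a random forest evaluated against held-out questions — can be sketched with NLTK and scikit-learn. The labelled questions below are invented stand-ins for the study's 8,000 LibChat transactions:

```python
# Sketch of the study's pipeline: lowercase and stem chat questions,
# vectorize them, then train a random forest to label each question as
# reference or non-reference. The tiny labelled dataset is invented,
# not the study's transcripts.
from nltk.stem import PorterStemmer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

stemmer = PorterStemmer()

def stem_tokens(text):
    # Lowercase, split on whitespace, and stem each token.
    return [stemmer.stem(tok) for tok in text.lower().split()]

questions = [
    ("how do i find peer reviewed articles on nursing", "reference"),
    ("can you help me locate primary sources for my paper", "reference"),
    ("what databases cover educational psychology", "reference"),
    ("how do i cite a website in apa style", "reference"),
    ("what time does the library close today", "non-reference"),
    ("i forgot my password how do i reset it", "non-reference"),
    ("can i renew my books online", "non-reference"),
    ("where is the printer on the second floor", "non-reference"),
] * 4  # repeated to give the split enough examples per class

texts, labels = zip(*questions)
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=42, stratify=labels)

model = make_pipeline(
    TfidfVectorizer(tokenizer=stem_tokens, token_pattern=None),
    RandomForestClassifier(n_estimators=100, random_state=42),
)
model.fit(X_train, y_train)

cm = confusion_matrix(y_test, model.predict(X_test))
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
print(cm)
```

As the conclusion suggests, a model like this could sit in front of a chat plugin and route reference questions to librarians while answering routine ones automatically.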
Authors:Andrea Miller-Nesbitt Pages: 145 - 147 Abstract: A Review of: Brzustowicz, R. (2023). From ChatGPT to CatGPT: The Implications of Artificial Intelligence on Library Cataloging. Information Technology and Libraries, 42(3). https://doi.org/10.5860/ital.v42i3.16295 Objective – To evaluate the potential of ChatGPT as a tool for improving efficiency and accuracy in cataloguing library records. Design – Observational, descriptive study. Setting – Online, using ChatGPT and the WorldCat catalogue. Subject – The Large Language Model (LLM) ChatGPT. Methods – Prompting ChatGPT to create MARC records for items in different formats and languages and comparing the ChatGPT derived records versus those obtained from the WorldCat catalogue. Main Results – ChatGPT was able to generate MARC records, but the accuracy of the records was questionable, despite the author’s claims. Conclusion – Based on the results of this study, the author concludes that using ChatGPT to streamline the process of cataloguing could allow library staff to focus time and energy on other types of work. However, the results presented suggest that ChatGPT introduces significant errors in the MARC records created, thereby requiring additional time for cataloguers to correct the error-laden records. The author correctly stresses that if ChatGPT were used to assist with cataloguing, it would remain important for professionals to check the records for completion and accuracy. PubDate: 2024-06-14 DOI: 10.18438/eblip30524 Issue No:Vol. 19, No. 2 (2024)
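The completeness check the reviewer calls for — a professional verifying an AI-generated record before accepting it — could be partially automated. This sketch treats a MARC record as a simple tag-to-value mapping and flags missing core fields; the record and the "required" tag list are illustrative examples, not cataloguing rules from the study (real MARC parsing would use a library such as pymarc):

```python
# Illustrative check of an AI-generated MARC record for required fields.
# The record and the required-tag list are invented examples; real
# cataloguing rules and real MARC parsing are far more involved.
REQUIRED_TAGS = {
    "008": "fixed-length data elements",
    "245": "title statement",
    "264": "publication statement",
    "336": "content type",
}

def missing_fields(record: dict) -> list:
    """Return the names of required MARC fields absent from the record."""
    return [name for tag, name in REQUIRED_TAGS.items() if tag not in record]

chatgpt_record = {  # hypothetical ChatGPT output, tag -> field value
    "100": "1# $a Doe, Jane.",
    "245": "10 $a An invented title : $b a subtitle.",
    "264": "#1 $a Somewhere : $b Example Press, $c 2023.",
}
print(missing_fields(chatgpt_record))  # flags the 008 and 336 fields for review
```

A check like this only catches omissions; the content errors the reviewer describes would still require a cataloguer's judgement.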