85th Annual Meeting of the Association for Information Science & Technology | Oct. 29 – Nov. 1, 2022 | Pittsburgh, PA. Author(s) retain copyright, but ASIS&T receives an exclusive publication license.

How to Assess Student Learning in Information Science: Exploratory Evidence from Large College Courses

Sung, SeoYoon — Cornell University, USA | seoyoon.sung@cornell.edu
Alon, Lilach — Cornell University, USA | la367@cornell.edu
Cho, Ji Yong — Cornell University, USA | jc3374@cornell.edu
Kizilcec, Rene — Cornell University, USA | kizilcec@cornell.edu

ABSTRACT
Assessments in higher education can help instructors understand their students' incoming knowledge and learning gains. Constructing and validating assessments is especially challenging in emergent, fast-growing interdisciplinary STEM fields such as information science. Unlike more traditional STEM fields like physics and mathematics, information science builds on cross-disciplinary connections with multiple pools of domain knowledge. This research investigates how to construct and use assessments to effectively capture knowledge and skills in information science. Our study was conducted in five large information science courses at a U.S. research university on data analytics, web development, visualization, technology design, and natural language processing. We worked with domain experts to develop assessment items at three levels of knowledge: declarative, applied, and transferred (Anderson et al., 2001). The assessments were administered in a pre-post design over two semesters with 1,202 students, with an evidence-based revision of the assessments between the semesters. Our initial findings suggest that some knowledge levels (applied and transferred) may be more suitable for assessing student learning in information science courses. The findings have implications for assessment in emergent interdisciplinary education and inform our plans to develop constructive assessment methods for information science education.

KEYWORDS
Assessment of student learning, Instrument design, Knowledge assessments, Interdisciplinary STEM education

INTRODUCTION
For college courses in Science, Technology, Engineering, and Mathematics (STEM) disciplines, researchers have developed and validated assessments to evaluate student learning (Lasry, Rosenfield, Dedic, Dahan, & Reshef, 2011; Tew & Guzdial, 2010). Standard knowledge assessments are typically administered before and after a learning intervention in a pre-post design (Gao, Li, Shen, & Sun, 2020) to measure students' prior knowledge and their learning gains from a learning activity or an entire class (Stoen, McDaniel, Frey, Hynes, & Cahill, 2020). Well-crafted assessments are perceived as essential for the growth of disciplines (Parker, Guzdial, & Engleman, 2016). This is particularly important for emerging, fast-growing STEM disciplines such as information science. Assessing student knowledge across diverse courses within a discipline helps gauge incoming knowledge in a subject area (Tew & Guzdial, 2010) and diagnose whether an educational program achieves its curricular goals (Gao et al., 2020; Goldman et al., 2010). Knowledge assessments are also useful for instructors to identify gaps in student knowledge, which can give rise to academic achievement gaps between sociodemographic groups (Martinková et al., 2017; Wright et al., 2016).
Through evaluation of carefully designed assessments that scale across courses in a given field, instructors and institutions can provide targeted instruction and resources to fill these gaps. Assessing students' incoming knowledge is particularly important in STEM education, where achievement gaps based on students' race, gender, and social status have been observed (Theobald et al., 2020).

Devising an effective and valid instrument presents several challenges, both in its construction and in its subsequent implementation. A major challenge in constructing assessments is to create a valid measure of knowledge that accurately assesses what it intends to measure (Parker et al., 2016; Stoen et al., 2020). Test makers or instructors first need to decide on the core concepts covered in a topic; and even if they agree on the concepts and their scope, they need to consider how the measurements will address diverse levels and dimensions of learning. For example, the assessment should cover a range of cognitive levels of learning, from basic knowledge (e.g., remembering) to more complex skills (e.g., evaluating, creating) (AERA, 2014; Anderson et al., 2001). Knowledge assessments that evaluate different levels of learning can help move student learning toward a deeper understanding of a subject area (Wright et al., 2016). Once the core concepts are selected, another challenge is the time-intensive process of collecting initial assessment data to check validity and reliability (Savinainen & Scott, 2002). A major challenge in implementing assessments in college courses is to administer them in a timely and efficient manner so that instructors can quickly receive feedback.
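The pre-post design described above is commonly summarized with a normalized learning gain: the fraction of the possible improvement that a student actually achieves between the pre-test and the post-test. The following is a minimal Python sketch of that computation, not the analysis used in this study; the scores and variable names are hypothetical, and scores are assumed to be expressed as fractions of the maximum score.

import numpy as np

def normalized_gain(pre, post):
    # Hake-style normalized gain: (post - pre) / (1 - pre).
    # A value of 0.5 means the student closed half of the gap between
    # their pre-test score and a perfect score.
    pre = np.asarray(pre, dtype=float)
    post = np.asarray(post, dtype=float)
    return (post - pre) / (1.0 - pre)

# Hypothetical pre/post scores (fractions of the maximum) for five students.
pre_scores = [0.40, 0.55, 0.20, 0.70, 0.35]
post_scores = [0.70, 0.80, 0.45, 0.85, 0.60]

gains = normalized_gain(pre_scores, post_scores)
print("Per-student normalized gains:", np.round(gains, 2))
print("Average normalized gain:", round(float(gains.mean()), 2))

Note that the gain is undefined for a student with a perfect pre-test score; in practice such cases are typically excluded or handled separately.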
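The reliability checking of newly collected assessment data mentioned above is often performed with an internal-consistency statistic such as Cronbach's alpha. Below is an illustrative sketch of that statistic on a made-up students-by-items score matrix; it is offered only as an example of the kind of check involved, not as the procedure followed in this study.

import numpy as np

def cronbach_alpha(item_scores):
    # Cronbach's alpha for a students x items score matrix (e.g., 0/1 for
    # incorrect/correct answers). Higher values indicate that the items
    # measure a more internally consistent construct.
    X = np.asarray(item_scores, dtype=float)
    n_items = X.shape[1]
    item_variances = X.var(axis=0, ddof=1)      # variance of each item
    total_variance = X.sum(axis=1).var(ddof=1)  # variance of students' total scores
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses: 6 students x 4 assessment items, scored 0/1.
responses = np.array([
    [1, 1, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 0, 1, 1],
    [1, 1, 1, 0],
])
print("Cronbach's alpha:", round(cronbach_alpha(responses), 2))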