Ranking search results in library information systems — considering ranking approaches adapted from web search engines

Christiane Behnert & Dirk Lewandowski
Hamburg University of Applied Sciences
Department of Information
Finkenau 35
22081 Hamburg
Germany

This is a preprint of an article accepted for publication in The Journal of Academic Librarianship (2015). Please cite as:

Behnert, C., & Lewandowski, D. (2015). Ranking search results in library information systems — Considering ranking approaches adapted from web search engines. The Journal of Academic Librarianship, 41(6), 725–735. https://doi.org/10.1016/j.acalib.2015.07.010

Abstract

For an information retrieval system to be successful, it must be able to rank search results. As web search engines are the most frequently used and, in terms of ranking functionality, the most advanced retrieval systems in existence, the principles they are based on and the strategies they use can be applied advantageously in the library context. We categorize ranking factors into six groups: (1) text statistics, (2) popularity, (3) freshness, (4) locality and availability, (5) content properties, and (6) user background. We discuss the basic concepts and assumptions underlying these ranking factors and propose potential implementations in the library context. We recommend that libraries not only apply selected ranking factors, as existing library information systems already do, but also systematically test which ranking factors best suit their systems. We argue for a user-centric view on ranking because, in the end, ranking should serve the user, and user preferences may vary across contexts.

Keywords

Library information systems; OPAC; Relevance ranking; Ranking factors; Search results
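To make the idea of combining the six factor groups concrete, the sketch below shows one simple way a library system could merge per-factor scores into a single ranking score: a weighted linear combination. All field names and weight values here are illustrative assumptions, not taken from the article; the article's recommendation is precisely that each library should test which factors and weightings suit its own system and users.

```python
from dataclasses import dataclass

@dataclass
class FactorScores:
    """Normalized (0-1) scores for one search result.
    Field names mirror the six factor groups; the normalization
    scheme is an assumption for this sketch."""
    text_statistics: float
    popularity: float
    freshness: float
    locality_availability: float
    content_properties: float
    user_background: float

# Hypothetical weights (summing to 1.0); in practice these would be
# tuned empirically, e.g. via retrieval tests with real users.
WEIGHTS = {
    "text_statistics": 0.35,
    "popularity": 0.20,
    "freshness": 0.10,
    "locality_availability": 0.15,
    "content_properties": 0.10,
    "user_background": 0.10,
}

def rank_score(scores: FactorScores) -> float:
    """Weighted linear combination of the six factor-group scores."""
    return sum(w * getattr(scores, name) for name, w in WEIGHTS.items())

def rank(results: list[tuple[str, FactorScores]]) -> list[str]:
    """Order result identifiers by descending combined score."""
    ranked = sorted(results, key=lambda r: rank_score(r[1]), reverse=True)
    return [result_id for result_id, _ in ranked]
```

A linear combination is only one possible model; testing different factor sets, as the article recommends, amounts to varying which weights are nonzero and observing the effect on user-judged result quality.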