Edson Marchetti da Silva

Comparing the use of full text search between a conventional IR System and a DBMS

Abstract

Although database management systems (DBMS) and information retrieval systems (IRS) are typically used to solve problems in distinct domains, over the last decade both have incorporated internal functions for full-text indexing and best-match relevance ranking drawn from IR models. However, a search of the Capes Portal, with terms in both Portuguese and English, found no related work in any of the scientific research databases it covers. In this context, this article analyzes the implementation and use of full-text indexing in a DBMS by comparing it with an IRS. The DBMS chosen for the experiments was Postgres 9.4, while the IRS selected was the Terrier IR Platform v3.6, developed and maintained by the School of Computing Science at the University of Glasgow. The corpus comprises 1,260 scientific articles in PDF, which were first converted to plain text with the Apache Foundation's Tika application so that both tools worked on the same textual content. The objective was to comparatively evaluate the advantages and disadvantages of these platforms as aids for searching information contained in collections of textual documents. The results show that, in the DBMS context, keyword search over unstructured textual attributes can be used in an integrated way with queries over structured attributes, with the advantages of simplicity of use and efficient response times. This strategy is therefore well suited to information systems that work on such hybrid content.

1. Introduction

With the increasing proliferation of documents in digital format, it has become increasingly difficult to organize them by subject matter so as to facilitate a subsequent search.
Borges (2009) outlines the evolution of indexing from its origins as a manual process to the automatic way it is performed today, and describes the main challenges involved. The difficulty arises from an intrinsic characteristic of documents, which are usually written as running text and stored in binary files such as DOC or PDF. Such textual content has no predefined structure that would allow it to be represented semantically and searched exactly. The alternative is to process these documents by converting them from binary to text format and then applying text-processing techniques that break them into normalized terms, which are indexed in an inverted list. This structure enables a term-based search that finds, among the documents in the collection, those containing given terms. The search process applies several techniques that return a list of documents ordered by relevance, based on the best match between the search descriptors and those of the documents in the collection. This does not mean, however, that all returned documents are relevant to the user's search. In order to mitigate this
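The pipeline described above — normalizing terms, recording them in an inverted list, and ranking by best match — can be sketched in a few lines. This is a toy illustration, not the method of either system studied: normalization here is just lowercasing, and the score is a plain term-frequency sum, whereas a real IRS such as Terrier also applies stemming, stopword removal, and weighting models like BM25.

```python
from collections import defaultdict

def tokenize(text):
    """Normalize text into terms: lowercase, alphanumeric characters only."""
    return "".join(c.lower() if c.isalnum() else " " for c in text).split()

def build_index(docs):
    """Build an inverted list mapping each term to {doc_id: term frequency}."""
    index = defaultdict(dict)
    for doc_id, text in docs.items():
        for term in tokenize(text):
            index[term][doc_id] = index[term].get(doc_id, 0) + 1
    return index

def search(index, query):
    """Rank documents by summed frequency of matching query terms (crude best match)."""
    scores = defaultdict(int)
    for term in set(tokenize(query)):
        for doc_id, tf in index.get(term, {}).items():
            scores[doc_id] += tf
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical three-document collection for illustration.
docs = {
    1: "Full text search in a DBMS",
    2: "Search in information retrieval systems ranks documents by relevance",
    3: "Relational databases store structured attributes",
}
index = build_index(docs)
print(search(index, "relevance search"))  # → [2, 1]
```

Document 2 matches both query terms and document 1 only one, so it ranks first; document 3 matches nothing and is not returned at all — mirroring, in miniature, the ordered-by-relevance result list the text describes.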