Model Checking Using Tabled Rewriting*
(for the IJCAR 2004 Doctoral Programme)

Zhiyao Liang
Advisor: Rakesh M. Verma
Computer Science Department, The University of Houston,
501 PGH, Houston, Texas 77204-3010, USA
{zliang,rmverma}@cs.uh.edu

Abstract. LRR [3] is a rewriting system developed at the Computer Science Department of the University of Houston. LRR has two subsystems: Smaran (for tabled rewriting) and TGR (for untabled rewriting). It can utilize the history of a computation to eliminate redundant work in the process of reducing terms to their normal forms. However, the practicality of using LRR as a framework for implementing model checking had not been evaluated before. We have implemented LTL and CTL model checking algorithms using LRR. The results of this research so far show that LRR provides a convenient programming framework, and that in some respects the model checker already achieves efficiency comparable to that of leading model checkers such as SPIN. The model checker also has the potential to be improved significantly.

1 Introduction

[Model Checking] Model checking [1] is a technique for verifying whether a system satisfies a property expressed as a temporal formula. Since many systems can be described as finite-state models, model checking can be applied in many areas. Model checking is relatively new compared with traditional formal verification techniques such as theorem proving, and it has several advantages over them: for example, it is completely automatic, and no special expertise is required to use a model checker. However, model checking must also cope with challenges such as the state explosion problem. In order to explore different model checking algorithms, it is a practical concern for programmers to have a convenient programming framework.
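To make the verification task concrete, the following is a minimal sketch (a hypothetical illustration, not the LRR-based implementation described in this paper) of explicit-state CTL model checking: the set of states satisfying EF p on a small Kripke structure is computed as a least fixed point.

```python
# Hypothetical sketch: compute the states of a Kripke structure that
# satisfy the CTL formula EF p, by iterating to a least fixed point.

def check_EF(states, transitions, labels, prop):
    """Return the set of states satisfying EF prop."""
    # Start with the states where prop holds directly.
    sat = {s for s in states if prop in labels.get(s, set())}
    changed = True
    while changed:
        changed = False
        for s in states:
            # A state satisfies EF prop if some successor already does.
            if s not in sat and any(t in sat for t in transitions.get(s, ())):
                sat.add(s)
                changed = True
    return sat

# A 4-state model 0 -> 1 -> 2 -> 3, where p holds only in state 3.
states = {0, 1, 2, 3}
transitions = {0: [1], 1: [2], 2: [3], 3: [3]}
labels = {3: {"p"}}
print(sorted(check_EF(states, transitions, labels, "p")))  # [0, 1, 2, 3]
```

The backward fixed-point iteration shown here is the standard textbook scheme; a real checker must also address the state explosion problem mentioned above, e.g. by symbolic or tabled representations.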
[Normalization Systems] Rewriting systems [5, 8] provide an elegant framework for symbolic computation, theorem proving, equational reasoning, and equational logic programming. These applications share the goal of finding the normal forms of one or more terms, so efficient normalization algorithms are crucial to all of them. The congruence closure based normalization algorithm (CCNA) [7] stores the history of its computations in a compact data

* Research partially supported by NSF grant CCF 036475
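The idea of reusing computation history during normalization can be sketched as follows. This is an illustrative memoized rewriter for Peano addition, in the spirit of tabled rewriting; the term encoding and rules are assumptions for the example and are not LRR's or CCNA's actual representation.

```python
# Illustrative sketch of tabled (memoized) normalization.
# Terms are nested tuples ("f", arg1, ...); leaves are strings.

table = {}  # history: maps a seen term to its normal form

def rewrite_step(term):
    """Rules for Peano addition: add(0, y) -> y; add(s(x), y) -> s(add(x, y))."""
    if isinstance(term, tuple) and term[0] == "add":
        x, y = term[1], term[2]
        if x == "0":
            return y
        if isinstance(x, tuple) and x[0] == "s":
            return ("s", ("add", x[1], y))
    return None  # no rule applies at the root

def normalize(term):
    if term in table:  # reuse history to skip redundant work
        return table[term]
    result = term
    if isinstance(term, tuple):
        # Normalize subterms first (innermost strategy).
        result = (term[0],) + tuple(normalize(a) for a in term[1:])
    step = rewrite_step(result)
    while step is not None:
        result = normalize(step)
        step = rewrite_step(result)
    table[term] = result
    return result

# add(s(s(0)), s(0)) normalizes to s(s(s(0))).
two_plus_one = ("add", ("s", ("s", "0")), ("s", "0"))
print(normalize(two_plus_one))  # ('s', ('s', ('s', '0')))
```

Once a term's normal form is in the table, any later reduction that reaches the same term returns immediately, which is the kind of redundancy elimination that tabling provides.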