Poster: Regret Minimizing Audits

Jeremiah Blocki (Student), Carnegie Mellon University
Nicolas Christin (Faculty), Carnegie Mellon University
Anupam Datta (Faculty), Carnegie Mellon University
Arunesh Sinha (Student), Carnegie Mellon University

ABSTRACT

Audits complement access control and are essential for enforcing privacy and security policies in many situations. The importance of audit as an a posteriori enforcement mechanism has been recognized in the computer security literature. For example, Lampson [1] takes the position that audit logs that record relevant evidence during system execution can be used to detect violations of policy, establish accountability and punish the violators. More recently, Weitzner et al. [2] also recognize the importance of audit and accountability, and the inadequacy of preventive access control mechanisms as the sole basis for privacy protection in today’s open information environment. However, unlike access control, which has been the subject of a significant body of foundational work, there is comparatively little work on the foundations of audit.

Our focus is on policies that cannot be mechanically enforced in their entirety. Privacy regulations, such as HIPAA for electronic medical records, provide one set of relevant policies of this form. For example, HIPAA allows transmission of protected health information about an individual from a hospital to a law enforcement agency if the hospital believes that the death of the individual was suspicious. Such beliefs cannot, in general, be checked mechanically either at the time of transmission or in an a posteriori audit; the checking process requires human auditors to inspect evidence recorded on audit logs. In practice, organizations like hospitals use ad hoc audits in conjunction with access control mechanisms to protect patient privacy.
Typically, the access control policies are quite permissive: all employees who might need patient information to perform activities related to treatment, payment and operations may be granted access to patient records. These permissive policies are necessary to ensure that no legitimate access request is ever denied, as denying such requests could have adverse consequences on the quality of patient care. Unfortunately, a permissive access control regime opens up the possibility of records being inappropriately accessed and transmitted. Audit mechanisms help detect such violations of policy. This is achieved by recording accesses made by employees in an audit log that is then examined by human auditors to determine whether accesses and transmissions were appropriate and to hold individuals accountable for violating policy. Recent studies reveal that many policy violations occur in the real world as employees inappropriately access records of celebrities and family members, motivated by general curiosity, financial gain and other considerations [3]. Thus, there is a pressing need to develop audit mechanisms with well-understood properties that effectively detect policy violations.

This work presents the first principled learning-theoretic foundation for audits of this form. Our first contribution is a game-theoretic model that captures the interaction between the defender (e.g., hospital auditors) and the adversary (e.g., hospital employees). The model takes pragmatic considerations into account, in particular, the periodic nature of audits, a budget that constrains the number of actions that the defender can inspect (thus reflecting the imperfect nature of audit-based enforcement), and a loss function that captures the economic impact of detected and missed violations on the organization. We assume that the adversary is worst-case, as is standard in other areas of computer security.
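As a concrete illustration of how a defender might adapt a fixed inspection budget across repeated audit rounds, the sketch below applies a standard multiplicative-weights update over action types. This is only an assumed illustration of the general learning-theoretic setting, not the mechanism proposed in this work; the function name, parameters, and loss encoding are all hypothetical.

```python
# Illustrative sketch only (not this work's audit mechanism): a defender
# repeatedly divides inspection effort across action types, observes a
# per-type loss in [0, 1] each audit round, and applies a standard
# multiplicative-weights update. All names and parameters are hypothetical.

def run_audit_rounds(losses_per_round, eta=0.1):
    """losses_per_round: list of dicts, one per audit round, mapping each
    action type to the loss in [0, 1] incurred by inspections of that type.
    Returns the inspection distribution the defender used in each round."""
    types = list(losses_per_round[0].keys())
    weights = {t: 1.0 for t in types}
    history = []
    for losses in losses_per_round:
        total = sum(weights.values())
        history.append({t: weights[t] / total for t in types})
        # Down-weight action types whose inspections incurred high loss,
        # shifting future inspection budget toward more effective types.
        for t in types:
            weights[t] *= (1.0 - eta) ** losses[t]
    return history
```

Against any (possibly adversarial) sequence of losses, updates of this kind keep the defender's cumulative loss close to that of the best fixed allocation in hindsight, which is the flavor of guarantee that regret-based audit mechanisms aim for.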
We also formulate a desirable property of the audit mechanism in this model based on the concept of regret in learning theory [4]. Our second contribution is a novel audit mechanism that provably minimizes regret for the defender. The mechanism learns from experience and provides operational guidance to the human auditor about which, and how many, of the accesses to inspect. The regret bound is significantly better than prior results in the learning literature.

Overview of Results

Mirroring the periodic nature of audits in practice, we use a repeated game model [5] that proceeds in rounds. A round represents an audit cycle and, depending on the application scenario, could be a day, a week or even a quarter.

Adversary model: In each round, the adversary performs a set of actions (e.g., accessing patient records), of which a subset violates policy. Actions are classified into types. For example, accessing celebrity records could be a different type of action from accessing non-celebrity records. The adversary's capabilities are defined by parameters that impose upper bounds on the number of actions of each type that she can perform in any round. We place no additional restrictions on the adversary's behavior. In particular, we do not assume that the adversary violates policy following a fixed probability distribution; nor do we assume that she is