A design protocol for research impact evaluation: Social simulation of the Mekong region

Alex Smajgl (1) and John Ward (2)
1 CSIRO Ecosystem Sciences, Townsville, Australia
2 CSIRO Ecosystem Sciences, Brisbane, Australia
Alex.smajgl@csiro.au

Abstract. Social simulation methods are increasingly implemented in participatory processes to facilitate learning among stakeholders. Such research is increasingly scrutinised to evaluate its impacts. However, traditional evaluation techniques struggle with the diversity of goals and methods in participatory social simulation. A greater problem is that participatory research does not occur in a vacuum, so distilling and attributing its specific contributions is a challenge. This paper presents a design protocol for an agent-based social simulation process in the Mekong region and aims to contribute to the emerging discussion on designing and testing evaluation methods for participatory processes.

1 Introduction

The outcomes of participatory research, and participatory modelling in particular, are increasingly scrutinised to assess their influence on decision-making processes [1, 2]. Evaluations are essential to assess the effectiveness of a specific participatory technique and to compare the relative effectiveness of methodological variants. However, participatory processes can only be established if a positive attitude towards learning exists among stakeholders. In most cases of participatory modelling, the modelling exercise is one of several potentially interacting factors [1], increasing the difficulty of assigning specific contributions to a particular participatory methodological element. Deliberative participation does not exist in isolation, and the lack of a reference 'parallel universe' that would allow a formal comparison with and without participatory modelling is the essential problem, requiring an a priori design for monitoring and evaluating research impacts.
Non-participatory research is generally characterised by fewer stakeholder interactions, allowing more controlled monitoring. In a meta-study, Boaz et al. [3] reviewed 156 research publications and identified 17 categories of applied data-gathering methods. In order of frequency, ex-post tracing (101 cases) was the most commonly applied method for eliciting data, followed by semi-structured interviews (57), case study analysis (56), documentary analysis (45), publication-related analysis (37), and surveys (30). Research impacts and data interpretation were evaluated according to 14 different types of frameworks, the most common of which relied on economic metrics. Kristjanson & Thornton [4] focused on studies undertaken by the International