MobiTest: A Cross-Platform Tool for Testing Mobile Applications

Ian Bayley, Derek Flood, Rachel Harrison, Clare Martin
Oxford Brookes University, [ibayley, derek.flood, rachel.harrison, cemartin]@brookes.ac.uk

Abstract— Testing is an essential part of the software development lifecycle. However, it can cost a great deal of time and money to perform. For mobile applications, this problem is further exacerbated by the need to develop apps in a short time-span and for multiple platforms. This paper proposes MobiTest, a cross-platform automated testing tool for mobile applications, which uses a domain-specific language for mobile interfaces. With it, developers can define a single suite of tests that can then be run for the same application on multiple platforms simultaneously, with considerable savings in time and money.

Keywords – Mobile Application; Testing; MobiTest.

I. INTRODUCTION

The increasing prevalence of mobile applications (hereafter, apps) continues as the use of mobile phones becomes ubiquitous. By the end of 2010, there were an estimated 5.3 billion mobile subscriptions worldwide; in developed countries there are on average 116 subscriptions for every 100 inhabitants [1]. Apps are typically developed in a relatively short time span and on low budgets, often because the unit price of an app is very small or zero. This appears to greatly diminish the usability of many of the apps that are sold to users, which is unfortunate because a recent survey [2] identified usability as one of the most important factors when selecting a mobile app.

The annual cost of an inadequate infrastructure for testing in the US is estimated to range from $22.2 billion to $59.5 billion [3]. This cost is partly borne by users, in the form of strategies to avoid and mitigate the consequences of errors. The remainder is absorbed by the software developers themselves, who have to compensate for inadequate tools and methods.
The absorbed cost is even higher when one takes into account the damage that low software quality can do to the reputation of the producer.

The problems noted above are further exacerbated by the need to target multiple platforms at once. In particular, a test suite written for one platform must be rewritten for every other platform on which it is required. This problem has been addressed in the desktop domain through the use of the USer Interface eXtensible Markup Language (USIXML) [12], which allows developers to describe a user interface in a common language that can then be translated to any platform.

This paper proposes a multi-platform testing tool that takes a description of the tests to be performed on an app and generates a test suite for every platform on which the app is to be tested. Consequently, the tests need only be specified once. They are described in a simple language, specialised to the domain of mobile devices. Here we concentrate on GUI testing, but the ideas expressed here could be extended to other forms of testing at a later date.

The rest of this paper is structured as follows. Section II details the related work of this research. Section III outlines our research objectives. Section IV provides an overview of the MobiTest tool. Section V highlights some of the challenges for implementation. In Section VI, the plan for progression is detailed, and Section VII concludes this paper.

II. RELATED WORK

A. Software Testing

In The Mythical Man Month, Brooks [4] says that he assigns half of his development time to testing. This includes both component testing (of individual elements of the system) and system testing (of the complete system). His advice highlights the importance of testing: if it is not done adequately, the results can be very serious or, in safety-critical systems, even fatal.
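The paper does not give the concrete syntax of such a test description language. As a purely hypothetical illustration of the idea of specifying tests once and generating platform-specific suites, the sketch below translates an abstract "tap" step into invented per-platform commands (every name here, such as `Tap`, `BACKENDS`, and the command strings, is made up for this sketch and does not come from MobiTest):

```python
# Hypothetical sketch: one abstract test suite, several platform back-ends.
# All names and command formats are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Tap:
    """One platform-independent test step: tap the widget with this label."""
    label: str

# One translation rule per target platform (invented command syntax).
BACKENDS = {
    "android": lambda step: f'adb_tap(widget="{step.label}")',
    "ios":     lambda step: f'ui_automation.tapButton("{step.label}")',
}

def generate(suite, platform):
    """Translate an abstract suite into a platform-specific command list."""
    translate = BACKENDS[platform]
    return [translate(step) for step in suite]

suite = [Tap("Login"), Tap("OK")]
for platform in BACKENDS:
    print(platform, generate(suite, platform))
```

The point of the sketch is only that the suite itself is written once; adding a platform means adding one entry to the translation table, not rewriting the tests.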
The waterfall model [5], one of the first software development methodologies, proposed that the testing phase should happen after the implementation phase has been completed. In contrast, Beck [6] proposes that the two phases be more tightly coupled, advocating the use of Test Driven Development (TDD). TDD involves writing the tests before writing the code, then executing the tests, and then fixing the code if a test has failed. Because code is written in small increments, this enables the developer to know exactly where the failing code is. It also forces the developer to think continually about the design of the system. The collection of tests thereby accumulated can be run automatically whenever retesting is required.

George and Williams [7] found that TDD produced software that passed 18% more black-box tests than software built using the waterfall model. However, this higher pass rate comes at the cost of development time, which is longer by 16%. Whichever approach is adopted, the use of automation reduces the time taken for testing. The alternative of manual testing is not only time-consuming, but also error-prone.

Copyright (c) IARIA, 2012. ISBN: 978-1-61208-230-1
ICSEA 2012 : The Seventh International Conference on Software Engineering Advances
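The TDD cycle described in the related work above can be sketched in a few lines of Python. This is a minimal illustration of the write-test-first discipline, not an example taken from the paper; the `discount` function and its test are invented for this sketch:

```python
# Minimal illustration of the TDD cycle (invented example, not from the paper).
import unittest

# Step 1 ("red"): write the test first. When this test is first run,
# discount() does not yet exist, so the test fails.
class TestDiscount(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(discount(200), 180)

# Step 2 ("green"): write just enough code to make the test pass.
def discount(price):
    """Apply a 10% discount to a price (deliberately tiny example)."""
    return price * 0.9

# Step 3: the accumulated tests can be rerun automatically on every change.
result = unittest.TextTestRunner().run(
    unittest.TestLoader().loadTestsFromTestCase(TestDiscount))
```

Each small increment is verified immediately, which is what lets the developer localise a failure to the code just written.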