Measuring the quality of governmental websites in a controlled versus an online
setting with the ‘Website Evaluation Questionnaire’
Sanne Elling a,⁎, Leo Lentz a, Menno de Jong b, Huub van den Bergh a

a Utrecht Institute of Linguistics (UiL-OTS), Utrecht University, Trans 10, 3512 JK Utrecht, The Netherlands
b University of Twente, Faculty of Behavioral Sciences, Department of Technical and Professional Communication, P.O. Box 217, 7500 AE Enschede, The Netherlands
Article info
Available online 11 May 2012
Keywords: Governmental websites; Usability; Questionnaires; Website quality; Multidimensionality

Abstract
The quality of governmental websites is often measured with questionnaires that ask users for their opinions on
various aspects of the website. This article presents the Website Evaluation Questionnaire (WEQ), which was
specifically designed for the evaluation of governmental websites. The multidimensional structure of the WEQ
was tested in a controlled laboratory setting and in an online real-life setting. In two studies we analyzed the
underlying factor structure, the stability and reliability of this structure, and the sensitivity of the WEQ to quality
differences between websites. The WEQ proved to be a valid and reliable instrument with seven clearly distinct
dimensions. In the online setting higher correlations were found between the seven dimensions than in the
laboratory setting, and the WEQ was less sensitive to differences between websites. Two possible explanations
for this result are the divergent activities of online users on the website and the less attentive way in which
these users filled out the questionnaire. We advise relating online survey evaluations more closely to the actual
behavior of website users, for example, by including server log data in the analysis.
© 2012 Elsevier Inc. All rights reserved.
1. Introduction
The need to evaluate the quality of governmental websites is widely
acknowledged (Bertot & Jaeger, 2008; Loukis, Xenakis, & Charalabidis,
2010; Van Deursen & Van Dijk, 2009; Van Dijk, Pieterson, Van
Deursen, & Ebbers, 2007; Verdegem & Verleye, 2009; Welle Donker-
Kuijer, De Jong, & Lentz, 2010). Many different evaluation methods
may be used, varying from specific e-government quality models (e.g.,
Loukis et al., 2010; Magoutas, Halaris, & Mentzas, 2007) to more generic
usability methods originating from fields such as human–computer
interaction and document design. These more generic methods can
be divided into expert-focused and user-focused methods (Schriver,
1989). Expert-focused methods, such as scenario evaluation (De Jong
& Lentz, 2006) and heuristic evaluation (Welle Donker-Kuijer et al.,
2010), rely on the quality judgments of communication or subject-
matter experts. User-focused methods try to collect relevant data
among (potential) users of the website. Examples of user-focused
approaches are think-aloud usability testing (Elling, Lentz, & de Jong,
2011; Van den Haak, De Jong, & Schellens, 2007, 2009), user page
reviews (Elling, Lentz, & de Jong, 2012), and user surveys (Ozok,
2008). In the Handbook of Human–Computer Interaction the survey is
considered to be one of the most common and effective user-focused
evaluation methods in human–computer interaction contexts (Ozok,
2008). Indeed, many governmental organizations use surveys to collect
feedback from their users and thereby assess the quality of their
websites. Three possible functions of a survey evaluation are providing
an indication and diagnosis of problems on the website, benchmarking
between websites, and providing post-test ratings after an evaluation
procedure. A survey is an efficient evaluation method, as it can be
used for gathering web users' opinions in a cheap, fast, and easy way.
This, however, does not mean that survey evaluation of websites is
unproblematic. The quality of surveys on the Internet varies widely
(Couper, 2000; Couper & Miller, 2008). Many questionnaires seem to
lack a solid statistical basis and a justification for the choice of quality
dimensions and questions (Hornbæk, 2006). In this paper we present
the Website Evaluation Questionnaire (WEQ). This questionnaire can
be used for the evaluation of governmental and other informational
websites. We investigated the validity and the reliability of the WEQ
in two studies: the first in a controlled laboratory setting, and the
second in a real-life online setting. Before we discuss the research
questions and the design and results of the two studies, we will first
give an overview of issues related to measuring website quality and
discuss five questionnaires on website evaluation.
1.1. Laboratory and online settings
Surveys for evaluating the quality of websites can be administered in
several different situations and formats. Traditionally, survey questions
were answered face-to-face or with paper-and-pencil surveys,
which had to be physically distributed, filled out, returned, and
Government Information Quarterly 29 (2012) 383–393
⁎ Corresponding author.
E-mail addresses: s.elling@uu.nl (S. Elling), l.r.lentz@uu.nl (L. Lentz), m.d.t.dejong@utwente.nl (M. de Jong), h.vandenbergh@uu.nl (H. van den Bergh).
doi:10.1016/j.giq.2011.11.004