If ubiquitous computing (ubicomp) is to enhance physical environments, then early and accurate assessment of alternative solutions will be necessary to avoid the costly deployment of systems that fail to meet requirements. In the context of APEX, a project ending this month, we have developed a prototyping framework that combines a 3D application server with a behaviour modelling tool. The contribution of this framework is that it allows exhaustive analysis of the behaviour models that drive the prototype while at the same time enabling immersive exploration of a virtual environment simulating the proposed system. The development of prototypes is supported through three layers: a simulation layer (using OpenSimulator); a modelling layer (using CPN Tools); and a physical layer (using external devices and real users). The APEX framework allows movement between these layers to analyse different features, from user experience to user behaviour. The multi-layer approach makes it possible to express user behaviour in the modelling layer, provides a way to reduce the number of real users needed by adding simulated avatars, and supports user testing of hybrids of virtual and real components as well as exhaustive analysis. This paper demonstrates the approach by means of an example, placing particular emphasis on the simulation of virtual environments, low-cost prototyping, and the formal analysis capabilities.
Category Archives: Human-Computer Interaction (HCI)
The systematic analysis of interactive devices, to explore potential problems that might arise through their use, is an important element in the design of new systems and in the analysis of existing ones. The use of behavioural models, focusing on the system and supported by automated reasoning, to analyse human-computer interaction has been the subject of previous research [1, 2, 3, 4], but how best to fold the user into the analysis of system resilience remains an open problem. We are currently working on a technique that tackles this issue by narrowing the model checking analysis of interactive systems to cognitively plausible behaviours (those behaviours that users are likely to carry out because the device, the user’s training, or some other contextual factor leads them to do so).
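The idea of restricting exhaustive analysis to cognitively plausible behaviours can be illustrated with a minimal sketch. The device model, the action set and the plausibility predicate below are all illustrative assumptions, not the project’s actual formalism: a toy number-entry device is explored breadth-first, but user actions that fail a plausibility check (here, immediately reversing direction) are pruned from the state-space search.

```python
from collections import deque

# Toy interactive device: the state is a numeric value the user edits.
ACTIONS = ["inc", "dec", "confirm", "clear"]

def step(state, action):
    """Transition function for the toy device."""
    if action == "inc":
        return state + 1
    if action == "dec":
        return max(state - 1, 0)
    if action == "clear":
        return 0
    return state  # "confirm" leaves the value unchanged

def plausible(trace, action):
    """Illustrative plausibility filter: assume users rarely reverse
    direction immediately (an 'inc' straight after a 'dec', or vice versa)."""
    if trace and {trace[-1], action} == {"inc", "dec"}:
        return False
    return True

def explore(initial, max_depth):
    """Breadth-first state-space exploration restricted to plausible traces."""
    seen = set()
    frontier = deque([(initial, ())])
    while frontier:
        state, trace = frontier.popleft()
        seen.add(state)
        if len(trace) >= max_depth:
            continue
        for a in ACTIONS:
            if plausible(trace, a):
                frontier.append((step(state, a), trace + (a,)))
    return seen
```

In a real model checker the filter would be expressed as a constraint on the transition relation rather than in the search loop, but the effect is the same: properties are verified only over traces a user could plausibly produce.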
Model-based testing methods automate the generation of test cases from a model of the system under test. In the particular case of interactive systems, however, the fact that oracles typically describe the normative operation of a system presents a problem: being normative, the oracles do not capture the common mistakes that users make, or alternatives to the expected usage. Yet the quality of an interactive system is also linked to how it reacts to user error.
In the context of the PBGT (Pattern-Based GUI Testing) project, recently funded by FCT, we will focus on user errors (indeed, more generally, on unexpected user behaviours), and how they can be integrated into the model-based testing approach being developed in the project. To that end, we will explore the generation of test case mutations. Typical test cases express the expected “correct” user behaviour defined for each GUI pattern. The mutated test cases will express typical user errors, in order to test the GUI against erroneous user behaviour.
To achieve its stated goal, the task will develop an algorithm to carry out changes (mutations) to the test cases. These mutated test cases will capture the effect of the different types of errors on the interface. This will be done by building on previous work by members of the project, as described in the publication below.
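The kind of mutation algorithm envisaged can be sketched as follows. The mutation operators and event names are assumptions for illustration, not the project’s actual design: a test case is taken to be a sequence of GUI events, and each operator models a common user error (skipping a step, repeating a step, or performing adjacent steps in the wrong order).

```python
# Illustrative sketch of test case mutation. A test case is a tuple of
# GUI events; each operator yields mutants modelling one kind of user error.

def omit(tc):
    """User skips a step."""
    for i in range(len(tc)):
        yield tc[:i] + tc[i + 1:]

def repeat(tc):
    """User repeats a step (e.g. a double click on a button)."""
    for i in range(len(tc)):
        yield tc[:i + 1] + tc[i:]

def swap(tc):
    """User performs two adjacent steps in the wrong order."""
    for i in range(len(tc) - 1):
        yield tc[:i] + (tc[i + 1], tc[i]) + tc[i + 2:]

def mutate(tc, operators=(omit, repeat, swap)):
    """All distinct single-error mutants of a test case."""
    tc = tuple(tc)
    mutants = set()
    for op in operators:
        mutants.update(op(tc))
    mutants.discard(tc)  # a mutant identical to the original tests nothing new
    return mutants

# Hypothetical login test case for a GUI pattern:
login = ("fill_username", "fill_password", "click_login")
```

Running the GUI against each mutant then checks that erroneous user behaviour is handled gracefully (e.g. an error message rather than a crash or a silent wrong action), which is precisely the resilience property the normative oracles miss.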
A. Barbosa, A. Paiva and J.C. Campos. Test case generation from mutated task models. In ACM Symposium on Engineering Interactive Computing Systems (EICS 2011), pages 175–184. ACM, 2011.