Category Archives: Human-Computer Interaction (HCI)

Prototyping and Analysing Ubiquitous Computing Environments using Multiple Layers

If ubiquitous computing (ubicomp) is to enhance physical environments, then early and accurate assessment of alternative solutions will be necessary to avoid the costly deployment of systems that fail to meet requirements. In the context of APEX, a project ending this month, we have developed a prototyping framework that combines a 3D application server with a behaviour modelling tool. The contribution of this framework is that it allows exhaustive analysis of the behaviour models that drive the prototype while at the same time enabling immersive exploration of a virtual environment simulating the proposed system. The development of prototypes is supported through three layers: a simulation layer (using OpenSimulator); a modelling layer (using CPN Tools); and a physical layer (using external devices and real users). The APEX framework allows movement between these layers to analyse different features, from user experience to user behaviour. The multi-layer approach makes it possible to express user behaviour in the modelling layer, provides a way to reduce the number of real users needed by adding simulated avatars, and supports user testing of hybrids of virtual and real components as well as exhaustive analysis. This paper demonstrates the approach by means of an example, placing particular emphasis on the simulation of virtual environments, low-cost prototyping and the formal analysis capabilities.
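
To give a flavour of the multi-layer idea, the sketch below (Python, purely illustrative; all names are invented, and APEX itself couples CPN Tools with OpenSimulator rather than running code like this) shows how a single behaviour model can be driven by events from either simulated avatars or real users, with its actions forwarded to the physical layer:

    # All names invented; APEX couples CPN Tools with OpenSimulator,
    # it does not run code like this.

    class BehaviourModel:
        """Modelling layer: a small state machine standing in for a CPN model."""
        def __init__(self):
            self.state = "idle"
            self.transitions = {
                ("idle", "user_enters"): ("active", ["switch_display_on"]),
                ("active", "user_leaves"): ("idle", ["switch_display_off"]),
            }

        def step(self, event):
            self.state, actions = self.transitions.get(
                (self.state, event), (self.state, []))
            return actions

    class PhysicalLayer:
        """Physical layer: forwards actions to external devices (printed here)."""
        def perform(self, action):
            print("device:", action)

    def simulated_avatar():
        """Simulation layer: an avatar generating the events a real user would."""
        yield "user_enters"
        yield "user_leaves"

    model, devices = BehaviourModel(), PhysicalLayer()
    for event in simulated_avatar():   # events could equally come from real users
        for action in model.step(event):
            devices.perform(action)

The point of the sketch is that the modelling layer is oblivious to whether events originate in the simulation layer or the physical layer, which is what allows simulated avatars to stand in for real users.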

Analysing interactive devices based on information resource constraints

Analysis of the usability of an interactive system requires both an understanding of how the system is to be used and a means of assessing the system against that understanding. Such analytic assessments are particularly important in safety-critical systems, as latent vulnerabilities may exist which have negative consequences only in certain circumstances. Many existing approaches to assessment use tasks or scenarios to provide an explicit representation of their understanding of use. These normative user behaviours have the advantage that they clarify assumptions about how the system will be used, but have the disadvantage that they may exclude many plausible deviations from these norms. Assessments of how a design fails to support these user behaviours can be a matter of judgement based on individual experience rather than evidence. We present a systematic formal method for analysing interactive systems that is based on constraints rather than prescribed behaviour. These constraints capture precise assumptions about what information resources are used to perform actions. These resources may either reside in the system itself or be external to the system. The approach is applied to two different medical device designs, comparing two infusion pumps currently in common use in hospitals. Comparison of the two devices is based on these resource assumptions to assess consistency of interaction within the design of each device.
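
As a rough illustration of the underlying idea (a sketch only, not the formal method of the paper, and with invented names), the following check flags actions taken in states where their assumed information resources are not available:

    # Hedged sketch: flag actions that lack supporting information resources.
    # All names are illustrative, not taken from the paper's formal models.

    # Resources an action is assumed to need before a user would plausibly take it.
    required_resources = {
        "press_start": {"rate_displayed", "start_key_visible"},
        "confirm_rate": {"rate_displayed", "confirm_prompt"},
    }

    def unsupported_actions(trace):
        """trace: list of (action, visible_resources) pairs.
        Returns the actions taken without their required resources."""
        problems = []
        for action, visible in trace:
            missing = required_resources.get(action, set()) - visible
            if missing:
                problems.append((action, missing))
        return problems

    trace = [
        ("press_start", {"rate_displayed", "start_key_visible"}),
        ("confirm_rate", {"rate_displayed"}),   # confirm prompt not shown
    ]
    print(unsupported_actions(trace))  # [('confirm_rate', {'confirm_prompt'})]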

Reusing models and properties in the analysis of similar interactive devices

Variations between user interface designs are often relatively subtle and do not always become evident, even through relatively thorough user testing. Notwithstanding their subtlety, these differences may be important to the safety or usability of a device. To address this, we are using model checking techniques to systematically check a set of template properties that are designed to explore important interface characteristics. We have applied the approach to two devices designed to support similar tasks in a clinical setting (two infusion pumps). The analysis of the two infusion pumps provides solid evidence of areas of concern where redesign would reduce complexity and increase the reliability of the two designs.
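
By way of illustration, template properties of this kind are typically phrased in a temporal logic such as CTL and instantiated per device; the examples below use NuSMV-style syntax with placeholder variable names (they are not the properties from the paper):

    SPEC AG (act_a -> AX feedback_a)   -- every occurrence of action a produces perceivable feedback
    SPEC AG (EF idle)                  -- the device can always be returned to its idle state

Instantiating the same templates over models of both pumps is what makes the comparison systematic: any property that holds for one device but fails for the other points at an inconsistency worth examining.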

Folding the user into the analysis of system resilience

The systematic analysis of interactive devices to explore potential problems that might arise through their use provides an important element in the design of new systems or the analysis of existing systems. The use of behavioural models, focusing on the system and supported by automated reasoning, to analyse human-computer interaction has been the subject of previous research [1, 2, 3, 4], but how best to fold the user into the analysis of system resilience remains an open problem. We are currently working on a technique that tackles this issue by narrowing the model checking analysis of interactive systems to cognitively plausible behaviours (behaviours that users are likely to carry out because the device, the user’s training or some other contextual factor leads to them).
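
In model checking terms, this amounts to composing the device model with a filter that prunes implausible user actions before properties are checked. A toy sketch of that idea follows (Python, with a made-up plausibility test standing in for constraints derived from training, prompts and context; it is not the technique's actual implementation):

    from collections import deque

    def plausible(state, action):
        # Stand-in constraint: users rarely confirm when no prompt is showing.
        return not (action == "confirm" and "prompt" not in state)

    def step(state, action):
        # Minimal device model: 'press' raises a prompt, 'confirm' clears it.
        if action == "press":
            return state | {"prompt"}
        if action == "confirm" and "prompt" in state:
            return state - {"prompt"}
        return None  # action disabled in this state

    def reachable_plausible(initial):
        """Breadth-first exploration restricted to plausible actions."""
        seen, frontier = {initial}, deque([initial])
        while frontier:
            state = frontier.popleft()
            for action in ("press", "confirm"):
                if not plausible(state, action):
                    continue                      # prune implausible behaviour
                nxt = step(state, action)
                if nxt is not None and nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return seen

    print(reachable_plausible(frozenset()))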

Newly funded research: Test case generation and execution for misuses

Model-based testing methods automate the generation of test cases from a model of the system under test. In the particular case of interactive systems, however, the fact that oracles typically describe the normative operation of a system presents a problem: being normative, the oracles do not capture the common mistakes that users might make, or alternatives to the expected usage. Yet the quality of an interactive system is also linked to how it reacts to user error.

In the context of the PBGT (Pattern-Based GUI Testing) project, recently funded by FCT, we will focus on user errors (indeed, more generally, on unexpected user behaviours), and how they can be integrated into the model-based testing approach being developed in the project. To that end, we will explore the generation of test case mutations. Typical test cases express the expected “correct” user behaviour defined for each GUI pattern. The mutated test cases will express typical user errors, in order to test the GUI against erroneous user behaviour.

To achieve this, the task will develop an algorithm that applies changes (mutations) to the test cases. These mutated test cases will capture the effect of the different types of errors on the interface. This will be done by building on previous work by members of the project, as described in [1].
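
As a sketch of what such mutations might look like, viewing a test case as a sequence of GUI actions (the operator names and actions below are illustrative, not the project's actual operators, which will build on [1]):

    # Illustrative mutation operators over a test case, seen as a
    # sequence of GUI actions. All names are hypothetical.

    import random

    def omit_step(test):
        """User error: skipping a step."""
        i = random.randrange(len(test))
        return test[:i] + test[i+1:]

    def repeat_step(test):
        """User error: performing a step twice (e.g. a double click)."""
        i = random.randrange(len(test))
        return test[:i+1] + [test[i]] + test[i+1:]

    def swap_steps(test):
        """User error: performing two adjacent steps in the wrong order."""
        i = random.randrange(len(test) - 1)
        return test[:i] + [test[i+1], test[i]] + test[i+2:]

    test_case = ["open_form", "fill_rate", "confirm", "start_infusion"]
    for mutate in (omit_step, repeat_step, swap_steps):
        print(mutate.__name__, mutate(list(test_case)))

Each mutated sequence is then run against the GUI: the interesting outcomes are those where the interface silently accepts an erroneous sequence that the oracle would have classed as impossible.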

[1] A. Barbosa, A. Paiva and J. C. Campos. Test case generation from mutated task models. In ACM Symposium on Engineering Interactive Computing Systems (EICS 2011), pages 175–184. ACM, 2011.