Prototyping and Analysing Ubiquitous Computing Environments using Multiple Layers

If ubiquitous computing (ubicomp) is to enhance physical environments, then early and accurate assessment of alternative solutions will be necessary to avoid costly deployment of systems that fail to meet requirements. In the context of APEX, a project ending this month, we have developed a prototyping framework that combines a 3D Application Server with a behaviour modelling tool. The contribution of this framework is that it allows exhaustive analysis of the behaviour models that drive the prototype while at the same time enabling immersive exploration of a virtual environment simulating the proposed system. The development of prototypes is supported through three layers: a simulation layer (using OpenSimulator); a modelling layer (using CPN Tools); and a physical layer (using external devices and real users). The APEX framework allows movement between these layers to analyse different features, from user experience to user behaviour. The multi-layer approach makes it possible to express user behaviour in the modelling layer, provides a way to reduce the number of real users needed by adding simulated avatars, and supports user testing of hybrids of virtual and real components as well as exhaustive analysis. This paper demonstrates the approach by means of an example, placing particular emphasis on the simulation of virtual environments, low-cost prototyping and the formal analysis capabilities.

Post-Doc Position in Distributed Data Aggregation

We are opening a Post-Doc position at HASLab, supported by a one-year grant.

The grant is for a PhD holder to join our research line in Distributed Data Aggregation and Monitoring. The successful candidate is expected to conduct research on distributed aggregation protocols, e.g. Flow Updating and Extrema Propagation, and to explore improvements that allow faster and more reactive adaptation to changes in the network and in the monitored values. Prospective candidates should consult this announcement.
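To give a flavour of the area, the sketch below illustrates the core idea behind Extrema Propagation for network size estimation: each node draws a fixed-size vector of Exponential(1) random numbers, gossip exchanges keep the pointwise minimum, and the size is estimated from the sum of the minima. This is a single-process simulation written only for this announcement (the constants and names are ours), not the project's code.

    /* Illustrative sketch of the Extrema Propagation idea (hypothetical code,
     * not the project's implementation). */
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define K 100   /* vector length: larger K gives a smaller estimation error */

    /* Draw one Exponential(1) random number. */
    static double exp_rand(void) {
        double u = (rand() + 1.0) / (RAND_MAX + 2.0);   /* u in (0,1) */
        return -log(u);
    }

    /* Each node starts with its own vector of random numbers. */
    static void node_init(double x[K]) {
        for (int k = 0; k < K; k++) x[k] = exp_rand();
    }

    /* A message exchange keeps the pointwise minimum of the two vectors. */
    static void merge(double a[K], const double b[K]) {
        for (int k = 0; k < K; k++)
            if (b[k] < a[k]) a[k] = b[k];
    }

    /* Once a node holds the global minima, it can estimate the network size. */
    static double estimate(const double x[K]) {
        double s = 0.0;
        for (int k = 0; k < K; k++) s += x[k];
        return (K - 1) / s;
    }

    int main(void) {
        enum { N = 500 };              /* true network size (known only to the simulation) */
        static double node[N][K];
        for (int i = 0; i < N; i++) node_init(node[i]);

        /* Stand-in for gossip: merge every vector into node 0,
         * which then holds the global pointwise minima. */
        for (int i = 1; i < N; i++) merge(node[0], node[i]);

        printf("true size %d, estimated %.1f\n", N, estimate(node[0]));
        return 0;
    }

The estimate concentrates around the true size as K grows (the relative error is roughly 1/sqrt(K)), which is what makes this style of protocol attractive for lightweight distributed monitoring.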

HASLab has a very active research team in Large-Scale Distributed Systems, working in both Systems and Theory. Our laboratory is located on the Braga campus of Universidade do Minho, one of the top state-funded research universities in Portugal and in the top 3% of the Scimago IBE ranking. We are part of the Associate Laboratory INESC TEC. Braga is very close to the Gerês Natural Park and has good connections to Porto (and its vibrant cultural life) and to the international airport (a 30-minute shuttle bus ride to Braga). It also has a very competitive cost of living in comparison to other European cities.

This call is open from June 16th to June 30th, 2014.

For any questions, contact Carlos Baquero at cbm@di.uminho.pt.

Reasoning About Eventual Consistency with Replicated Data Types (Alexey Gotsman – IMDEA Software Institute)

HASLab Seminar

Abstract: Modern databases underlying large-scale Internet services guarantee immediate availability and tolerate network partitions at the expense of providing only weak forms of consistency, commonly dubbed eventual consistency. Even though the folklore notion of eventual consistency is very weak, in reality, the semantics and programming interfaces of systems implementing it can be very subtle. In particular, such systems can resolve conflicting updates using complex protocols, called replicated data types (aka CRDTs), and can allow varying kinds of anomalous behaviour.

So far the semantics provided by eventually consistent systems have been poorly understood. I will present a novel framework for their formal specification and discuss how recent nontrivial replicated data type implementations fit into it. I will then illustrate the utility of the framework by presenting techniques for proving the correctness of replicated data type implementations.

This is joint work with Sebastian Burckhardt (Microsoft Research), Hongseok Yang (University of Oxford) and Marek Zawirski (INRIA & UPMC-LIP6). The associated paper appeared in POPL’14: http://software.imdea.org/~gotsman/papers/distrmm-popl14.pdf
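As a concrete flavour of what a replicated data type looks like, here is a minimal sketch of a state-based grow-only counter (G-Counter), one of the simplest CRDTs. The code is written only to illustrate the idea of a commutative, associative and idempotent merge; it is not taken from the paper or from any particular system.

    /* Minimal illustrative G-Counter (not from the paper): each replica
     * increments only its own slot; states are merged with a pointwise
     * maximum, so replicas converge whatever the order of message delivery. */
    #include <stdio.h>

    #define REPLICAS 3

    typedef struct { long inc[REPLICAS]; } gcounter;

    static void increment(gcounter *c, int replica_id) {
        c->inc[replica_id]++;                 /* touch only our own slot */
    }

    static long value(const gcounter *c) {
        long v = 0;
        for (int i = 0; i < REPLICAS; i++) v += c->inc[i];
        return v;
    }

    /* Join of two states: commutative, associative and idempotent. */
    static void merge(gcounter *dst, const gcounter *src) {
        for (int i = 0; i < REPLICAS; i++)
            if (src->inc[i] > dst->inc[i]) dst->inc[i] = src->inc[i];
    }

    int main(void) {
        gcounter a = {{0}}, b = {{0}};
        increment(&a, 0); increment(&a, 0);   /* two increments at replica 0 */
        increment(&b, 1);                     /* one increment at replica 1  */
        merge(&a, &b); merge(&b, &a);         /* anti-entropy in both directions */
        printf("a = %ld, b = %ld\n", value(&a), value(&b));   /* both print 3 */
        return 0;
    }

The framework discussed in the talk is precisely about specifying and verifying data types of this kind, but for far less trivial designs than this one.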

Formal Verification of kLIBC with the WP Frama-C plug-in

Accepted at NASA Formal Methods 2014.

by Nuno Carvalho, Cristiano da Silva Sousa, Jorge Sousa Pinto, and Aaron Tomb

Abstract: This paper presents our results in the formal verification of klibc, a minimalistic C library, using the Frama-C WP tool. We report how we were able to completely verify a significant number of functions from <string.h> and <stdio.h>. We discuss difficulties encountered and describe in detail a problem in the implementation of common <string.h> functions, for which we suggest alternative implementations.

Our work shows that it is already viable to verify low-level C code that makes heavy use of pointers. Although the properties proved tend to be shallower as the code becomes lower-level, it is our view that this is an important direction towards real-world software verification, which cannot be attained by focusing only on deep properties of cleaner code written specifically to be verified.
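To give a flavour of what this kind of verification involves, here is a simplified example in the spirit of the paper (but not taken from it): a memset-like function annotated with an ACSL contract and loop annotations of the kind that the WP plug-in turns into proof obligations.

    #include <stddef.h>

    /* Illustrative example only: a simplified, char-based memset,
     * not one of the klibc functions verified in the paper. */

    /*@ requires \valid(s + (0 .. n-1));
        assigns  s[0 .. n-1];
        ensures  \forall integer k; 0 <= k < n ==> s[k] == c;
    */
    void my_memset(char *s, char c, size_t n)
    {
        size_t i;
        /*@ loop invariant 0 <= i <= n;
            loop invariant \forall integer k; 0 <= k < i ==> s[k] == c;
            loop assigns   i, s[0 .. n-1];
            loop variant   n - i;
        */
        for (i = 0; i < n; i++)
            s[i] = c;
    }

Running frama-c -wp on a file like this asks the underlying provers to discharge the generated verification conditions; the real <string.h> and <stdio.h> functions treated in the paper require considerably richer specifications (validity of strings, separation of buffers, and so on).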

High-assurance software on unreliable hardware

Software developers usually treat hardware errors as negligible when writing their code. However, the continuing miniaturization of transistors predicted by Moore’s law and the push for higher clock frequencies, among other factors, are making circuits less reliable. Despite all the error correction mechanisms in place, this can have a real impact on information consistency, leading to misbehaviors known as soft errors.

Soft errors are clearly undesirable. But what if we start making hardware that is unreliable by design? Although this may seem a strange idea, in a recent interview the Director of the Institute of Microengineering at École Polytechnique Fédérale de Lausanne argues that “We should stop designing perfect circuits” (the full interview can be found here). As he explains, the error correction mechanisms needed to deal with higher error rates may cancel the gains in size and power consumption achieved by smaller transistors. He therefore proposes following the trend of “good enough engineering”, resulting in unreliable circuits that are smaller, less expensive and consume less power. These can be applied in fields that naturally tolerate some inaccuracy, such as image and audio processing or graphical systems. For instance, a researcher at MIT’s Media Lab has used an unreliable processor to handle more than 100,000 hours of video recordings.

How this “good enough engineering” trend applies to software is starting to be explored. A research group at MIT’s Computer Science and Artificial Intelligence Laboratory has developed a framework called Rely that “enables software developers to specify when errors may be tolerable. The system then calculates the probability that the software will perform as it’s intended.” This calculus relies on the specification of failure rates for individual machine instructions.
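As a back-of-the-envelope illustration of that kind of calculus (this is not Rely’s language or its actual semantics), the reliability of a straight-line computation can be bounded by the product of the reliabilities of the instructions it executes; the figures below are purely hypothetical.

    /* Hypothetical numbers, for illustration only: if every instruction
     * succeeds independently with probability r, a computation executing
     * n of them is correct with probability at least r^n. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double r = 1.0 - 1e-7;   /* assumed per-instruction reliability      */
        long   n = 1000000;      /* instructions executed by the computation */
        double reliability = pow(r, (double)n);
        printf("P(correct result) >= %.4f\n", reliability);   /* about 0.90 */
        return 0;
    }

Even a per-instruction failure rate as small as 10^-7 compounds quickly over long runs, which is why tools like Rely, and the ability to declare which results may tolerate some error, become relevant.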

In Computer Science this is a new area that poses new challenges and interesting questions. Can we reason in an abstract model about the resilience of an algorithm to failures when these are intrinsic to the concrete hardware? A parallel can be drawn with real-time systems, which must obey timing constraints even though actual time is a property of the run-time system. In the case of programs running in the presence of failures, the constraints are imposed on the tolerance (margin of error) of the output, which shifts algorithm analysis towards a more quantitative approach.

An abstract model would allow us to analyze which algorithms are more resilient, i.e., less likely to be affected by underlying failures, or to determine how the composition of individual components affects the behavior of the overall system. In the context of HASLab this can help to answer a challenging question: what is the meaning of high-assurance software when the hardware is unreliable?

Analysing interactive devices based on information resource constraints

Analysis of the usability of an interactive system requires both an understanding of how the system is to be used and a means of assessing the system against that understanding. Such analytic assessments are particularly important in safety-critical systems, as latent vulnerabilities may exist which have negative consequences only in certain circumstances. Many existing approaches to assessment use tasks or scenarios to provide an explicit representation of their understanding of use. These normative user behaviours have the advantage that they clarify assumptions about how the system will be used, but have the disadvantage that they may exclude many plausible deviations from these norms. Assessments of how a design fails to support these user behaviours can be a matter of judgement based on individual experience rather than evidence. We present a systematic formal method for analysing interactive systems that is based on constraints rather than prescribed behaviour. These constraints capture precise assumptions about what information resources are used to perform actions. These resources may either reside in the system itself or be external to the system. The approach is applied to two different medical device designs, comparing two infusion pumps currently in common use in hospitals. Comparison of the two devices is based on these resource assumptions to assess the consistency of interaction within the design of each device.

Reusing models and properties in the analysis of similar interactive devices

Variations between user interface designs are often subtle and do not always become evident even through relatively thorough user testing. Notwithstanding their subtlety, these differences may be important to the safety or usability of a device. To address this, we are using model checking techniques to systematically check a set of template properties designed to explore important interface characteristics. We have applied the approach to two devices designed to support similar tasks in a clinical setting (two infusion pumps). The analysis of the two infusion pumps provides solid evidence of areas of concern where redesign would reduce complexity and increase the reliability of the two designs.