If ubiquitous computing (ubicomp) is to enhance physical environments, then early and accurate assessment of alternative solutions is necessary to avoid the costly deployment of systems that fail to meet requirements. In the context of APEX, a project ending this month, we have developed a prototyping framework that combines a 3D Application Server with a behaviour modelling tool. The contribution of this framework is that it allows exhaustive analysis of the behaviour models that drive the prototype while, at the same time, enabling immersive exploration of a virtual environment simulating the proposed system. The development of prototypes is supported through three layers: a simulation layer (using OpenSimulator); a modelling layer (using CPN Tools); and a physical layer (using external devices and real users). The APEX framework allows movement between these layers to analyse different features, from user experience to user behaviour. The multi-layer approach makes it possible to express user behaviour in the modelling layer, reduces the number of real users needed by adding simulated avatars, and supports user testing of hybrids of virtual and real components as well as exhaustive analysis. This paper demonstrates the approach by means of an example, placing particular emphasis on the simulation of virtual environments, low-cost prototyping, and the formal analysis capabilities.
We are opening a Post-Doc position at HASLab, supported by a one-year grant.
The grant is for a PhD holder, who will join our research line in Distributed Data Aggregation and Monitoring. The successful candidate is expected to conduct research on distributed aggregation techniques, e.g. Flow Updating and Extrema Propagation, and to explore improvements that allow faster and more reactive adaptation to changes in the network and in the monitored values. Prospective candidates should consult this announcement.
HASLab has a very active research team in Large Scale Distributed Systems, working in both Systems and Theory. Our laboratory is located in the Braga Campus of Universidade do Minho, one of the top state-funded research universities in Portugal and in the top 3% of the Scimago IBE ranking. We are part of the Associate Laboratory INESC TEC. Braga is very close to the Gerês Natural Park, and has good connections to Porto (and its vibrant cultural life) and the International Airport (a 30-minute shuttle bus to Braga). It also has a very competitive cost of living compared to other European cities.
This call is open from June 16th to June 30th, 2014.
Reasoning About Eventual Consistency with Replicated Data Types (Alexey Gotsman – IMDEA Software Institute)
Abstract: Modern databases underlying large-scale Internet services guarantee immediate availability and tolerate network partitions at the expense of providing only weak forms of consistency, commonly dubbed eventual consistency. Even though the folklore notion of eventual consistency is very weak, in reality the semantics and programming interfaces of systems implementing it can be very subtle. In particular, such systems can resolve conflicting updates using complex protocols, called replicated data types (aka CRDTs), and can allow varying kinds of anomalies.
So far the semantics provided by eventually consistent systems have been poorly understood. I will present a novel framework for their formal specification and discuss how recent nontrivial replicated data type implementations fit into it. I will then illustrate the utility of the framework by presenting techniques for proving the correctness of replicated data type implementations.
This is joint work with Sebastian Burckhardt (Microsoft Research), Hongseok Yang (University of Oxford) and Marek Zawirski (INRIA & UPMC-LIP6). The associated paper appeared in POPL’14.
Accepted at NASA Formal Methods 2014.
by Nuno Carvalho, Cristiano da Silva Sousa, Jorge Sousa Pinto, and Aaron Tomb
Abstract: This paper presents our results in the formal verification of klibc, a minimalistic C library, using the Frama-C WP tool. We report how we were able to completely verify a significant number of functions from <string.h> and <stdio.h>. We discuss difficulties encountered and describe in detail a problem in the implementation of common <string.h> functions, for which we suggest alternative implementations.
Our work shows that it is already viable to verify low-level C code with heavy use of pointers. Although the properties proved tend to be shallower as the code becomes lower-level, it is our view that this is an important direction towards real-world software verification, which cannot be attained by focusing only on deep properties of cleaner code written specifically to be verified.
Software developers usually consider hardware errors negligible when writing their code. However, the continuous miniaturization of transistors predicted by Moore’s law and the push for higher frequencies, among other factors, are making circuits less reliable. Despite error-correction mechanisms, this can have a real impact on information consistency, leading to misbehaviors called soft errors.
Soft errors are clearly undesirable. But what if we start making hardware that is unreliable by design? Although this may seem an odd idea, in a recent interview the Director of the Institute of Microengineering at École Polytechnique Fédérale de Lausanne argues that “We should stop designing perfect circuits” (the full interview can be found here). As he explains, the error-correction mechanisms needed to deal with higher error rates may cancel the gains in size and power consumption achieved by smaller transistors. He therefore proposes following the trend of “good enough engineering”, resulting in the design of unreliable circuits that are smaller, less expensive, and consume less power. These can be applied in fields that naturally tolerate some inaccuracy, such as image and audio processing or graphical systems. For instance, a researcher at MIT’s Media Lab has used an unreliable processor to handle more than 100,000 hours of video recordings.
How this “good enough engineering” trend applies to software is starting to be evaluated. A research group at MIT’s Computer Science and Artificial Intelligence Laboratory has developed a framework called Rely that “enables software developers to specify when errors may be tolerable. The system then calculates the probability that the software will perform as it’s intended.” This calculation relies on the specification of failure rates for individual machine instructions.
In Computer Science this is a new area that poses interesting challenges and questions. Can we reason, in an abstract model, about the resilience of an algorithm to failures when these are intrinsic to the concrete hardware?
A certain parallel can be drawn with real-time systems, which obey timing constraints even though actual time is a property of the run-time system. In the case of programs in the presence of failures, the constraints are imposed on the tolerance (margin of error) of the output, which shifts algorithm analysis towards a more quantitative approach.
An abstract model would allow us to analyze which algorithms are more resilient, i.e., less likely to be affected by underlying failures, or to determine how the composition of individual components affects the behavior of the overall system. In the context of HASLab this can help to answer a challenging question: what is the meaning of high-assurance software when the hardware is unreliable?