Choices / Decisions Sample Clauses
The "Choices / Decisions" clause defines how and by whom key selections or determinations are made under the agreement. It typically outlines the process for making decisions, such as which party has authority to choose between options, how notice of a decision must be given, or what happens if the parties cannot agree. This clause ensures that there is a clear, agreed-upon method for resolving issues that require a choice, thereby preventing disputes and delays by allocating decision-making authority and setting procedures for exercising it.
Choices / Decisions. For constraint-based deadlock checking we had the choice of either generating the deadlock freedom proof obligation with ProB or using ProB as a disprover on a generated proof obligation. Currently, the core of Rodin does not generate the deadlock freedom proof obligation. The Flow plug-in can be used to generate deadlock freedom proof obligations. The advantages of generating them within ProB, however, are the following:
• ProB knows which parts of the axioms are theorems (which can thus be ignored; they are often added to simplify proofs but can make constraint solving more difficult)
• the techniques can also be applied to classical B
For record detection we decided not to use any potential "hints" provided by the Records plug-in, but to infer the information from the axioms. In this way, the improvement can also be applied to records generated manually (as was the case in the Bosch case study) or in a classical B setting.
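To make the constraint-based check concrete, a schematic formulation follows (A stands for the axioms, I for the invariants, and G_1, ..., G_n for the event guards; these are generic placeholders, not taken from a particular model). The deadlock freedom proof obligation has the shape

    A(c) ∧ I(c, v) ⇒ G_1(c, v) ∨ ... ∨ G_n(c, v)

and the disprover / constraint-solving formulation asks for constants c and variables v satisfying

    A(c) ∧ I(c, v) ∧ ¬(G_1(c, v) ∨ ... ∨ G_n(c, v))

A solution is a state that satisfies the axioms and invariants but enables no event (a potential deadlock); if the constraint is unsatisfiable, deadlock freedom holds.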
Choices / Decisions. 5.3.1. Enhancement and Maintenance of Existing UML-B
Choices / Decisions. 8.3.1. Flow plug-in
Choices / Decisions. On the Core Rodin Platform side, implementing mathematical extensions required making extensible some parts of the code that were not designed to be so, namely the lexer and the parser. We had been using tools that generated them automatically from a fixed grammar description, so we had to change to other technologies. A study [1] was made of the available technologies. The ▇▇▇▇▇ algorithm was selected because it suited the purpose and did not have the drawbacks of the other technologies:
• foreign language integration
• overhead due to over-generality
After a mock-up phase to verify feasibility, the ▇▇▇▇▇ algorithm was confirmed as the chosen option and implemented in the Rodin Platform. Besides, we wanted to set up a way to publish and share theories for Rodin users, in order to constitute a database of pre-built theories for everyone to use and contribute to. This has been realised by adding a new tracker on the SourceForge site ([2]). The Theory plug-in contributes a theory construct to the Rodin database. Theories were used in the Rule-based Prover (before it was discontinued) as a placeholder for rewrite rules. Given the usability advantages of the theory component, it was decided to use it to define mathematical extensions (new operators and new datatypes). Another advantage of using the theory construct is the possibility of using proof obligations to ensure that the soundness of the formalism is not compromised. Proof obligations are generated to validate any properties of new operators (e.g., associativity). With regard to prover extensions, it was decided that the Theory plug-in inherits the capability to define and validate rewrite rules from the Rule-based Prover. Furthermore, support for a simple yet powerful subset of inference rules is added, and polymorphic theorems can be defined within the same setting. Proof obligations are, again, used as a filter against potentially unsound proof rules. The Records plug-in required extending the Rodin database with new constructs to support structured types. On the other hand, the Event-B language itself did not support extension at that time. For that reason the decision was made to address the extensibility problem at the lowest level possible, which was the Rodin database, but to model structured types using standard Event-B notation at the level below. The translation from extended to standard syntax has been entrusted to the static checker, which was also extended for this purpose. Thus the ...
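As a concrete instance of such an obligation, for a new binary operator op declared to be associative, the generated proof obligation has, schematically (leaving out typing hypotheses; the exact form produced by the Theory plug-in may differ), the shape

    ∀ a, b, c · op(op(a, b), c) = op(a, op(b, c))

so that the associativity claimed for the operator must actually be proved before it can be relied upon.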
Choices / Decisions.
• SVN Teamwork: The desired objective of a plug-in that would bring support for Subversion to Rodin was to make a Rodin project compatible with the standard SVN interface. Due to the nature of Rodin resource management, in particular the use of the Rodin database and non-XMI serialisation, this turned out to be a hard task. A solution to this difficulty was to provide an alternative serialisation method that would be compatible with the Subversion interface. XMI serialisation was chosen in the final plug-in, which, together with the Event-B EMF framework, provides a shareable copy of the resources of a Rodin project and takes care of synchronisation between the two.
• Decomposition: The two styles of decomposition use as partition criteria two of the most important elements of an Event-B model: variables and events. The plug-in supports both styles and allows the decomposition to be performed through a stepwise wizard or through a decomposition file (with extension .dcp) that can be stored and re-run whenever necessary. For the shared event decomposition, the user selects which variables are allocated to which sub-component. For the shared variable decomposition, the user selects which events will be part of which sub-component. The rest of the sub-component (which is no more than an ordinary machine) is built automatically (after some validations); a sketch of the allocation logic for the shared event style is given after this list.
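The following sketch illustrates that allocation logic (hypothetical class and method names, not the plug-in's actual API): each sub-component receives the events that read or write at least one of the variables the user allocated to it, and an event touching variables of several partitions is precisely one that ends up synchronised across the resulting sub-components.

    import java.util.*;

    /** Illustrative sketch of shared-event decomposition (not the plug-in's real API). */
    class SharedEventDecomposition {

        /** An event is identified by its name and the variables it reads or writes. */
        record Event(String name, Set<String> variables) {}

        /**
         * For each sub-component (a user-chosen set of variables), collect the events
         * that access at least one of its variables. Events appearing in more than one
         * sub-component become the synchronised ("shared") events.
         */
        static Map<String, List<Event>> allocate(Map<String, Set<String>> partition,
                                                 List<Event> events) {
            Map<String, List<Event>> result = new LinkedHashMap<>();
            for (var entry : partition.entrySet()) {
                List<Event> allocated = new ArrayList<>();
                for (Event e : events) {
                    if (!Collections.disjoint(e.variables(), entry.getValue())) {
                        allocated.add(e);
                    }
                }
                result.put(entry.getKey(), allocated);
            }
            return result;
        }
    }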
Choices / Decisions. For MBT using state-based models, test generation algorithms usually traverse the state space starting from an initial state, guided by a certain coverage criterion (e.g. state coverage), and collect the execution paths in a test suite. Event-B models do not have an explicit state space; rather, their state space is given by the values of the variables, and the state is changed by the execution of events that are enabled in that state. The ProB tool has a good grip on the state space, being able to explore it, visualize it, and verify various properties using model checking algorithms. Such model checking algorithms can be used to explore the state space of Event-B models using certain coverage criteria (e.g. event coverage) and thus to generate test cases along the traversal. Moreover, the input data that triggers the different events provides the test data associated with the test cases. Given the above considerations, the following choices and decisions have been made:
• Using explicit model checking: First, model checking algorithms as described in the previous paragraph were implemented and applied to message choreography models from SAP. They work fine for models whose data has a small finite range. However, for variables with a large range (e.g. integers), the well-known state space explosion problem creates difficulties, since the model checker explores the state space by enumerating the many possible values of the variables. This required considering different approaches, as described below.
• Using constraint solving: To avoid the state space explosion due to the large bounds of the variables, another approach ignores these values in a first step and uses the model checker only to generate abstract test cases satisfying the coverage criteria. However, these paths may be infeasible in the concrete model due to the data constraints along the path. The solution is to represent the intermediate states of the path as existentially quantified variables. The whole path is then represented as a single predicate consisting of the guards and before-after predicates of its events (see the schematic formulation after this list). ProB's improved constraint solver (see Model Animation[1]) is then used to validate the path's feasibility and find appropriate data satisfying the constraints.
• Using meta-heuristic search algorithms: As an alternative to the above constraint solving approach, we also investigated a recent approach to test data generation using meta-heuristic search algorithms (e.g. evolutionary and genetic a...
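The constraint-solving step can be stated schematically (placeholder notation, not tied to a particular model): an abstract test case e_1, ..., e_k produced by the model checker, with guards G_i and before-after predicates BA_i, is feasible iff the following single predicate over existentially quantified intermediate states is satisfiable:

    ∃ v_0, v_1, ..., v_k · Init(v_0) ∧ G_1(v_0) ∧ BA_1(v_0, v_1) ∧ ... ∧ G_k(v_{k-1}) ∧ BA_k(v_{k-1}, v_k)

Any solution found by the constraint solver directly supplies the concrete test data for that path.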
Choices / Decisions. Over the reporting period of the last year, the following three different solutions were defined, implemented, and tested:
• Using abstraction and constraint solving: To avoid the state space explosion due to the large bounds of the variables, this approach ignores these values in a first step and uses the ProB model checker only to generate abstract test cases satisfying the given coverage criteria. However, these paths may be infeasible in the concrete model due to the data constraints along the path. The solution is to represent the intermediate states of the path as existentially quantified variables. The whole path is thus represented as a single predicate consisting of the guards and before-after predicates of its events. ProB's improved constraint solver is then used to validate the path's feasibility and find appropriate test data satisfying the constraints.
• Using model-learning: Event-B models are essentially abstract state machines. However, their states are not given explicitly; instead, they can be implicitly derived from the values of the model variables. Since the notion of state is at the heart of MBT, we provide a model-learning approach that uses the notion of cover automata to iteratively construct a subset of the state space together with an associated test suite. The iterative nature of the algorithm fits well with the notion of Event-B refinement. It can also be adapted to work with decomposed Event-B models.
• Using search-based techniques: We also investigated meta-heuristic search approaches, including evolutionary algorithms, in an attempt to tackle large variable domains and state spaces in the search for test data. We worked on two applications: first, generation of test data for a given sequence of events (test case) and, second, a generalisation to generation of test suites satisfying different coverage criteria.
Furthermore, the output of the above procedures was complemented by test suite optimisations (based on evolutionary techniques) according to different test coverage criteria; a simplified sketch of this coverage-based reduction is given below. This was motivated by the fact that the test generators sometimes produce too many feasible tests. This may be useful for certain types of intensive testing (e.g. conformance testing), but in other scenarios it is more appropriate to generate smaller test suites for lighter coverage criteria (e.g. each event is executed at least once).
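As a simplified stand-in for the evolutionary optimisation mentioned above, the coverage-based reduction idea can be illustrated with a greedy selection for the "each event at least once" criterion (test cases are modelled here as plain lists of event names; this is an illustration, not the tool's implementation):

    import java.util.*;

    /** Greedy test-suite reduction for event coverage (illustrative stand-in for the
     *  evolutionary optimisation described above). */
    class SuiteReduction {

        /** Pick a small subset of test cases so that every event covered by the full
         *  suite is still covered; classic greedy set-cover heuristic. */
        static List<List<String>> reduce(List<List<String>> suite) {
            Set<String> uncovered = new HashSet<>();
            suite.forEach(uncovered::addAll);
            List<List<String>> reduced = new ArrayList<>();
            while (!uncovered.isEmpty()) {
                // choose the test case covering the most still-uncovered events
                List<String> best = Collections.max(suite, Comparator.comparingInt(
                        (List<String> t) -> (int) t.stream().filter(uncovered::contains).count()));
                reduced.add(best);
                uncovered.removeAll(best);
            }
            return reduced;
        }
    }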
Choices / Decisions. Considering the demonstrator as a baseline, we can list the new features as follows:
• Tasking Event-B is now integrated with the Event-B explorer. It uses the extensibility mechanism of Event-B EMF (in the previous version it was a separate model).
• Tasking Event-B is now integrated with the Event-B model editors. Tasking Event-B features can now be edited in the same place as the other Event-B features.
• We have the ability to translate to Java and C, in addition to Ada source code, and the source code is placed in appropriate files within the project.
• We use theories to define translations of the Event-B mathematical language (theories for Ada and C are supplied); an illustrative sketch is given below.
• We use the theory plug-in as a mechanism for defining new data types, and the translations to target data types.
• The Tasking Event-B to Event-B translator is fully integrated. The previous tool generated a copy of the project, but this is no longer the case.
• The translator is extensible.
• The composed machine component is used to store event 'synchronizations'.
• Minimal use is made of the EMF tree editor in Rose.
These evolutions will be detailed hereafter. Moreover, the code generators have been completely re-written. The translators are now implemented using Java only. In our previous work we attempted to make use of the latest model-to-model transformation technology available in the Epsilon tool set[7], but we decided to revert to Java since Epsilon lacked the debugging and productivity features of the Eclipse Java editor.
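The role of the translation theories can be illustrated with a small sketch (the rule tables and target strings below are made up for illustration and do not reflect the actual theory format): each rule maps an Event-B type or operator to its rendering in the target language, and the code generator looks the rules up while emitting code.

    import java.util.Map;

    /** Illustrative sketch of theory-style translation rules (hypothetical, not the
     *  actual Theory plug-in format). Maps Event-B constructs to C renderings. */
    class TranslationRules {
        // Event-B type -> target type, as a C back end might declare it
        static final Map<String, String> TYPES = Map.of(
                "INT",  "int32_t",
                "BOOL", "bool");

        // Event-B operator -> target expression template
        static final Map<String, String> OPERATORS = Map.of(
                "+",   "%s + %s",
                "mod", "%s %% %s");

        static String translateBinary(String op, String left, String right) {
            return String.format(OPERATORS.get(op), left, right);
        }
    }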
Choices / Decisions. The tasks performed on the decomposition plug-in were focused on consolidation.
Choices / Decisions. Aggressive compression can also induce a performance penalty. The new default mode was chosen such that there should be no performance penalty, with reduced memory usage. (Indeed, the time for compression is regained by reduced time to store and retrieve the states.) A more aggressive setting can be forced by -p COMPRESSION TRUE. This will further reduce memory consumption, but may increase runtime (although quite often it does not).
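For example, assuming the probcli command-line front end and a model file name chosen here purely for illustration, the aggressive setting can be combined with a model checking run as follows:

    probcli -mc 100000 -p COMPRESSION TRUE machine.mch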
