A Concise Introduction to Propositional Dynamic Logic


The theory behind computation has never been more important. Theory of Computation is a unique textbook that serves the dual purpose of covering core material in the foundations of computing and providing an introduction to some more advanced contemporary topics. This innovative text focuses primarily, although by no means exclusively, on computational complexity theory: the classification of computational problems in terms of their inherent complexity. It incorporates rigorous treatment of computational models, such as deterministic, nondeterministic, and alternating Turing machines; circuits; probabilistic machines; interactive proof systems; automata on infinite objects; and logical formalisms.

Although the complexity universe stops at polynomial space in most treatments, this work also examines higher complexity levels all the way up through primitive and partial recursive functions and the arithmetic and analytic hierarchies. Computing professionals and other scientists interested in learning more about these topics will find it a valuable resource. Separability in domain semirings: 1 edition, published in English and held by 16 WorldCat member libraries worldwide. Logics of Programs by Dexter Kozen: 3 editions, published in English and held by 11 WorldCat member libraries worldwide.

Logic in Computer Science: 10th Symposium, LICS '95, by Dexter Kozen: 2 editions, published in English and held by 10 WorldCat member libraries worldwide. The proceedings of LICS '95 comprise technical papers on topics in program logics, finite models, model checking and verification, theorem proving and AI, concurrency, semantics, lambda calculus and types, unification and rewriting, and linear logic. There are also four invited presentations: a complete proof system for QPTL; the semantic challenge of Verilog HDL; experience using type theory as a foundation for computer science; and origins and metamorphoses of the trinity of logic, nets, and automata.


Logics of Programs by Edmund M. Clarke: 2 editions, published in English and held by 10 WorldCat member libraries worldwide. Decidability of systems of set constraints with negative constraints by Alexander Aiken: 3 editions, published in English and held by 6 WorldCat member libraries worldwide. Abstract: Set constraints are relations between sets of terms. They have been used extensively in various applications in program analysis and type inference. Recently, several algorithms for solving general systems of positive set constraints have appeared.

In this paper we consider systems of mixed positive and negative constraints, which are considerably more expressive than positive constraints alone. We show that it is decidable whether a given such system has a solution. The proof involves a reduction to a number-theoretic decision problem that may be of independent interest.
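
For orientation, a schematic mixed system (our own illustration, not one from the paper) over a signature with a constant a and a binary function symbol f could be written as

\[ a \subseteq Y, \qquad f(X, Y) \subseteq X, \qquad X \not\subseteq Y, \]

where the first two constraints are positive and the third is negative; a solution, if one exists, assigns to each variable a set of ground terms satisfying all three.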



We present ReLoC: a logic for proving refinements of programs in a language with higher-order state, fine-grained concurrency, polymorphism and recursive types. In contrast to earlier work on refinements for languages with higher-order state and concurrency, ReLoC provides type- and structure-directed rules for manipulating the refinement judgement, whereas previously such proofs were carried out by unfolding the judgement into its definition in the model. These more abstract proof rules make it simpler to carry out refinement proofs. Moreover, we introduce logically atomic relational specifications: a novel approach to relational specifications for compound expressions that take effect at a single instant in time.

We demonstrate how to formalise and prove such relational specifications in ReLoC, allowing for more modular proofs. ReLoC is built on top of the expressive concurrent separation logic Iris, allowing us to leverage features of Iris such as invariants and ghost state. We provide a mechanisation of our logic in Coq, which contains not just a proof of soundness but also tactics for interactively carrying out refinement proofs.

We have used these tactics to mechanise several examples, which demonstrates the practicality and modularity of our logic.

To do so, a general, abstract framework for studying behavioural relations taking values over quantales is introduced, following Lawvere's analysis of generalised metric spaces. Barr's notion of relator (or lax extension) is then extended to quantale-valued relations, adapting and extending results from the field of monoidal topology.

Abstract notions of quantale-valued effectful applicative similarity and bisimilarity are then defined and proved to be a compatible generalised metric (in the sense of Lawvere) and pseudometric, respectively, under mild conditions.

We show how the language of Krivine's classical realizability may be used to specify various forms of nondeterminism and relate them with properties of realizability models.

More specifically, we introduce an abstract notion of multi-evaluation relation which allows us to finely describe various nondeterministic behaviours. This defines a hierarchy of computational models, ordered by their degree of nondeterminism, similar to Sazonov's degrees of parallelism. What we show is a duality between the structure of the characteristic Boolean algebra of a realizability model and the degree of nondeterminism in its underlying computational model.

We introduce open games as a compositional foundation of economic game theory. A compositional approach potentially allows methods of game theory and theoretical computer science to be applied to large-scale economic models for which standard economic tools are not practical. An open game represents a game played relative to an arbitrary environment and to this end we introduce the concept of coutility, which is the utility generated by an open game and returned to its environment.

Open games are the morphisms of a symmetric monoidal category and can therefore be composed by categorical composition into sequential move games and by monoidal products into simultaneous move games. Open games can be represented by string diagrams which provide an intuitive but formal visualisation of the information flows. We show that a variety of games can be faithfully represented as open games in the sense of having the same Nash equilibria and off-equilibrium best responses.

Nakano's later modality allows types to express that the output of a function does not immediately depend on its input, and thus that computing its fixpoint is safe. This idea, guarded recursion, has proved useful in various contexts, from functional programming with infinite data structures to formulations of step-indexing internal to type theory. Categorical models have revealed that the later modality corresponds in essence to a simple reindexing of the discrete time scale.

Unfortunately, existing guarded type theories suffer from significant limitations for programming purposes. These limitations stem from the fact that the later modality is not expressive enough to capture precise input-output dependencies of functions. As a consequence, guarded type theories reject many productive definitions. Combining insights from guarded type theories and synchronous programming languages, we propose a new modality for guarded recursion.

This modality can apply any well-behaved reindexing of the time scale to a type. We call such reindexings time warps. Several modalities from the literature, including later, correspond to fixed time warps, and thus arise as special cases of ours.

The Query Determinacy Problem is the problem of deciding, for given queries Q and Q0, whether the answers to the queries in Q determine the answer to Q0. Many versions of this problem, for different query languages, were studied in database theory. In this paper we solve a problem stated in [CGLV02] and show that the Query Determinacy Problem is undecidable for Regular Path Queries, the paradigmatic query language of graph databases.

Categorical quantum mechanics places finite-dimensional quantum theory in the context of compact closed categories, with an emphasis on diagrammatic reasoning. In this framework, two equational diagrammatic calculi have been proposed for pure-state qubit quantum computing: the ZW calculus, developed by Coecke, Kissinger and the first author for the purpose of qubit entanglement classification, and the ZX calculus, introduced by Coecke and Duncan to give an abstract description of complementary observables.

Neither calculus, however, provided a complete axiomatisation of its model. In this paper, we present extended versions of ZW and ZX, and show their completeness for pure-state qubit theory, thus solving two major open problems in categorical quantum mechanics. First, we extend the original ZW calculus to represent states and linear maps with coefficients in an arbitrary commutative ring, and prove completeness by a strategy that rewrites all diagrams into a normal form.

We then extend the language and axioms of the original ZX calculus, and show their completeness for pure-state qubit theory through a translation between ZX and ZW specialised to the field of complex numbers.

A proof that all languages satisfying such a k-fold iterated distributive law are in PDL would settle decidability of PDL. Furthermore, we show that this class is decidable. This provides a novel nontrivial decidable subclass of PDL, and demonstrates the viability of the proposed approach to deciding PDL in general.

We present a development of cellular cohomology in homotopy type theory. Cohomology associates to each space a sequence of abelian groups capturing part of its structure, and has the advantage over homotopy groups that, for many common spaces, these abelian groups are easier to compute. Cellular cohomology is a special kind of cohomology designed for cell complexes: these are built in stages by attaching spheres of progressively higher dimension, and cellular cohomology defines the groups out of the combinatorial description of how spheres are attached.
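
Classically, and with integer coefficients for concreteness, the construction mirrored by this development is the cellular cochain complex:

\[ C^n(X) \;=\; \mathrm{Hom}\big(\mathbb{Z}\langle n\text{-cells of } X \rangle, \mathbb{Z}\big), \qquad H^n(X) \;=\; \ker \delta^n / \operatorname{im} \delta^{n-1}, \]

where the coboundary maps \(\delta\) are computed from the degrees of the attaching maps of the cells.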

Our main result is that for finite cell complexes, a wide class of cohomology theories, including the ones defined through Eilenberg-MacLane spaces, can be calculated via cellular cohomology. This result was formalized in the Agda proof assistant.

We exhibit an algorithm to compute the strongest polynomial or algebraic invariants that hold at each location of a given affine program. Our main tool is an algebraic result of independent interest: given a finite set of rational square matrices of the same dimension, we show how to compute the Zariski closure of the semigroup that they generate.

Proof nets for MLL (unit-free Multiplicative Linear Logic) are concise graphical representations of proofs which are canonical in the sense that they abstract away syntactic redundancy such as the order of non-interacting rules. We argue that Girard's extension to MLL1 (first-order MLL) fails to be canonical because of redundant existential witnesses, and present canonical MLL1 proof nets, called unification nets, without them.

Cut elimination for unification nets is local and linear time, while Girard's is non-local and exponential time.

Since some unification nets are exponentially smaller than corresponding Girard nets and sequent proofs, technical delicacy is required to ensure that correctness checking remains polynomial time (quadratic). Current work extends unification nets to additives and uses them to extend combinatorial proofs [Proofs without syntax, Annals of Mathematics] to classical first-order logic.

Satisfiability of Boolean circuits is among the best-known and most important problems in theoretical computer science. This problem is NP-complete in general but becomes polynomial time when restricted either to monotone gates or to linear gates. We go outside the Boolean realm and consider circuits built from any fixed set of gates over an arbitrarily large finite domain.

From the complexity point of view this is closely connected with the problems of solving equations, or systems of equations, over finite algebras. The research reported in this work was motivated by a desire to know for which finite algebras A there is a polynomial time algorithm that decides if an equation over A has a solution. We are also looking for polynomial time algorithms that decide if two circuits over a finite algebra compute the same function.

Although we have not managed to solve these problems in the most general setting we have obtained such a characterization for a very broad class of algebras from congruence modular varieties. This class includes most known and well-studied algebras such as groups, rings, modules and their generalizations like quasigroups, loops, near-rings, nonassociative rings, Lie algebras, lattices and their extensions like Boolean algebras, Heyting algebras or other algebras connected with multi-valued logics, including MV-algebras.

This paper seems to be the first systematic study of the computational complexity of satisfiability of non-Boolean circuits and solving equations over finite algebras. Our characterization is given in terms of nice structural properties of algebras for which the problems are solvable in polynomial time. Such algebras have to decompose into two factors: a nilpotent one and a factor that essentially behaves as a finite distributive lattice.

We introduce the first complete and approximately universal diagrammatic language for quantum mechanics. We prove the completeness of this fragment using the recently studied ZW-calculus, a calculus dealing with integer matrices. The ZX-calculus is a graphical language for diagrammatic reasoning in quantum mechanics and quantum information theory. The linearity of the diagrams reflects the phase group structure, an essential feature of the ZX-calculus. In particular, all the axioms of the ZX-calculus involve linear diagrams.

This paper provides an alternate characterization of second-order polynomial-time computability, with the goal of making second-order complexity theory more approachable. We rely on the usual oracle machines to model programs with subroutine calls. In contrast to previous results, the use of higher-order objects as running times is avoided, either explicitly or implicitly. Instead, regular polynomials are used. This is achieved by refining the notion of oracle-poly-time computability introduced by Cook. We impose a further restriction on oracle interactions to force feasibility.

Both the restriction and its purpose are very simple: it is well-known that Cook's model allows polynomial depth iteration of functional inputs with no restrictions on size, and thus does not preserve poly-time computability. To mend this we restrict the number of lookahead revisions, that is the number of times a query whose size exceeds that of any previous query may be asked. We prove that this leads to a class of feasible functionals and that all feasible problems can be solved within this class if one is allowed to separate a task into efficiently solvable subtasks.

Formally, the closure of our class under lambda-abstraction and application is exactly the class of basic feasible functionals. We also revisit the very similar class of strongly poly-time computable operators previously introduced by Kawamura and Steinberg. We prove it to be strictly included in our class and, somewhat surprisingly, to have the same closure property. This is due to the nature of the limited recursion operator: it is not strongly poly-time but decomposes into two such operations and lies in our class.

Differential Linear Logic (DiLL), introduced by Ehrhard and Regnier, extends linear logic with a notion of linear approximation of proofs.

While DiLL is classical logic, it has so far lacked a model reflecting this; we solve this issue by constructing a model of it based on nuclear topological vector spaces and distributions with compact support. This interpretation sheds new light on the rules of DiLL, as we are able to understand them as computational principles for the resolution of Linear Partial Differential Equations.

We thus introduce D-DiLL, a deterministic refinement of DiLL with a D-exponential, for which we exhibit a cut-elimination procedure and a categorical semantics. We recover linear logic and its differential extension DiLL as a particular case.

Assuming that A is a set, we show an approximation to the question, namely that the fundamental groups of F A are trivial.

We present the conditional value-at-risk (CVaR) in the context of Markov chains and Markov decision processes with reachability and mean-payoff objectives.

CVaR quantifies risk by means of the expectation of the worst p-quantile. As such, it can be used to design risk-averse systems. We consider not only CVaR constraints, but also introduce their conjunction with expectation constraints and quantile constraints (value-at-risk, VaR). We derive lower and upper bounds on the computational complexity of the respective decision problems and characterize the structure of the strategies in terms of memory and randomization.
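
To make the quantity concrete, here is a small sample-based sketch in Python (using numpy; the payoff distribution and variable names are ours, not the paper's): CVaR at level p is the average over the worst p-fraction of sampled outcomes, while VaR is the threshold delimiting that fraction.

    import numpy as np

    def cvar(samples, p):
        """Conditional value-at-risk at level p: mean of the worst p-fraction of
        outcomes (here 'worst' means lowest payoff; flip signs when working with losses)."""
        xs = np.sort(np.asarray(samples, dtype=float))  # ascending: worst payoffs first
        k = max(1, int(np.ceil(p * len(xs))))           # size of the worst p-quantile
        return xs[:k].mean()

    # toy experiment: 10,000 sampled payoffs of some randomised system
    rng = np.random.default_rng(0)
    payoffs = rng.normal(loc=1.0, scale=2.0, size=10_000)
    print("VaR  at p = 0.05:", np.quantile(payoffs, 0.05))  # threshold of the worst 5%
    print("CVaR at p = 0.05:", cvar(payoffs, 0.05))         # mean within the worst 5%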

It was recently shown by van den Broeck et al. We note that the former generalizes the extension of FO2 with a functional relation symbol. We also identify a complete classification of first-order prefix classes according to whether WFOMC is in polynomial time or #P1-complete.

We revisit many aspects of the syntactic relations between variants of classical linear logic (LL) and variants of intuitionistic linear logic (ILL) in the propositional setting. On the one hand, we study different parametric "negative" translations from LL to ILL: their expressiveness, the relations with extensions of LL, and their use in the proof theory of LL (cut elimination and focusing).

In particular, this bridges the intuitionistic restriction on sequents (at most one conclusion) and the focusing property of linear logic. On the other hand, we generalise the known partial results about conservativity of LL over ILL, leading for example to a conservativity proof for LL over tensor logic (TL).

We present a new quasi-polynomial algorithm for solving parity games. It is based on a new bisimulation-invariant measure of complexity for parity games, called the register-index, which captures the complexity of the priority assignment.

For fixed parameter k, the class of games with register-index bounded by k is solvable in polynomial time. We show that the register-index of parity games of size n is bounded by O(log n) and derive a quasi-polynomial algorithm.

Our goal is to generalise the separation theorem to this probabilistic setting. Every such model induces a canonical enrichment that we show soundly models an LNL lambda calculus for string diagrams, introduced by Rios and Selinger with primary application in quantum computing.

Our abstract treatment of this language leads to simpler concrete models compared to those presented so far. We also extend the language with general recursion and prove soundness. Finally, we present an adequacy result for the diagram-free fragment of the language which corresponds to a modified version of Benton and Wadler's adjoint calculus with recursion.

Recently Scott showed how to introduce probability by extending these models with random variables. However, to reason about correctness and to add further features, it is useful to reinterpret the construction in a higher-order Boolean-valued model involving a measure algebra. We exhibit a number of key equations satisfied by the terms of our language. The terms are interpreted using a continuation-style semantics with an additional argument, an infinite sequence of coin tosses, which serves as a source of randomness. Finally, we develop a new notion of equality between terms interpreted in a measure algebra, allowing one to reason about terms that may not be equal almost everywhere.

This provides a new framework and reasoning principles for probabilistic programs and their higher-order properties.

Markov processes are a fundamental model of probabilistic transition systems and are the underlying semantics of probabilistic programs. We give an algebraic axiomatisation of Markov processes using the framework of quantitative equational logic introduced in [13]. We present the theory in a structured way using work of Hyland et al. We take the interpolative barycentric algebras of [13], which capture the Kantorovich metric, and combine them with a theory of contractive operators to give the required axiomatisation of Markov processes both for discrete and continuous state spaces.

This work, apart from its intrinsic interest, shows how one can extend the general notion of combining effects to the quantitative setting.

We introduce a topologically-aware version of tensorial logic, called ribbon tensorial logic. To every proof of the logic, we associate a ribbon tangle which tracks the flow of tensorial negations inside the proof. The translation is functorial: it is performed by exhibiting a correspondence between the notion of dialogue category in proof theory and the notion of ribbon category in knot theory. Our main result is that the translation is also faithful: two proofs are equal modulo the equational theory of ribbon tensorial logic if and only if the associated ribbon tangles are equal up to topological deformation.

This "proof-as-tangle" theorem may be understood as a coherence theorem for balanced dialogue categories, and as a mathematical foundation for topological game semantics.

Concurrent separation logic (CSL) is a specification logic for concurrent imperative programs with shared memory and locks. In this paper, we develop a concurrent and interactive account of the logic inspired by asynchronous game semantics. To every program C, we associate a pair of asynchronous transition systems [C]S and [C]L which describe the operational behavior of the Code when confronted with its Environment or Frame, both at the level of machine states (S) and at the level of machine instructions and locks (L).

We advocate that this provides a clean and conceptual explanation for the usual soundness theorem of CSL, including the absence of data races.


We present a sound and complete axiomatisation of the Riesz modal logic extended with one inductively defined operator which allows the definition of threshold operators. This logic is capable of interpreting the bounded fragment of probabilistic CTL over discrete and continuous Markov chains.

However, the property of normalization, and therefore that of soundness, was only conjectured. On the one hand, we take advantage of a variant of Krivine's classical realizability that we developed to prove the normalization of classical call-by-need [20].

On the other hand, we benefit from dLtp, a classical sequent calculus with dependent types in which type safety is ensured by using delimited continuations together with a syntactic restriction [19].

In this paper we answer an open question known as the "Gamma question", related to the recent notion of coarse computability, which stems from complexity theory. The question was formulated by Andrews, Cai, Diamondstone, Jockusch and Lempp in "Asymptotic density, computable traceability and 1-randomness" [1].

The Gamma value of an oracle set measures to what extent each set computable with the oracle is approximable in the sense of density by a computable set. The closer to 1 this value is, the closer the oracle is to being computable. In this paper, we pursue some work initiated by Monin and Nies in "A unifying approach to the Gamma question" [19]. Using notions from computability theory, developed by Monin and Nies, together with some basic techniques from the field of error-correcting codes, we are able to give a negative answer to this question.

The proof we give also provides an answer to a related question, asked by Denis Hirschfeldt in the expository paper "Some questions in computable mathematics" [12]. We also solve the Gamma problem for bases other than 2, answering another question of Monin and Nies.

We set both constructions within a logical-predicates-style theory for display map categories, where we show that 'quasifibred' versions of dependent products and universes suffice to construct their standard counterparts.

To support the logic required for dependent products in the first construction, we propose a new semantic notion of finite sum for dependent types, generalizing finitely-complete extensive categories. The second avoids extensivity assumptions using biproducts in a Kleisli category for a fibred additive monad.

Additionally, it has introduced some genuinely new syntactic and semantic programming concepts. In this paper we study one such new concept, the ability to extract and manipulate the state of a computation graph. This feature allows the convenient specification of parameterised models by freeing the programmer of the bureaucracy of parameter management, while still permitting the use of generic, model-independent, search and optimisation algorithms.

We study this new language feature, which we call 'graph abstraction' in the context of the call-by-value lambda calculus, using the recently developed Dynamic Geometry of Interaction formalism. We give a simple type system guaranteeing the safety of graph abstraction, and we also show the safety of critical language properties such as garbage collection and the beta law.

The semantic model suggests that the feature could be implemented in a general-purpose functional language reasonably efficiently.

Existing approaches to temporal verification of higher-order functional programs have either sacrificed compositionality in favor of achieving automation, or vice versa. The first contribution is a novel type-and-effect system capable of expressing dependent temporal effects, which are fixpoint logic predicates on event sequences and program values, extending beyond the non-dependent temporal effects used in recent proposals.

Temporal effects facilitate compositional reasoning whereby the temporal behavior of program parts is summarized as effects and combined to form those of the larger parts. As a second contribution, we show that type checking and typability for the type system can be reduced to solving first-order fixpoint logic constraints.

In the first part of the talk, we show how deep learning over programs is used to tackle tasks like code completion, code summarization, and captioning. We describe a general path-based representation of source code that can be used across programming languages and learning tasks, and discuss how this representation enables different learning algorithms.

In the second part, we describe techniques for extracting interpretable representations from deep models, shedding light on what has actually been learned in various tasks.

Many complex applications involving simulation and data analysis are the domain of High-Performance Computing. High data volumes and performance requirements have pressed developers in the parallel-computing community to deliver scalable, accurate and, most importantly, bug-free solutions.

Message Passing (MP) is a prominent programming model via which the nodes of a distributed system communicate. Writing correct and bug-free parallel programs is hard because the participating entities interact in non-deterministic ways that are difficult to anticipate beforehand. Programmers have to predict messaging patterns, perform data marshaling and compute locations for coordination in order to design correct and efficient programs. Unfortunately, there is a shortage of verification and formal-method techniques that can guarantee the development of correct solutions.
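
As a minimal illustration of the kind of non-determinism at play, here is a sketch using Python's mpi4py bindings (the file name and message payloads are our own): with wildcard receives, the order in which the root observes the workers' messages can vary from run to run.

    # run with: mpiexec -n 3 python wildcard_recv.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        # Wildcard receives: the arrival order of the workers' messages is not fixed,
        # which is exactly the kind of behaviour that is hard to anticipate and verify.
        for _ in range(comm.Get_size() - 1):
            msg = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG)
            print("root received:", msg)
    else:
        comm.send("hello from rank %d" % rank, dest=0, tag=rank)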

For some years now, neural networks have been the most powerful tool for perception tasks, especially in image processing, and their superior performance in these tasks has sparked the desire to use them in safety-critical systems. However, verifying the safety of systems that use neural networks remains a challenge, because neural networks raise certain dependability concerns, such as adversarial inputs. Resulting from this need, the research topic of formal verification of neural networks emerged.

We identify some of the main challenges of this field and discuss how to address them.

Requirements are informal and semi-formal descriptions of the expected behavior of a system. However, manual checks are error-prone and time-consuming. With the increasing complexity of cyber-physical systems and the need to operate in safety- and security-critical environments, it has become essential to automate the consistency check of requirements and to build artifacts that help system engineers in the design process.

First-order resolution has been used for type inference for many years, including in Hindley-Milner type inference, type-classes, and constrained data types, to name but a few. Dependent types are a new trend in functional languages. In this paper, we show that proof-relevant first-order resolution can play an important role in automating type inference and term synthesis for dependently typed languages. We propose a calculus that translates type inference and term synthesis problems in a dependently typed language to a logic program and a goal in the proof-relevant first-order Horn clause logic.

The computed answer substitution and proof term then provide a solution to the given type inference and term synthesis problem. We prove the decidability and soundness of our method.

We propose an interpretation of first-order answer set programming (FOASP) in terms of intuitionistic proof theory. Our construction reveals a close similarity between constructive provability and stable entailment, or equivalently, between the construction of an answer set and an intuitionistic refutation.

We address the problem of verifying the satisfiability of Constrained Horn Clauses (CHCs) based on theories of inductively defined data structures, such as lists and trees.

We propose a transformation technique whose objective is the removal of these data structures from CHCs, hence reducing their satisfiability to a satisfiability problem for CHCs on integers and booleans. We propose a transformation algorithm and identify a class of clauses where it always succeeds. We also consider an extension of that algorithm, which combines clause transformation with reasoning on integer constraints.

Via an experimental evaluation we show that our technique greatly improves the effectiveness of applying the Z3 solver to CHCs.

Recent work on the practical aspects of the modal logic S5 satisfiability problem showed that a SAT-based approach outperforms other existing approaches.

In this work, we go one step further and study the related minimal S5 satisfiability problem (MinS5-SAT): the problem of finding an S5 model, i.e. a Kripke structure, with the smallest number of worlds.


Finding a small S5 model is crucial as soon as the model is to be presented to a user, for instance displayed on a screen. SAT-based approaches tend to produce S5 models with a large number of worlds, hence the need to minimize them. That optimization problem can obviously be solved as a pseudo-Boolean optimization problem. We show in this paper that it is also equivalent to the extraction of a maximal satisfiable set (MSS). We show that a new incremental, SAT-based approach can be proposed by taking into account the equivalence relation between the possible worlds of S5 models.

That specialized approach showed the best performance in our experiments, conducted on a wide range of benchmarks from the modal logic community and against a wide range of pseudo-Boolean and MaxSAT solvers. Our results demonstrate once again that domain knowledge is key to building efficient SAT-based tools.

In this paper, we describe a method for solving some open problems in design theory based on SAT solvers. Modern SAT solvers are efficient and can produce unsatisfiability proofs.

However, state-of-the-art SAT solvers cannot solve so-called large set of idempotent quasigroups problems. Two idempotent quasigroups over the same set of elements are said to be disjoint if, at every position other than the main diagonal, the two quasigroups hold different elements. We use a finite model generator to help the SAT solver avoid symmetric search spaces, taking advantage of both first-order logic and SAT techniques.
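
To pin down the definitions just quoted, here is a small Python sketch (purely illustrative; the paper encodes these conditions into SAT and first-order model generation rather than checking tables directly):

    def is_idempotent_quasigroup(q):
        """q is an n x n table over 0..n-1: every row and column is a permutation
        (a Latin square) and the diagonal satisfies q[i][i] == i (idempotence)."""
        n = len(q)
        rows_ok = all(sorted(row) == list(range(n)) for row in q)
        cols_ok = all(sorted(q[i][j] for i in range(n)) == list(range(n)) for j in range(n))
        return rows_ok and cols_ok and all(q[i][i] == i for i in range(n))

    def disjoint(q1, q2):
        """Disjointness: away from the main diagonal, the two tables never agree."""
        n = len(q1)
        return all(q1[i][j] != q2[i][j]
                   for i in range(n) for j in range(n) if i != j)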

Furthermore, we use an incremental search strategy to find a maximum number of disjoint idempotent quasigroups, and thus decide the non-existence of large sets. The experimental results show that our method is highly efficient. The use of symmetry breaking is crucial to allow us to solve some instances in reasonable time.

Constrained counting is important in domains ranging from artificial intelligence to software analysis.

There are already a few approaches for counting models over various types of constraints. Recently, hashing-based approaches have achieved success but still rely on solution enumeration. In this paper, a new probabilistic approximate model counter is proposed, which is also a hashing-based universal framework, but with only satisfiability queries. A dynamic stopping criterion for the new algorithm is presented, which has not yet been studied in previous work on hashing-based approaches.
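
The hashing idea behind such counters can be sketched in a few lines of Python (only the basic intuition, on a brute-force "solver" for tiny formulas; the proposed counter works with real satisfiability queries and a dynamic stopping criterion): each random XOR constraint roughly halves the solution set, so the number of constraints needed to eliminate all solutions estimates the logarithm of the model count.

    import random
    from itertools import product

    def models(clauses, n):
        """Brute-force models of a CNF over variables 1..n (tiny instances only)."""
        return [bits for bits in product([False, True], repeat=n)
                if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)]

    def xor_holds(bits, xor):
        vs, parity = xor                 # parity constraint over a subset of variables
        return sum(bits[v - 1] for v in vs) % 2 == parity

    def approx_count(clauses, n, rng=random.Random(0)):
        """Add random XOR constraints until no model survives; if that happens after
        m constraints, report 2**(m-1) as the estimate of the model count."""
        sols = models(clauses, n)
        if not sols:
            return 0
        xors = []
        while any(all(xor_holds(s, x) for x in xors) for s in sols):
            vs = [v for v in range(1, n + 1) if rng.random() < 0.5] or [1]
            xors.append((vs, rng.randint(0, 1)))
        return 2 ** max(len(xors) - 1, 0)

    clauses = [[1, 2], [-1, 3]]          # (x1 or x2) and (not x1 or x3): exactly 4 models
    print(len(models(clauses, 3)), approx_count(clauses, 3))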

Although the new algorithm lacks a theoretical guarantee, it works well in practice. Empirical evaluation over benchmarks on propositional logic formulas and SMT(BV) formulas shows that the approach is promising.

In order to engage them with any findings, a conversation between ourselves and stakeholders within EMVCo sub-groups has been established through formal simulation of specific protocol runs using VDM-SL.

The results have been productive and substantial: a number of corrections were suggested, and most importantly a large number of hidden assumptions about types, APIs, and other system parts have been carefully documented. Our aim, which we have achieved, is to influence the specification process prior to public release. The work unravelled many technical issues in terms of design decisions, identified tool bugs, and exposed the limits of Overture as a tool for industrial use at this scale.

We hope to discuss these issues and make some useful suggestions. This session focuses on methodology and perspectives on the use of the Overture tools in practice.

In this paper we would like to demonstrate that it can actually make sense to move in the opposite direction. We present a case study where a requirement change late in the project made the distribution and concurrency aspects unnecessary. The advantage of this transformation is to reduce complexity and prepare the model for a combined commercial and research setting.

The cloud is quickly becoming the principal means by which software is delivered into the hands of users. This has not only changed the shipping mechanism, but the whole process by which software is developed. The application of lean manufacturing principles to software engineering, and the growth of continuous integration and delivery, have contributed to the end-to-end automation of the development lifecycle.

Gone are the days of quarterly releases of monolithic systems; cloud-based software as a service is formed of hundreds or even thousands of microservices, with new versions available to the end user on a daily basis. If formal methods are to be relevant in the world of cloud computing, we must be able to apply the same principles, enabling easy componentization of specifications and the integration of the processes around those specifications into the fully mechanized process.

In this paper we present tools that enable VDM-SL specifications to be constructed, tested and documented in the same way as their implementation through the use of a VDM Gradle plugin. By taking advantage of existing binary repository systems we will show that known dependency resolution instruments can be used to facilitate the breakdown of specifications and enable the easy re-use of foundational components.

We also suggest that the deployment of those components to central repositories could reduce the learning curve of formal methods and concentrate efforts on the innovative. Furthermore, we propose a number of additional tools and integrations that we believe could increase the use of VDM-SL in the development of cloud software.

We leverage reinforcement learning (RL) to speed up numerical program analysis. The key insight is to establish a correspondence between concepts in RL and program analysis. For instance, a state in RL maps to an abstract program state in the analysis, an action maps to an abstract transformer, and at every state we have a set of sound transformers (actions) that represent different trade-offs between precision and performance.

At each iteration, the agent (the analysis, in our case) uses a policy learned offline by RL to decide on the transformer which minimizes the loss of precision at fixpoint while increasing analysis performance. Our approach leverages the idea of online decomposition, applicable to popular numerical abstract domains, to define a space of approximate transformers with varying degrees of precision and performance.

Using a suitably designed set of features that capture key properties of both abstract program states and available actions, we then apply Q-learning with linear function approximation to compute an optimized context-sensitive policy that chooses transformers during analysis.
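
For readers unfamiliar with the technique, the following is a generic sketch of Q-learning with linear function approximation in Python (the reset/step environment protocol and the featurize function are hypothetical placeholders, not the paper's API): in the paper's instantiation, a state would be an abstract program state, an action a candidate approximate transformer, and the reward would trade precision against analysis cost.

    import numpy as np

    def q_learning_linear(env, featurize, n_actions, episodes=500,
                          alpha=0.05, gamma=0.9, eps=0.1, seed=0):
        """Q(s, a) is approximated by the dot product w[a] . phi(s)."""
        rng = np.random.default_rng(seed)
        w = np.zeros((n_actions, len(featurize(env.reset()))))  # one weight vector per action

        for _ in range(episodes):
            s, done = env.reset(), False
            while not done:
                phi = featurize(s)
                q = w @ phi                                      # Q-values of all actions in s
                a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(q))
                s2, r, done = env.step(a)                        # assumed (state, reward, done) protocol
                target = r if done else r + gamma * float(np.max(w @ featurize(s2)))
                w[a] += alpha * (target - q[a]) * phi            # TD update on the chosen action only
                s = s2
        return w                                                 # greedy policy: argmax over a of w[a] . phi(s)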

We implemented our approach for the notoriously expensive Polyhedra domain and evaluated it on a set of Linux device drivers that are expensive to analyze.

We present an alternative Double Description representation for the domain of NNC (not necessarily closed) polyhedra, together with the corresponding Chernikova-like conversion procedure. The representation uses no slack variables at all and provides a solution to a few technical issues caused by the encoding of an NNC polyhedron as a closed polyhedron in a higher-dimensional space.

A preliminary experimental evaluation shows that the new conversion algorithm is able to achieve significant efficiency improvements.

Deductive verification of software has not yet found its way into industry, as complexity and scalability issues require highly specialized experts. The long-term perspective is, however, to develop verification tools aiding industrial software developers to find bugs or bottlenecks in software systems faster and more easily.

The KeY project constitutes a framework for specifying and verifying software systems, aiming at making formal verification tools applicable for mainstream software development. To help the developers of KeY, its users, and the deductive verification community, we summarize our experiences with KeY 2. While we describe how we bridged informal and formal specification, we also exhibit accompanying challenges that we encountered.

Our experiences are that (a) in principle, deductive verification for API-like code bases is feasible, but requires high expertise, (b) developing formal specifications for existing code bases is still notoriously hard, and (c) the under-specification of certain language constructs in Java is challenging for tool builders. Our initial effort in specifying parts of OpenJDK 6 constitutes a stepping stone towards a case study for future research.

Given a relational specification between Boolean inputs and outputs, the goal of Boolean functional synthesis is to synthesize each output as a function of the inputs such that the specification is met. In this paper, we first show that unless some hard conjectures in complexity theory are falsified, Boolean functional synthesis must necessarily generate exponential-sized Skolem functions, thereby requiring exponential time, in the worst-case. Given this inherent hardness, what does one do to solve the problem? We present a two-phase algorithm for Boolean functional synthesis, where the first phase is efficient both in terms of time and sizes of synthesized functions, and solves an overwhelming majority of benchmarks.
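
As a point of reference, the problem statement for a single Boolean output can be phrased by brute force in a few lines of Python (the enumeration is exponential in the number of inputs, which is precisely the blow-up the two-phase algorithm is designed to sidestep; the relation below is our own toy example):

    from itertools import product

    def synthesize_output(relation, n_inputs):
        """Return a truth table F such that, whenever some y satisfies relation(xs, y),
        the synthesized value F[xs] satisfies it as well."""
        table = {}
        for xs in product([False, True], repeat=n_inputs):
            table[xs] = next((y for y in (False, True) if relation(xs, y)), False)
        return table

    # toy relational spec: the output must be the XOR of the two inputs
    spec = lambda xs, y: y == (xs[0] != xs[1])
    print(synthesize_output(spec, 2))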

To explain this surprisingly good performance, we provide a sufficient condition under which the first phase must produce correct answers. When this condition fails, the second phase builds upon the result of the first phase, possibly requiring exponential time and generating exponential-sized functions in the worst case.

Detailed experimental evaluation shows our algorithm to perform better than state-of-the-art techniques for the vast majority of benchmarks.

Program synthesis is the mechanized construction of software. One of the main difficulties is the efficient exploration of the very large solution space, and tools often require a user-provided syntactic restriction of the search space.


We propose a new approach to program synthesis that combines the strengths of a counterexample-guided inductive synthesiser with those of a theory solver, exploring the solution space more efficiently without relying on user guidance. In this paper, we focus on one particular application of CEGIS(T), namely the synthesis of programs that require non-trivial constants, which is a fundamentally difficult task for state-of-the-art synthesisers.
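
To convey the flavour of the loop, here is a toy, purely enumerative CEGIS sketch in Python for a single unknown constant (our own simplification: in CEGIS(T), the candidate and counterexample searches below would be delegated to an inductive synthesiser and a theory solver rather than brute force):

    def cegis_constant(spec, candidates, verify_domain):
        """Find a constant c with spec(x, c) for every x in verify_domain."""
        counterexamples = []
        while True:
            # Synthesis phase: pick a candidate consistent with the counterexamples so far.
            c = next((c for c in candidates
                      if all(spec(x, c) for x in counterexamples)), None)
            if c is None:
                return None                      # no remaining candidate: give up
            # Verification phase: look for an input on which the candidate fails.
            cex = next((x for x in verify_domain if not spec(x, c)), None)
            if cex is None:
                return c                         # verified on the whole domain
            counterexamples.append(cex)

    # example: find c such that x + c >= 2 * x holds for every x in 0..100
    print(cegis_constant(lambda x, c: x + c >= 2 * x, range(1000), range(101)))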

We present two exemplars, one based on Fourier-Motzkin (FM) variable elimination and one based on first-order satisfiability. We demonstrate the practical value of CEGIS(T) by automatically synthesizing programs for a set of intricate benchmarks.

We study the reactive synthesis problem for hyperproperties given as formulas of the temporal logic HyperLTL. Hyperproperties generalize trace properties in that they can relate multiple execution traces. Typical examples are information-flow policies like noninterference, which stipulate that no sensitive data must leak into the public domain.
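
Schematically, and in simplified notation rather than a formula from the paper, such a noninterference requirement quantifies over pairs of traces: any two executions that always agree on low-security inputs must always agree on low-security outputs,

\[ \forall \pi.\ \forall \pi'.\ \mathbf{G}\,\big(i^{\mathrm{low}}_{\pi} = i^{\mathrm{low}}_{\pi'}\big) \rightarrow \mathbf{G}\,\big(o^{\mathrm{low}}_{\pi} = o^{\mathrm{low}}_{\pi'}\big). \]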



Beyond these fragments, the synthesis problem immediately becomes undecidable. For universal HyperLTL, we present a semi-decision procedure that constructs implementations and counterexamples up to a given bound. We report encouraging experimental results obtained with a prototype implementation on example specifications with hyperproperties like symmetric responses, secrecy, and information flow.

Reactive synthesis is a paradigm for automatically building correct-by-construction systems that interact with an unknown or adversarial environment.

We study how to do reactive synthesis when part of the specification of the system is that its behavior should be random. Randomness can be useful, for example, in a network protocol fuzz tester whose output should be varied, or in a planner for a surveillance robot whose route should be unpredictable. However, existing reactive synthesis techniques do not provide a way to ensure random behavior while maintaining functional correctness. Towards this end, we generalize the recently proposed framework of control improvisation (CI) to add reactivity. The resulting framework of reactive control improvisation provides a natural way to integrate a randomness requirement with the usual functional specifications of reactive synthesis over a finite window.

We theoretically characterize when such problems are realizable, and give a general method for solving them. For specifications given by reachability or safety games or by deterministic finite automata, our method yields a polynomial-time synthesis algorithm. For various other types of specifications including temporal logic formulas, we obtain a polynomial-space algorithm and prove matching PSPACE-hardness results. We show that all of these randomized variants of reactive synthesis are no harder in a complexity-theoretic sense than their non-randomized counterparts.

Proof by coupling is a classical technique for proving properties about pairs of randomized algorithms by carefully relating or coupling two probabilistic executions. In this paper, our goal is to automatically construct such proofs for programs.