Parameter Synthesis in Markov Models

Markov models assume fixed transition probabilities. However, these probabilities are often based on expert estimates or learned from data. It is therefore natural to consider symbolic probabilities in the form of parameters, and to investigate for which parameter values a model satisfies a given specification.
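As a concrete illustration (a hypothetical toy example, not taken from the cited works), consider a pMC where state s0 moves to a target with probability p and to state s1 with probability 1-p, while s1 moves back to s0 with probability p and to a fail state otherwise. The sketch below compares the rational function obtained by symbolic elimination of the Bellman equations with a numerical solve of the same system for concrete parameter values:

```python
# Hypothetical toy pMC: s0 --p--> target, s0 --(1-p)--> s1,
#                       s1 --p--> s0,     s1 --(1-p)--> fail.
# The reachability probabilities satisfy the parametric Bellman equations
#   x0 = p * 1 + (1 - p) * x1
#   x1 = p * x0 + (1 - p) * 0
# Eliminating x1 by hand yields the rational function x0(p) = p / (1 - p + p^2).

def reach_closed_form(p: float) -> float:
    """Reachability of the target from s0, obtained by symbolic elimination."""
    return p / (1 - p + p * p)

def reach_linear_solve(p: float) -> float:
    """Solve the 2x2 system (I - A) x = b numerically for a concrete p."""
    a = [[1.0, -(1.0 - p)],  # row for x0
         [-p, 1.0]]          # row for x1
    b = [p, 0.0]             # one-step probabilities of hitting the target
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]  # Cramer's rule on 2x2
    return (b[0] * a[1][1] - a[0][1] * b[1]) / det

for p in (0.1, 0.5, 0.9):
    assert abs(reach_closed_form(p) - reach_linear_solve(p)) < 1e-12
```

The solution function of a pMC is, in general, exactly such a rational function over the parameters.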

Overview

My thesis [1] covers various aspects of parameter synthesis in Markov models. It also covers aspects of structural synthesis via sketching. We recently published a survey [2] that gives a gentle introduction to the topic.

Feasibility synthesis

In feasibility synthesis, the question is whether the parameters can be substituted with constant values such that the resulting Markov model satisfies a given goal. This setting is particularly relevant for controllable parameters, e.g., when these parameters encode the bias of a coin flip in a randomized algorithm.

Prophesy [3] offers some support for feasibility synthesis, but methods based on convex optimization [4], [5], [6], in particular the SCP-based approach [6], are currently state-of-the-art. Alternatively, one can use gradient-descent methods [7].
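A minimal feasibility search on a hypothetical toy pMC whose reachability probability is reach(p) = p/(1 - p + p^2), a function that happens to be monotone on (0, 1): bisection then finds the smallest parameter value meeting a threshold. This is a didactic sketch only; the cited tools use far more general convex-optimization and gradient-descent techniques.

```python
# Toy solution function of a hypothetical pMC (monotonically increasing on (0,1)).
def reach(p: float) -> float:
    return p / (1 - p + p * p)

def feasible_value(threshold, lo=0.0, hi=1.0, tol=1e-10):
    """Smallest p (up to tol) with reach(p) >= threshold.

    Correct only because reach is monotone on [lo, hi]; returns None
    if the threshold is unreachable on the interval.
    """
    if reach(hi) < threshold:
        return None  # infeasible on this interval
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if reach(mid) >= threshold:
            hi = mid  # mid already feasible: search left for a smaller value
        else:
            lo = mid  # mid infeasible: search right
    return hi

p_star = feasible_value(0.5)
# Analytically, reach(p) = 1/2 at p = (3 - sqrt(5)) / 2.
assert abs(reach(p_star) - 0.5) < 1e-6
```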

Verification

For the verification problem, we want to show that a particular property of the induced model holds for all parameter values. This setting is thus dual to the feasibility problem. While SMT-based approaches can work for small problems [3], an abstraction-based method called parameter lifting is much more successful [8].
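Parameter lifting can be sketched in miniature (an illustration of the idea, not the implementation of [8]). Consider a hypothetical toy pMC: s0 moves to the target with probability p and to s1 with probability 1-p; s1 moves back to s0 with probability p and to fail otherwise. Relaxing p over a region [a, b] yields an MDP in which every state picks its own parameter value locally; since the transition functions here are linear in p, it suffices to consider the endpoints, and value iteration over these choices gives a sound upper bound on the reachability probability over the whole region:

```python
def lifted_upper_bound(a: float, b: float, iterations: int = 2000) -> float:
    """Upper bound on reachability from s0 over all p in [a, b].

    Each state independently picks the endpoint of [a, b] maximizing its
    local Bellman update: the parameter-lifting relaxation for this toy pMC.
    """
    x0 = x1 = 0.0
    for _ in range(iterations):
        x0, x1 = (
            max(p + (1 - p) * x1 for p in (a, b)),  # s0 picks p locally
            max(p * x0 for p in (a, b)),            # s1 picks p locally
        )
    return x0

def reach(p: float) -> float:
    """Exact solution function p / (1 - p + p^2), for comparison."""
    return p / (1 - p + p * p)

ub = lifted_upper_bound(0.3, 0.6)
# Soundness: the bound dominates the reachability for every sampled p in [0.3, 0.6].
assert all(ub >= reach(0.3 + 0.01 * i) - 1e-9 for i in range(31))
```

Because the locally chosen parameter values need not agree across states, the bound can be strictly larger than the true maximum on less benign models; refining the region shrinks this gap.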

The verification problem remains hard. We observe that many systems naturally exhibit monotonic behavior in the parameters, e.g., a network protocol is more reliable with more reliable network channels. We have developed methods to detect monotonicity [9], and to exploit monotonicity to speed up parameter lifting [10].
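A cheap numerical sanity check for monotonicity (a heuristic sketch, not the decision procedures of [9], [10]) on a hypothetical toy pMC whose solution function is reach(p) = p/(1 - p + p^2): sampling finite-difference slopes on a grid is consistent with the function increasing on (0, 1), matching the closed-form derivative (1 - p^2)/(1 - p + p^2)^2 > 0.

```python
# Toy solution function of a hypothetical pMC.
def reach(p: float) -> float:
    return p / (1 - p + p * p)

# Forward finite-difference slopes on a grid over (0, 1).
h = 1e-6
slopes = [(reach(p + h) - reach(p)) / h for p in (0.01 * i for i in range(1, 99))]

# Sampling suggests (but of course does not prove) monotonicity;
# the cited methods establish it symbolically.
assert all(s > 0 for s in slopes)
```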

Distributions over parameters

Rather than taking a controllable or adversarial view on parameters, parameters sometimes describe environment conditions that are themselves uncertain. For example, the parameters in a pMC may describe transition probabilities as a function of the wind speed. The wind speed, and thus the parameter value, is governed by a probability distribution. In [11], [12] we use scenario optimization to determine the probability with which the parameter values induce a pMC that satisfies a declarative constraint. We consider a variation of this technique for handling multiple objectives in [13].

Connection to Policies for POMDPs

Policies that optimize reachability probabilities in partially observable MDPs (POMDPs) may require infinite memory. A natural restriction is to consider finite-memory policies. As the memory can always be considered part of the POMDP state space, finite-memory policies can be reduced to the special case of memoryless policies. A memoryless randomized policy takes, in every state with observation x, each action with a fixed probability p. Thus, the Markov chain induced by a POMDP under an arbitrary memoryless policy can be described by a parametric Markov chain, and the problem of finding a memoryless POMDP policy reduces to finding parameter values in a pMC. The construction can be inverted, and thus the two problems are equally hard [14].
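The reduction can be illustrated on a hypothetical toy POMDP: states sL and sR, each reached initially with probability 1/2, share one observation o. In sL, action l reaches the target and action r self-loops; in sR the roles are swapped. A memoryless randomized policy plays l with probability p in observation o, so the induced Markov chain is a pMC over the single parameter p; the sketch below evaluates it by value iteration.

```python
def induced_reach(p: float, iterations: int = 5000) -> float:
    """Reachability in the induced pMC, from the uniform initial distribution.

    Induced chain: sL --p--> target, sL --(1-p)--> sL (self-loop);
                   sR --(1-p)--> target, sR --p--> sR (self-loop).
    """
    xL = xR = 0.0
    for _ in range(iterations):
        xL, xR = p * 1.0 + (1 - p) * xL, (1 - p) * 1.0 + p * xR
    return 0.5 * (xL + xR)

# Any genuinely randomized policy reaches the target almost surely,
# while both deterministic policies only achieve probability 1/2:
# randomization is essential for memoryless POMDP policies.
assert induced_reach(0.5) > 0.99
assert abs(induced_reach(1.0) - 0.5) < 1e-9
assert abs(induced_reach(0.0) - 0.5) < 1e-9
```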

Complexity

We surveyed the computational complexity in [15], [16].

The feasibility question for pMCs is (assuming nonstrict inequalities) ETR-complete. ETR (the Existential Theory of the Reals) is a complexity class between NP and PSPACE, and the conceptually simplest ETR-complete problem is deciding whether a multivariate polynomial has a real root. ETR membership follows from a straightforward generalization of the Bellman equations to the parametric case, which yields a linear equation system over the field of rational functions. ETR hardness is obtained already for a simple class of acyclic pMCs.
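To make the membership argument concrete, the feasibility question for a hypothetical toy pMC (s0 to target with probability p, to s1 with 1-p; s1 back to s0 with p, to fail with 1-p) against the threshold 1/2 can be written as a sentence in the existential theory of the reals:

```latex
\exists p \, \exists x_0 \, \exists x_1 .\;
0 < p < 1
\;\wedge\; x_0 = p + (1-p)\,x_1
\;\wedge\; x_1 = p\,x_0
\;\wedge\; x_0 \geq \tfrac{1}{2}
```

Eliminating x_0 and x_1 yields the rational function p/(1 - p + p^2), so the sentence reduces to a polynomial inequality in p, exactly the kind of constraint ETR decides.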

The pMDP case is also ETR-complete (independent of the type of inequality used). Hardness follows trivially from the pMC case, whereas membership requires a generalization of the LP formulation for reachability in MDPs.

Despite these hardness results, it is notable that for any fixed number of parameters, there is a polynomial-time algorithm for solving feasibility in pMCs [17].

References

  1. Sebastian Junges, “Parameter synthesis in Markov models,” PhD thesis, RWTH Aachen University, Germany, 2020.
  2. Nils Jansen, Sebastian Junges, and Joost-Pieter Katoen, “Parameter Synthesis in Markov Models: A Gentle Survey,” in Principles of Systems Design, 2022.
    DOI: 10.1007/978-3-031-22337-2_20 arXiv: 2207.06801
  3. Christian Dehnert et al., “PROPhESY: A PRObabilistic ParamEter SYnthesis Tool,” in CAV, 2015.
  4. Murat Cubuktepe et al., “Sequential Convex Programming for the Efficient Verification of Parametric MDPs,” in TACAS, 2017.
  5. Murat Cubuktepe, Nils Jansen, Sebastian Junges, Joost-Pieter Katoen, and Ufuk Topcu, “Synthesis in pMDPs: A Tale of 1001 Parameters,” in ATVA, 2018.
  6. Murat Cubuktepe, Nils Jansen, Sebastian Junges, Joost-Pieter Katoen, and Ufuk Topcu, “Convex Optimization for Parameter Synthesis in MDPs,” IEEE Trans. Autom. Control., 2022.
    DOI: 10.1109/TAC.2021.3133265 arXiv: 2107.00108 supplemental material
  7. Jip Spel, Linus Heck, Sebastian Junges, Joost-Pieter Katoen, and Joshua Moerman, “Gradient-Descent for Randomized Controllers under Partial Observability,” in VMCAI, 2022.
    DOI: 10.1007/978-3-030-94583-1_7 arXiv: 2111.04407 supplemental material
  8. Tim Quatmann, Christian Dehnert, Nils Jansen, Sebastian Junges, and Joost-Pieter Katoen, “Parameter Synthesis for Markov Models: Faster Than Ever,” in ATVA, 2016.
  9. Jip Spel, Sebastian Junges, and Joost-Pieter Katoen, “Are Parametric Markov Chains Monotonic?,” in ATVA, 2019.
    DOI: 10.1007/978-3-030-31784-3_28 supplemental material
  10. Jip Spel, Sebastian Junges, and Joost-Pieter Katoen, “Finding Provably Optimal Markov Chains,” in TACAS, 2021.
    DOI: 10.1007/978-3-030-72016-2_10
  11. Murat Cubuktepe, Nils Jansen, Sebastian Junges, Joost-Pieter Katoen, and Ufuk Topcu, “Scenario-Based Verification of Uncertain MDPs,” in TACAS, 2020.
    DOI: 10.1007/978-3-030-45190-5_16
  12. Thom Badings, Murat Cubuktepe, Nils Jansen, Sebastian Junges, Joost-Pieter Katoen, and Ufuk Topcu, “Scenario-Based Verification of Uncertain Parametric MDPs,” STTT, 2022.
    DOI: 10.1007/s10009-022-00673-z arXiv: 2112.13020 supplemental material
  13. Thom Badings, Nils Jansen, Sebastian Junges, Marielle Stoelinga, and Matthias Volk, “Sampling-Based Verification of CTMCs with Uncertain Rates,” in CAV, 2022.
    DOI: 10.1007/978-3-031-13188-2_2 arXiv: 2205.08300 supplemental material repo
  14. Sebastian Junges et al., “Finite-State Controllers of POMDPs using Parameter Synthesis,” in UAI, 2018.
    URL: http://auai.org/uai2018/proceedings/papers/195.pdf
  15. Tobias Winkler, Sebastian Junges, Guillermo A. Pérez, and Joost-Pieter Katoen, “On the Complexity of Reachability in Parametric Markov Decision Processes,” in CONCUR, 2019.
    DOI: 10.4230/LIPIcs.CONCUR.2019.14
  16. Sebastian Junges, Joost-Pieter Katoen, Guillermo A. Pérez, and Tobias Winkler, “The complexity of reachability in parametric Markov decision processes,” J. Comput. Syst. Sci., 2021.
    DOI: 10.1016/j.jcss.2021.02.006 arXiv: 2009.13128
  17. Christel Baier, Christian Hensel, Lisa Hutschenreiter, Sebastian Junges, Joost-Pieter Katoen, and Joachim Klein, “Parametric Markov chains: PCTL complexity and fraction-free Gaussian elimination,” Inf. Comput., 2020.