
Author Agrawal, Manindra ♦ Akshay, S. ♦ Genest, Blaise ♦ Thiagarajan, P. S.
Source ACM Digital Library
Content type Text
Publisher Association for Computing Machinery (ACM)
File Format PDF
Copyright Year ©2015
Language English
Subject Domain (in DDC) Computer science, information & general works ♦ Data processing & computer science
Subject Keyword LTL logic ♦ Markov chains ♦ Approximate model checking ♦ Discretization
Abstract A finite-state Markov chain $M$ can be regarded as a linear transform operating on the set of probability distributions over its node set. Iterative applications of $M$ to an initial probability distribution $\mu_0$ generate a trajectory of probability distributions; a set of initial distributions thus induces a set of trajectories. It is an interesting and useful task to analyze the dynamics of $M$ as defined by this set of trajectories. The novel idea here is to carry out this task in a $\textit{symbolic}$ framework. Specifically, we discretize the probability value space $[0,1]$ into a finite set of intervals $\mathcal{I} = \{I_1, I_2, \ldots, I_m\}$. A concrete probability distribution $\mu$ over the node set $\{1, 2, \ldots, n\}$ of $M$ is then symbolically represented as $D$, a tuple of intervals drawn from $\mathcal{I}$ whose $i$th component is the interval in which $\mu(i)$ falls. The set $\mathcal{D}$ of discretized distributions is a finite alphabet. Hence, the trajectory generated by repeated applications of $M$ to an initial distribution induces an infinite string over this alphabet. Given a set of initial distributions, the symbolic dynamics of $M$ then consists of a language $L$ of infinite strings over the alphabet $\mathcal{D}$. Our main goal is to verify whether $L$ meets a specification given as a linear-time temporal logic formula $\phi$. In our logic, an atomic proposition asserts that the current probability of a node falls in an interval $I \in \mathcal{I}$. If $L$ is an $\omega$-regular language, one can hope to solve our model-checking problem (does $L \models \phi$?) using standard techniques. However, we show that, in general, this is not the case.
Consequently, we develop the notion of an $\epsilon$-approximation, based on the transient and long-term behaviors of the Markov chain $M$. Briefly, a symbolic trajectory $\xi'$ is an $\epsilon$-approximation of a symbolic trajectory $\xi$ iff (1) $\xi'$ agrees with $\xi$ during its transient phase; and (2) both $\xi$ and $\xi'$ are within an $\epsilon$-neighborhood of each other at all times after the transient phase. Our main results are that one can effectively check whether (i) for each infinite word in $L$, at least one of its $\epsilon$-approximations satisfies the given specification; and (ii) for each infinite word in $L$, all of its $\epsilon$-approximations satisfy the specification. These verification results are strong in that they apply to all finite-state Markov chains.
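The discretization described in the abstract can be sketched concretely: iterate a distribution under the chain's linear transform and map each distribution to a tuple of interval indices, yielding a finite word over the alphabet of discretized distributions. This is a minimal illustrative sketch, not the authors' implementation; the example chain, the partition of $[0,1]$, and all function names below are assumptions.

```python
import numpy as np

def interval_index(p, cuts):
    """Index of the partition interval containing probability p.

    cuts = [c_1, ..., c_{m-1}] splits [0, 1] into m intervals,
    with each cut point belonging to the interval on its right.
    """
    return int(np.searchsorted(cuts, p, side="right"))

def symbolic_trajectory(M, mu0, cuts, steps):
    """Yield the discretized distribution for mu_k = mu0 @ M^k, k = 0..steps-1."""
    mu = np.asarray(mu0, dtype=float)
    for _ in range(steps):
        yield tuple(interval_index(p, cuts) for p in mu)
        mu = mu @ M  # one application of the linear transform M

# Hypothetical 2-state chain with partition I_1 = [0, 0.5), I_2 = [0.5, 1].
M = np.array([[0.9, 0.1],
              [0.2, 0.8]])
mu0 = [1.0, 0.0]
word = list(symbolic_trajectory(M, mu0, cuts=[0.5], steps=5))
```

Here `word` is a finite prefix of the infinite string the abstract associates with the trajectory from `mu0`; for this chain the distribution converges toward its stationary distribution $(2/3, 1/3)$, so every symbol in the prefix is the same tuple of interval indices.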
ISSN 0004-5411
Age Range 18 to 22 years ♦ above 22 years
Educational Use Research
Education Level UG and PG
Learning Resource Type Article
Publisher Date 2015-03-02
Publisher Place New York
e-ISSN 1557-735X
Journal Journal of the ACM (JACM)
Volume Number 62
Issue Number 1
Page Count 34
Starting Page 1
Ending Page 34

