\section{Synchronizing multiple levels of timing}
\label{sec-temp}
The simple approach just described in Section~\ref{sec-verfconcl}
for relating
different granularities of time is not very general because
it depends on the implementation of programming level operations
by fixed microinstruction sequences whose pre-determined lengths
are specified by the definition of \verb"MicroCycles".
In general, programming level operations need not
be implemented at lower levels by fixed sequences.
One case where
it is not possible to define a simple function like \verb"MicroCycles"
to compute the exact length of each instruction cycle
is the operation of \Tamarack\ in fully asynchronous mode,
where delayed handshaking signals
may cause microinstructions to be repeated.
Although there are ways
to partially circumvent this problem
using an oracle function
to `guess' the length of each instruction cycle,
the result may not be completely
general\footnote{
If all functions must be total,
then use of an oracle function to guess the length of each
instruction cycle implies that every instruction cycle
is finite, in particular,
that handshaking sequences always run to completion.
A completely general approach would
not imply that every handshaking sequence terminates,
but instead, require (as we do) that this fact be established by
formal proof.}
or correspond very well to an intuitive view of this
timing relationship.
The term {\it temporal abstraction}
(or {\it temporal projection})
has been used
in verification circles
\cite{Eveking,Dhingra:thesis,Hale:thesis,Herbert:thesis,Melham:calgary86,Moszkowski:thesis,Subra:calgary86}
to denote the idea of relating
different rates of computation in a formal specification.
This idea naturally arises from the more general concept
of viewing computer hardware as a hierarchy of interpretation levels
where a timing relationship between two levels of interpretation
is sometimes called a {\it synchronization scheme}
(or {\it synchronization mechanism}).
The control part of
each level in the interpretation hierarchy can be
viewed as a virtual machine with the generalized form shown
in Figure~\ref{fig-next}.
To paraphrase the description by Anceau in~\cite{Anceau}:
\begin{quotation}
A given level is only allowed to generate a new command
once it has received an end-of-execution signal for the present
command from the levels below.
In turn, it sends a similar signal to the control part
immediately above requesting the next instruction.
The end-of-instruction signal received by the lowest level
of synchronous logic is simply the clock signal of the machine.
The next-instruction signal issued by the microprogramming
level implies the fetching of the next instruction from memory.
\end{quotation}
\begin{figure}
\begin{center}
\input{fig-next}
\caption{Control Part of an Interpreter.}
\label{fig-next}
\end{center}
\end{figure}
In this section,
we describe a formalization which corresponds directly
to the imagery of
Figure~\ref{fig-next}
where a next-instruction signal at the microprogramming
level implies the fetching of the next programming level instruction
from memory.
At this level,
the next-instruction signal is not a physical signal,
but rather, a time-dependent condition on the internal state of
the machine.
In fully synchronous mode,
the end of the instruction cycle occurs when the
value of the microcode program counter returns to zero.
Hence, the next-instruction signal generated by the microprogramming
level is the time-dependent condition expressed by the term,
\begin{quote}
\verb"((val4 rep) o mpc) Eq 0"
\end{quote}
\noindent
where \verb"Eq" is an infix temporal operator for relating
a signal to a particular value in terms of equality.
\begintt
let Eq = new_infix_definition (`Eq`,"$Eq (P:time->*) c = \(\lambda\)t. P t = c");;
\endtt
If an instruction cycle begins at time \verb"t",
then it terminates at time \verb"t+m" if and only if
\verb"t+m" is the first time that the microcode program
counter \verb"mpc" becomes equal to zero since time \verb"t".
This condition is expressed by the term,
\begin{quote}
\verb"Next (((val4 rep) o mpc) Eq 0) (t,t+m)"
\end{quote}
\noindent
where the predicate \verb"Next" is defined as:
\begintt
let Next = new_definition (
`Next`,
"Next g (t1,t2) = t1 < t2 \(\wedge\) (\(\forall\)t. t1 < t \(\wedge\) t < t2 ==> \(\neg\)(g t)) \(\wedge\) (g t2)");;
\endtt
It is possible to show that this condition holds for
each of the fixed microinstruction sequences
that implement programming level operations in fully synchronous mode.
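To make the pointwise reading of \verb"Eq" and \verb"Next" concrete,
the following OCaml sketch (our own illustration, not part of the HOL
sources) models a signal as a function from discrete time to values
and evaluates the \verb"Next" condition on a small example trace.
\begintt
(* Illustrative sketch only: a signal is a function from discrete
   time to values.                                                  *)
type time = int
type 'a signal = time -> 'a

(* (eq p c) t holds exactly when signal p carries value c at time t,
   mirroring the pointwise reading of the infix operator Eq.        *)
let eq (p : 'a signal) (c : 'a) : bool signal = fun t -> p t = c

(* next g (t1,t2) mirrors the predicate Next: t2 is strictly later
   than t1, g is false strictly between t1 and t2, and g holds at t2. *)
let rec none_between g t1 t2 =
  t1 + 1 >= t2 || (not (g (t1 + 1)) && none_between g (t1 + 1) t2)
let next g (t1, t2) = t1 < t2 && none_between g t1 t2 && g t2

(* Example: for a microcode program counter that repeatedly counts
   0,1,2,3,0,1,2,0,... the condition `mpc equals 0' next holds,
   starting from time 0, at time 4.                                 *)
let mpc : int signal = fun t -> [|0;1;2;3;0;1;2;0|].(t mod 8)
let _ = assert (next (eq mpc 0) (0, 4))
\endtt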
In place of the predicate \verb"SynCorrectness1",
a slightly different form of parameterized correctness condition
is used to verify
the implementation of each programming level operation.
\begintt
let SynCorrectness2 = new_definition (
`SynCorrectness2`,
"SynCorrectness2 (rep:^rep_ty) (ircond,opcval,iszerocond) =
\(\forall\)ireq mpc mar pc acc ir rtn arg buf iack mem.
Val3_CASES_ASM (rep:^rep_ty) \(\wedge\)
Val4Word4_ASM rep \(\wedge\)
SynSystem rep (ireq,mpc,mar,pc,acc,ir,rtn,arg,buf,iack,mem)
==>
\(\forall\)t:time.
(((val4 rep) o mpc) t = 0) \(\wedge\)
((ireq t \(\wedge\) \(\neg\)(iack t)) = ircond) \(\wedge\)
(OpcVal rep (mem t,pc t) = opcval) \(\wedge\)
((iszero rep) (acc t) = iszerocond)
==>
\(\exists\)m.
Next (((val4 rep) o mpc) Eq 0) (t,t+m) \(\wedge\)
((mem (t+m),pc (t+m),acc (t+m),rtn (t+m),iack (t+m)) =
NextState rep (ireq t,mem t,pc t,acc t,rtn t,iack t))");;
\endtt
In this version of the correctness condition,
the length of the instruction cycle
is an existentially quantified
variable instead of a value computed by \verb"MicroCycles".
The proofs of correctness for each programming level operation
are almost the same as before
except that we must keep track of the
microcode program counter \verb"mpc" throughout the instruction cycle
and show that it
is never equal to zero before the end of the instruction cycle.
Combining individual correctness results for each programming level
operation and eliminating, as before, the overhead of a parameterized
correctness condition yields the following theorem.
\begintt
|- Val3_CASES_ASM (rep:^rep_ty) \(\wedge\)
Val4Word4_ASM rep \(\wedge\)
SynSystem rep (ireq,mpc,mar,pc,acc,ir,rtn,arg,buf,iack,mem)
==>
\(\forall\)t.
(((val4 rep) o mpc) t = 0)
==>
\(\exists\)m.
Next (((val4 rep) o mpc) Eq 0) (t,t+m) \(\wedge\)
((mem (t+m),pc (t+m),acc (t+m),rtn (t+m),iack (t+m)) =
NextState rep (ireq t,mem t,pc t,acc t,rtn t,iack t))
\endtt
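Read operationally, this theorem says that from any time at which
\verb"mpc" is zero, the machine reaches a next such time with its
visible state transformed according to \verb"NextState".
As a rough illustration only (the functions \verb"g", \verb"state" and
\verb"next_state" below are hypothetical stand-ins, not the HOL terms
themselves), a property of this shape can be checked over a finite
simulation trace in OCaml.
\begintt
(* Illustrative sketch only: over a finite trace, check that whenever
   the next-instruction condition g holds at time t, it holds again
   at some later time t', and the visible state at t' is next_state
   applied to the visible state at t.                               *)
let check_cycles (g : int -> bool) (state : int -> 'a)
                 (next_state : 'a -> 'a) (horizon : int) : bool =
  let rec find_next t =
    if t > horizon then None
    else if g t then Some t
    else find_next (t + 1)
  in
  let rec check t =
    if t >= horizon then true
    else if not (g t) then check (t + 1)
    else
      match find_next (t + 1) with
      | None -> true                (* ran off the trace: no verdict *)
      | Some t' -> state t' = next_state (state t) && check t'
  in
  check 0
\endtt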
\begin{figure}
\begin{center}
\input{fig-temp}
\caption{Abstract and Concrete Time Scales.}
\label{fig-temp}
\end{center}
\end{figure}
The next step is to define a general-purpose function \verb"TimeOf"
which uses a next-instruction signal \verb"g" to construct
a mapping \verb"f" from an abstract time scale to a concrete
time scale.
The relationship between abstract and concrete time is shown
in Figure~\ref{fig-temp}.
The following definition of \verb"TimeOf" is
similar to definitions given in previous work by
Dhingra \cite{Dhingra:thesis},
Herbert \cite{Herbert:thesis}
and Melham \cite{Melham:calgary86}
(the name `\verb"TimeOf"'
is used here for continuity with previous work).
It is defined
in terms of the predicates \verb"First"
and \verb"Next" using primitive recursion and Hilbert's
$\varepsilon$-operator.\footnote{
Our definition of \verb"TimeOf"
is simplified by avoiding the existential quantification found
in corresponding definitions given by
\cite{Dhingra:thesis,Herbert:thesis,Melham:calgary86}.
When the constraint that \verb"g" is `true infinitely often' holds,
this function yields exactly the same mapping from abstract time
to concrete time. When this constraint does not hold,
both versions (with and without existential quantification)
yield an incomplete mapping.}
\begintt
let First = new_definition (
`First`,
"First g t = (\(\forall\)p. p < t ==> \(\neg\)(g p)) \(\wedge\) (g t)");;
let TimeOf = new_prim_rec_definition (
`TimeOf`,
"(TimeOf g 0 = \(\varepsilon\)t. First g t) \(\wedge\)
(TimeOf g (SUC u) = \(\varepsilon\)t. Next g (TimeOf g u,t))");;
\endtt
The first point on the abstract time scale corresponds to the first time
that the next-instruction signal \verb"g"
is true with respect to the concrete time scale.
Subsequent points on the abstract time scale are defined recursively.
The next point after \verb"u" on the abstract time scale,
i.e., \verb"u+1",
corresponds to the next time that the next-instruction signal \verb"g"
becomes true with respect to the concrete time scale.
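Because \verb"First" and \verb"Next" each determine their witness
uniquely, the $\varepsilon$-terms in this definition can be computed,
whenever \verb"g" is true infinitely often, by searching forward for
the first time at which \verb"g" holds.
The following OCaml sketch (again our own illustration) computes the
resulting mapping for a small repeating \verb"mpc" trace.
\begintt
(* Illustrative sketch only: when g holds infinitely often, the
   epsilon-terms in TimeOf can be computed by searching forward for
   the first time at which g holds.                                 *)
let rec search (g : int -> bool) (t : int) : int =
  if g t then t else search g (t + 1)

(* time_of g u is the concrete time of the u-th abstract time point:
   the first time g holds, then the next time it holds, and so on.  *)
let rec time_of (g : int -> bool) (u : int) : int =
  if u = 0 then search g 0
  else search g (time_of g (u - 1) + 1)

(* Example: for the repeating trace 0,1,2,3,0,1,2,0,... the condition
   `mpc equals 0' holds at concrete times 0,4,7,8,...               *)
let mpc t = [|0;1;2;3;0;1;2;0|].(t mod 8)
let _ = assert (List.map (time_of (fun t -> mpc t = 0)) [0; 1; 2; 3]
                = [0; 4; 7; 8])
\endtt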
Applying \verb"TimeOf" to the
next-instruction signal generated at microprogramming level of \Tamarack,
\begin{quote}
\verb"TimeOf (((val4 rep) o mpc) Eq 0)"
\end{quote}
\noindent
yields a mapping from
points on the abstract time scale to points on the
concrete time scale.
This mapping is the synchronization scheme which
relates behaviour at the abstract programming level time scale
to behaviour at the concrete microprogramming level time scale.
As with most definitions that use Hilbert's
$\varepsilon$-operator,
some additional proof infrastructure is needed to
reason about specifications involving \verb"TimeOf".
For instance,
it is necessary to establish the fact that the
next-instruction signal is true
infinitely often \cite{Joyce:tech100,Melham:calgary86}.
Fortunately,
the main result of
this proof infrastructure can be summed up in a compact theorem:
\begintt
|- \(\forall\)g r.
(\(\exists\)t. g t) \(\wedge\) (\(\forall\)t. g t ==> \(\exists\)m. Next g (t,t+m) \(\wedge\) r (t,t+m))
==>
\(\forall\)u. r (TimeOf g u,TimeOf g (u+1))
\endtt
For any next-instruction signal \verb"g" and next-state relation \verb"r",
this theorem can be used to reduce the problem of establishing,
\begin{quote}
$\forall$\verb"u. r (TimeOf g u,TimeOf g (u+1))"
\end{quote}
\noindent
to a pair of simpler problems:
\begin{quote}
$\exists$\verb"t. g t"
$\forall$\verb"t. g t ==> "$\exists$\verb"m. Next g (t,t+m) "$\wedge$\verb" r (t,t+m)"
\end{quote}
The last step in verifying \Tamarack\
is essentially a question about the synchronization scheme
which relates behaviour at the programming level time scale
to behaviour at the microprogramming level time scale.
The above theorem provides a way to
translate this question
into a pair of simpler problems.\footnote{
This is analogous to how
axioms and rules of Hoare logic
can be used to translate questions about program correctness
into verification conditions \cite{Gordon:banff87}.}
In particular, it is used mid-way through a
backward proof of the top-level correctness statement
to reduce the goal,
\begintt
OK..
"\(\forall\)u.
mem(TimeOf g(u + 1)),pc(TimeOf g(u + 1)),acc(TimeOf g(u + 1)),
rtn(TimeOf g(u + 1)),iack(TimeOf g(u + 1)) =
NextState
rep
(ireq(TimeOf g u),mem(TimeOf g u),pc(TimeOf g u),acc(TimeOf g u),
rtn(TimeOf g u),iack(TimeOf g u))"
[ "((val4 rep) o mpc) Eq 0 = g" ]
[ "Val3_CASES_ASM rep" ]
[ "Val4Word4_ASM rep" ]
[ "SynSystem rep(ireq,mpc,mar,pc,acc,ir,rtn,arg,buf,iack,mem)" ]
[ "\(\exists\)t. ((val4 rep) o mpc)t = 0" ]
() : void
\endtt
\noindent
to the following pair of sub-goals.\footnote{
A generalized form of this backward proof step could be implemented
as a \HOL\ tactic in the same way that axioms and rules of Hoare Logic
can be `inverted' into a tactic \cite{Gordon:banff87}.}
\begintt
"\(\exists\)t. g t"
[ "((val4 rep) o mpc) Eq 0 = g" ]
[ "Val3_CASES_ASM rep" ]
[ "Val4Word4_ASM rep" ]
[ "SynSystem rep(ireq,mpc,mar,pc,acc,ir,rtn,arg,buf,iack,mem)" ]
[ "\(\exists\)t. ((val4 rep) o mpc)t = 0" ]
() : void
\endtt
\begintt
"\(\forall\)t.
g t ==>
(\(\exists\)m.
Next g(t,t+m) \(\wedge\)
(mem(t + m),pc(t + m),acc(t + m),rtn(t + m),iack(t + m) =
NextState rep(ireq t,mem t,pc t,acc t,rtn t,iack t)))"
[ "((val4 rep) o mpc) Eq 0 = g" ]
[ "Val3_CASES_ASM rep" ]
[ "Val4Word4_ASM rep" ]
[ "SynSystem rep(ireq,mpc,mar,pc,acc,ir,rtn,arg,buf,iack,mem)" ]
[ "\(\exists\)t. ((val4 rep) o mpc)t = 0" ]
() : void
\endtt
The first sub-goal is satisfied by assumptions in the assumption list
combined with the definition of \verb"Eq".
The second sub-goal is satisfied by
previously mentioned correctness results for
the microprogramming level.
The satisfaction of these two sub-goals yields
the following correctness theorem for \Tamarack.
\begintt
|- Val3_CASES_ASM (rep:^rep_ty) \(\wedge\)
Val4Word4_ASM rep \(\wedge\)
SynSystem rep (ireq,mpc,mar,pc,acc,ir,rtn,arg,buf,iack,mem) \(\wedge\)
(\(\exists\)t. ((val4 rep) o mpc) t = 0)
==>
let f = TimeOf (((val4 rep) o mpc) Eq 0) in
TamarackBeh rep (ireq o f,mem o f,pc o f,acc o f,rtn o f,iack o f)
\endtt
This is nearly
identical to the previous version of the top-level correctness
statement in Section~\ref{sec-verfconcl}
except for the sub-term which specifies the relationship
between the two time scales.
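In the correctness statement above, composing each concrete signal
with the mapping \verb"f" samples it at the concrete times that
correspond to abstract time points; this sampling is what sub-terms
such as \verb"pc o f" express.
A final OCaml sketch (our own illustration, with \verb"pc" as a
made-up concrete signal) shows the effect of this composition.
\begintt
(* Illustrative sketch only: composing a concrete signal with the
   abstract-to-concrete time mapping (the HOL operator o, written
   here as compose) samples it at instruction boundaries.           *)
let rec search g t = if g t then t else search g (t + 1)
let rec time_of g u =
  if u = 0 then search g 0 else search g (time_of g (u - 1) + 1)

let compose f g = fun x -> f (g x)

(* Hypothetical concrete-level signals for the example. *)
let mpc t = [|0;1;2;3;0;1;2;0|].(t mod 8)
let pc t = t / 4

let f = time_of (fun t -> mpc t = 0)   (* the synchronization mapping *)
let abstract_pc = compose pc f         (* pc as seen at the abstract level *)
let _ = assert (List.map abstract_pc [0; 1; 2; 3] = [0; 1; 1; 2])
\endtt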