Session TESL_Language

Theory Introduction

chapter‹A Gentle Introduction to TESL›

(*<*)
theory Introduction
  imports Main
begin

(*>*)

section ‹Context›
text‹
The design of complex systems involves different formalisms for modeling their different parts or 
aspects. The global model of a system may therefore consist of a coordination of concurrent 
sub-models that use different paradigms such as differential equations, state machines, 
synchronous data-flow networks, 
discrete event models and so on, as illustrated in \autoref{fig:het-timed-system}. 
This raises the interest in architectural composition languages 
that allow for ``bolting the respective sub-models together'', along their various interfaces, and 
specifying the various ways of collaboration and coordination~@{cite "nguyenvan:hal-01583815"}.

We are interested in languages that allow for specifying the timed coordination of subsystems by 
addressing the following conceptual issues:
 ▪ events may occur in different sub-systems at unrelated times, leading to ‹polychronous› systems, 
   which do not necessarily have a common base clock,
 ▪ the behavior of the sub-systems is observed only at a series of discrete instants, and time 
   coordination has to take this ‹discretization› into account,
 ▪ the instants at which a system is observed may be arbitrary and should not change its behavior 
   (‹stuttering invariance›),
 ▪ coordination between subsystems involves causality, so the occurrence of an event may enforce 
   the occurrence of other events, possibly after a certain duration has elapsed or an event has 
   occurred a given number of times,
 ▪ the domain of time (discrete, rational, continuous, …) may be different in the subsystems, 
   leading to ‹polytimed› systems,
 ▪ the time frames of different sub-systems may be related (for instance, time in a GPS satellite 
   and in a GPS receiver on Earth are related although they are not the same).
›

text‹
\begin{figure}[htbp]
 \centering
 \includegraphics{glue.pdf}
 \caption{A Heterogeneous Timed System Model}
 \label{fig:het-timed-system}
\end{figure}›

(*<*)
― ‹Constants and notation to be able to write what we want as Isabelle terms, not as LaTeX maths›
consts dummyInfty    :: 'a  'a
consts dummyTESLSTAR :: 'a
consts dummyFUN      :: 'a set  'b set  'c set›
consts dummyCLOCK    :: 'a set›
consts dummyBOOL     :: ‹bool set› 
consts dummyTIMES    :: 'a set› 
consts dummyLEQ      :: 'a  'a  bool›

notation dummyInfty    ((_) [1000] 999)
notation dummyTESLSTAR (TESL*)
notation dummyFUN      (infixl  100)
notation dummyCLOCK    (𝒦) 
notation dummyBOOL     (𝔹) 
notation dummyTIMES    (𝒯) 
notation dummyLEQ      (infixl 𝒯 100)
(*>*)

text‹
In order to tackle the heterogeneous nature of the subsystems, we abstract their behavior as clocks. 
Each clock models an event, i.e., something that can occur or not at a given time. This time is measured 
in a time frame associated with each clock, and the nature of time (integer, rational, real, or any 
type with a linear order) is specific to each clock. 
When the event associated with a clock occurs, the clock ticks. In order to support any kind of 
behavior for the subsystems, we are only interested in specifying what we can observe at a series 
of discrete instants. There are two constraints on observations: a clock may tick only at an 
observation instant, and the time on any clock cannot decrease from an instant to the next one. 
However, it is always possible to add arbitrary observation instants, which allows for stuttering 
and modular composition of systems. 
As a consequence, the key concept of our setting is the notion of a clock-indexed Kripke model: 
@{term Σ⇧∞ = ℕ ⇒ 𝒦 ⇒ (𝔹 × 𝒯)}, where @{term 𝒦} is an enumerable set of clocks, @{term 𝔹} 
is the set of booleans – used to indicate that a clock ticks at a given instant – and @{term 𝒯} 
is a universal metric time space for which we only assume that it is large enough to contain all 
individual time spaces of clocks and that it is ordered by some linear ordering @{term (≤⇩𝒯)}.
›
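
text‹
  As a purely informal illustration (ours, not part of the formal development), the following
  Python sketch encodes a finite prefix of such a clock-indexed run: an instant maps each clock
  name to a pair (ticks, time), and a check verifies that time never decreases on any clock.
  The encoding and all names are hypothetical and only mirror the definition above.

    # Illustrative sketch: a finite prefix of a run, i.e. a finite approximation
    # of an element of Σ, restricted to finitely many instants and clocks.
    from typing import Dict, List, Tuple

    Instant = Dict[str, Tuple[bool, float]]   # clock name -> (ticks?, time)
    RunPrefix = List[Instant]                 # instant index n -> instant n

    def time_monotonic(run: RunPrefix) -> bool:
        """Time never decreases on any clock from one instant to the next."""
        for n in range(len(run) - 1):
            for clock, (_, t) in run[n].items():
                if run[n + 1][clock][1] < t:
                    return False
        return True

    # A two-clock prefix: 'master' ticks at instant 0, 'slave' at instant 1.
    prefix: RunPrefix = [
        {'master': (True, 0.0), 'slave': (False, 0.0)},
        {'master': (False, 1.0), 'slave': (True, 2.0)},
    ]
    assert time_monotonic(prefix)
›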

text‹
  The elements of @{term Σ} are called runs. A specification language is a set of 
  operators that constrains the set of possible monotonic runs. Specifications are composed by 
  intersecting the denoted run sets of constraint operators.
  Consequently, such specification languages do not limit the number of clocks used to model a 
  system (as long as it is finite) and it is always possible to add clocks to a specification. 
  Moreover, they are ‹compositional› by construction since the composition of specifications 
  consists of the conjunction of their constraints.
›

text‹
  This work provides the following contributions:
  ▪ defining the non-trivial language @{term TESL*} in terms of clock-indexed Kripke models, 
  ▪ proving that this denotational semantics is stuttering invariant,
  ▪ defining an adapted form of symbolic primitives and presenting the set of operational 
    semantic rules,
  ▪ presenting formal proofs for soundness, completeness, and progress of the latter.
›

section‹The TESL Language›
   
text‹
  The TESL language @{cite "BouJacHarPro2014MEMOCODE"} was initially designed to coordinate the
  execution of heterogeneous components during the simulation of a system. We define here a minimal
  kernel of operators that will form the basis of a family of specification languages, including the
  original TESL language, which is described at \url{http://wdi.supelec.fr/software/TESL/}.
›  

subsection‹Instantaneous Causal Operators›
text‹
  TESL has operators to deal with instantaneous causality, i.e., to react to an event occurrence
  in the very same observation instant.
  ▪ ‹c1 implies c2› means that at any instant where ‹c1› ticks, ‹c2› has to tick too.
  ▪ ‹c1 implies not c2› means that at any instant where ‹c1› ticks, ‹c2› cannot tick.
  ▪ ‹c1 kills c2› means that at any instant where ‹c1› ticks, and at any future instant, 
    ‹c2› cannot tick.
›
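
text‹
  The following Python sketch (ours, purely illustrative) checks these three instantaneous
  operators on a finite run prefix encoded, as before, as a list of dictionaries mapping a
  clock name to a pair (ticks, time); names and encoding are hypothetical.

    # Illustrative sketch: instantaneous causal operators on a finite run prefix.
    def ticks(run, n, c):
        return run[n][c][0]

    def implies_ok(run, c1, c2):
        # at any instant where c1 ticks, c2 ticks too
        return all(not ticks(run, n, c1) or ticks(run, n, c2) for n in range(len(run)))

    def implies_not_ok(run, c1, c2):
        # at any instant where c1 ticks, c2 does not tick
        return all(not ticks(run, n, c1) or not ticks(run, n, c2) for n in range(len(run)))

    def kills_ok(run, c1, c2):
        # once c1 has ticked, c2 ticks neither at that instant nor at any later one
        return all(not ticks(run, n, c1)
                   or all(not ticks(run, m, c2) for m in range(n, len(run)))
                   for n in range(len(run)))

    prefix = [{'a': (True, 0.0), 'b': (True, 0.0)},
              {'a': (False, 1.0), 'b': (True, 1.0)}]
    assert implies_ok(prefix, 'a', 'b')      # every tick of 'a' is matched by 'b'
    assert not implies_not_ok(prefix, 'a', 'b')
    assert not kills_ok(prefix, 'a', 'b')    # 'b' ticks at and after the instant where 'a' ticked
›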

subsection‹Temporal Operators›
text‹
  TESL also has chronometric temporal operators that deal with dates and chronometric delays.
  ▪ ‹c sporadic t› means that clock ‹c› must have a tick at time ‹t› on its own time scale.
  ▪ ‹c1 sporadic t on c2› means that clock ‹c1› must have a tick at an instant where the time 
    on ‹c2› is ‹t›.
  ▪ ‹c1 time delayed by d on m implies c2› means that every time clock ‹c1› ticks, ‹c2› must have 
    a tick at the first instant where the time on ‹m› is ‹d› later than it was when ‹c1› had ticked.
    This means that every tick on ‹c1› is followed by a tick on ‹c2› after a delay ‹d› measured
    on the time scale of clock ‹m›.
  ▪ ‹time relation (c1, c2) in R› means that at every instant, the current time on clocks ‹c1›
    and ‹c2› must be in relation ‹R›. By default, the time lines of different clocks are 
    independent. This operator allows us to link two time lines, for instance to model the fact
    that time in a GPS satellite and time in a GPS receiver on Earth are not the same but are 
    related. Time being polymorphic in TESL, this can also be used to model the fact that the
    angular position on the camshaft of an engine moves twice as fast as the angular position 
    on the crankshaft~‹See \url{http://wdi.supelec.fr/software/TESL/GalleryEngine} for more details›. 
    We may consider only linear arithmetic relations to restrict the problem to a domain where 
    the resolution is decidable.›
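
text‹
  Again as an informal illustration (ours), the operators ‹sporadic on› and ‹time relation›
  can be checked on a finite run prefix as follows; the Python encoding and the clock names
  are hypothetical.

    # Illustrative sketch: 'c1 sporadic t on c2' and 'time relation (c1, c2) in R'.
    def sporadic_on_ok(run, c1, t, c2):
        # some instant where c1 ticks and the time on c2 is t
        return any(run[n][c1][0] and run[n][c2][1] == t for n in range(len(run)))

    def time_relation_ok(run, c1, c2, R):
        # at every instant, the current times on c1 and c2 are in relation R
        return all(R(run[n][c1][1], run[n][c2][1]) for n in range(len(run)))

    prefix = [{'cam': (True, 0.0), 'crank': (True, 0.0)},
              {'cam': (True, 1.0), 'crank': (True, 2.0)}]
    # the crankshaft angle advances twice as fast as the camshaft angle
    assert time_relation_ok(prefix, 'cam', 'crank', lambda x, y: y == 2 * x)
    assert sporadic_on_ok(prefix, 'cam', 2.0, 'crank')  # 'cam' ticks when 'crank' time is 2
›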

subsection‹Asynchronous Operators›
text‹
  The last category of TESL operators allows the specification of asynchronous relations between
  event occurrences. They do not specify the precise instants at which ticks have to occur; 
  they only put bounds on the set of instants at which they should occur.
  ▪ ‹c1 weakly precedes c2› means that for each tick on ‹c2›, there must be at least one tick
    on ‹c1› at a previous or at the same instant. This can also be expressed by stating
    that at each instant, the number of ticks since the beginning of the run on ‹c2› must be 
    less than or equal to the number of ticks on ‹c1›.
  ▪ ‹c1 strictly precedes c2› means that for each tick on ‹c2›, there must be at least one tick
    on ‹c1› at a previous instant. This can also be expressed by saying that at each instant, 
    the number of ticks on ‹c2› from the beginning of the run to this instant must be less than 
    or equal to the number of ticks on ‹c1› from the beginning of the run to the previous instant.
›
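
text‹
  The following Python sketch (ours, illustrative only) restates the two precedence operators
  in terms of tick counts on a finite run prefix, using the same hypothetical encoding as above.

    # Illustrative sketch: weak and strict precedence via tick counts.
    def tick_count(run, c, n):            # ticks of c in [0, n]
        return sum(1 for i in range(n + 1) if run[i][c][0])

    def tick_count_strictly(run, c, n):   # ticks of c in [0, n[
        return sum(1 for i in range(n) if run[i][c][0])

    def weakly_precedes_ok(run, c1, c2):
        return all(tick_count(run, c2, n) <= tick_count(run, c1, n)
                   for n in range(len(run)))

    def strictly_precedes_ok(run, c1, c2):
        return all(tick_count(run, c2, n) <= tick_count_strictly(run, c1, n)
                   for n in range(len(run)))

    prefix = [{'req': (True, 0.0), 'ack': (False, 0.0)},
              {'req': (False, 1.0), 'ack': (True, 1.0)}]
    assert weakly_precedes_ok(prefix, 'req', 'ack')
    assert strictly_precedes_ok(prefix, 'req', 'ack')
    assert not weakly_precedes_ok(prefix, 'ack', 'req')
›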


(*<*)
no_notation dummyInfty      ((_) )
no_notation dummyTESLSTAR   (TESL*)
no_notation dummyFUN        (infixl  100)
no_notation dummyCLOCK      (𝒦) 
no_notation dummyBOOL       (𝔹) 
no_notation dummyTIMES      (𝒯) 
no_notation dummyLEQ        (infixl 𝒯 100)


end
(*>*)

Theory TESL

(*chapter‹The Core of the TESL Language: Syntax and Basics›*)
text‹\chapter[Core TESL: Syntax and Basics]{The Core of the TESL Language: Syntax and Basics}›

theory TESL
imports Main

begin

section‹Syntactic Representation›
text‹
  We define here the syntax of TESL specifications.
›

subsection‹Basic elements of a specification›
text‹
  The following items appear in specifications:
  ▪ Clocks, which are identified by a name.
  ▪ Tag constants, which are just constants of a type that denotes the metric time space.
›

datatype     clock         = Clk ‹string›
type_synonym instant_index = ‹nat›

datatype      tag_const =  TConst   (the_tag_const : )         (τcst)


subsection‹Operators for the TESL language›
text‹
  The type of atomic TESL constraints, which can be combined to form specifications.
›
datatype  TESL_atomic =
    SporadicOn       ‹clock›  tag_const›  ‹clock›  (‹_ sporadic _ on _› 55)
  | TagRelation      ‹clock› ‹clock› ( tag_const ×  tag_const)  bool› 
                                                      (time-relation _, _  _› 55)
  | Implies          ‹clock› ‹clock›                  (infixr implies 55)
  | ImpliesNot       ‹clock› ‹clock›                  (infixr implies not 55)
  | TimeDelayedBy    ‹clock›  tag_const› ‹clock› ‹clock› 
                                                      (‹_ time-delayed by _ on _ implies _› 55)
  | WeaklyPrecedes   ‹clock› ‹clock›                  (infixr weakly precedes 55)
  | StrictlyPrecedes ‹clock› ‹clock›                  (infixr strictly precedes 55)
  | Kills            ‹clock› ‹clock›                  (infixr kills 55)

text‹
  A TESL formula is just a list of atomic constraints, with implicit conjunction
  for the semantics.
›
type_synonym  TESL_formula =  TESL_atomic list›

text‹
  We call ‹positive atoms› the atomic constraints that create ticks from nothing.
  Only sporadic constraints are positive in the current version of TESL.
›
fun positive_atom ::  TESL_atomic  bool› where
    positive_atom (_ sporadic _ on _) = True›
  | positive_atom _                   = False›

text‹
  The @{term NoSporadic} function removes sporadic constraints from a TESL formula.
›
abbreviation NoSporadic ::  TESL_formula   TESL_formula›
where 
  NoSporadic f  (List.filter (λfatom. case fatom of
      _ sporadic _ on _  False
    | _  True) f)

subsection‹Field Structure of the Metric Time Space›
text‹
  In order to handle tag relations and delays, tags must belong to a field.
  We show here that this is the case when the type parameter of @{typ  tag_const›} 
  is itself a field.
›
instantiation tag_const ::(field)field
begin
  fun inverse_tag_const
  where ‹inverse (τcst t) = τcst (inverse t)

  fun divide_tag_const 
    where ‹divide (τcst t1) (τcst t2) = τcst (divide t1 t2)

  fun uminus_tag_const
    where ‹uminus (τcst t) = τcst (uminus t)

fun minus_tag_const
  where ‹minus (τcst t1) (τcst t2) = τcst (minus t1 t2)

definition one_tag_const  τcst 1

fun times_tag_const
  where ‹times (τcst t1) (τcst t2) = τcst (times t1 t2)

definition zero_tag_const  τcst 0

fun plus_tag_const
  where ‹plus (τcst t1) (τcst t2) = τcst (plus t1 t2)

instance proof
  text‹Multiplication is associative.›
  fix a::::field tag_const› and b::::field tag_const›
                               and c::::field tag_const›
  obtain u v w where a = τcst u and b = τcst v and c = τcst w
    using tag_const.exhaust by metis
  thus a * b * c = a * (b * c)
    by (simp add: TESL.times_tag_const.simps)
next
  text‹Multiplication is commutative.›
  fix a::::field tag_const› and b::::field tag_const›
  obtain u v where a = τcst u and b = τcst v using tag_const.exhaust by metis
  thus a * b = b * a
    by (simp add: TESL.times_tag_const.simps)
next
  text‹One is neutral for multiplication.›
  fix a::::field tag_const›
  obtain u where a = τcst u using tag_const.exhaust by blast
  thus 1 * a = a
    by (simp add: TESL.times_tag_const.simps one_tag_const_def)
next
  text‹Addition is associative.›
  fix a::::field tag_const› and b::::field tag_const›
                               and c::::field tag_const›
  obtain u v w where a = τcst u and b = τcst v and c = τcst w
    using tag_const.exhaust by metis
  thus a + b + c = a + (b + c)
    by (simp add: TESL.plus_tag_const.simps)
next
  text‹Addition is commutative.›
  fix a::::field tag_const› and b::::field tag_const›
  obtain u v where a = τcst u and b = τcst v using tag_const.exhaust by metis
  thus a + b = b + a
    by (simp add: TESL.plus_tag_const.simps)
next
  text‹Zero is neutral for addition.›
  fix a::::field tag_const›
  obtain u where a = τcst u using tag_const.exhaust by blast
  thus 0 + a = a
    by (simp add: TESL.plus_tag_const.simps zero_tag_const_def)
next
  text‹The sum of an element and its opposite is zero.›
  fix a::::field tag_const›
  obtain u where a = τcst u using tag_const.exhaust by blast
  thus -a + a = 0
    by (simp add: TESL.plus_tag_const.simps
                  TESL.uminus_tag_const.simps
                  zero_tag_const_def)
next
  text‹Subtraction is adding the opposite.›
  fix a::::field tag_const› and b::::field tag_const›
  obtain u v where a = τcst u and b = τcst v using tag_const.exhaust by metis
  thus a - b = a + -b
    by (simp add: TESL.minus_tag_const.simps
                  TESL.plus_tag_const.simps
                  TESL.uminus_tag_const.simps)
next
  text‹Distributive property of multiplication over addition.›
  fix a::::field tag_const› and b::::field tag_const›
                               and c::::field tag_const›
  obtain u v w where a = τcst u and b = τcst v and c = τcst w
    using tag_const.exhaust by metis
  thus (a + b) * c = a * c + b * c
    by (simp add: TESL.plus_tag_const.simps
                  TESL.times_tag_const.simps
                  ring_class.ring_distribs(2))
next
  text‹The neutral elements are distinct.›
  show (0::(::field tag_const))  1
    by (simp add: one_tag_const_def zero_tag_const_def)
next
  text‹The product of an element and its inverse is 1.›
  fix a::::field tag_const› assume h:a  0
  obtain u where a = τcst u using tag_const.exhaust by blast
  moreover with h have u  0 by (simp add: zero_tag_const_def)
  ultimately show ‹inverse a * a = 1
    by (simp add: TESL.inverse_tag_const.simps
                  TESL.times_tag_const.simps
                  one_tag_const_def)
next
  text‹Dividing is multiplying by the inverse.›
  fix a::::field tag_const› and b::::field tag_const›
  obtain u v where a = τcst u and b = τcst v using tag_const.exhaust by metis
  thus a div b = a * inverse b
    by (simp add: TESL.divide_tag_const.simps
                  TESL.inverse_tag_const.simps
                  TESL.times_tag_const.simps
                  divide_inverse)
next
  text‹Zero is its own inverse.›
  show ‹inverse (0::(::field tag_const)) = 0
    by (simp add: TESL.inverse_tag_const.simps zero_tag_const_def)
qed

end

text‹
  For comparing dates (which are represented by tags) on clocks, we need an order on tags.
›

instantiation tag_const :: (order)order
begin
  inductive less_eq_tag_const :: 'a tag_const  'a tag_const  bool›
  where
    Int_less_eq[simp]:      n  m  (TConst n)  (TConst m)

  definition less_tag: (x::'a tag_const) < y  (x  y)  (x  y)

  instance proof
    show x y :: 'a tag_const. (x < y) = (x  y  ¬ y  x)
      using less_eq_tag_const.simps less_tag by auto
  next
    fix x::'a tag_const›
    from tag_const.exhaust obtain x0::'a where x = TConst x0 by blast
    with Int_less_eq show x  x by simp
  next
    show x y z  :: 'a tag_const. x  y  y  z  x  z
      using less_eq_tag_const.simps by auto
  next
    show x y  :: 'a tag_const. x  y  y  x  x = y
      using less_eq_tag_const.simps by auto
  qed

end

text‹
  To ensure that time never flows backwards, we need a total order on tags.
›
instantiation tag_const :: (linorder)linorder
begin
  instance proof
    fix x::'a tag_const› and y::'a tag_const›
    from tag_const.exhaust obtain x0::'a where x = TConst x0 by blast
    moreover from tag_const.exhaust obtain y0::'a where y = TConst y0 by blast
    ultimately show x  y  y  x using less_eq_tag_const.simps by fastforce
  qed

end

end

Theory Run

section ‹ Defining Runs ›

theory Run
imports TESL
      
begin
text ‹
  Runs are sequences of instants, and each instant maps a clock to a pair 
  @{term (h, t)} where @{term h} indicates whether the clock ticks or not, 
  and @{term t} is the current time on this clock.
  The first element of the pair is called the ‹hamlet› of the clock (to tick or 
  not to tick), and the second element is called the ‹time›.
›

abbreviation hamlet where hamlet  fst›
abbreviation time   where time  snd›

type_synonym  instant = ‹clock  (bool ×  tag_const)

text‹
  Runs have the additional constraint that time cannot go backwards on any clock
  in the sequence of instants.
  Therefore, for any clock, the time projection of a run is monotonic.
›
typedef (overloaded) ::linordered_field run =
  { ρ::nat   instant. c. mono (λn. time (ρ n c)) }
proof
  show (λ_ _. (True, τcst 0))  {ρ. c. mono (λn. time (ρ n c))} 
    unfolding mono_def by blast
qed

lemma Abs_run_inverse_rewrite:
  c. mono (λn. time (ρ n c))  Rep_run (Abs_run ρ) = ρ
by (simp add: Abs_run_inverse)

text ‹
  A ‹dense› run is a run in which something happens (at least one clock ticks) 
  at every instant.
›
definition dense_run ρ  (n. c. hamlet ((Rep_run ρ) n c))

text@{term run_tick_count ρ K n} counts the number of ticks on clock @{term K} 
  in the interval ‹[0, n]› of run @{term ρ}.
›
fun run_tick_count :: (::linordered_field) run  clock  nat  nat›
  (# _ _ _›)
where
  (# ρ K 0)       = (if hamlet ((Rep_run ρ) 0 K)
                       then 1
                       else 0)
| (# ρ K (Suc n)) = (if hamlet ((Rep_run ρ) (Suc n) K)
                       then 1 + (# ρ K n)
                       else (# ρ K n))

text@{term run_tick_count_strictly ρ K n} counts the number of ticks on
  clock @{term K} in the interval ‹[0, n[› of run @{term ρ}.
›
fun run_tick_count_strictly :: (::linordered_field) run  clock  nat  nat›
  (#< _ _ _›)
where
  (#< ρ K 0)       = 0
| (#< ρ K (Suc n)) = # ρ K n
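
text‹
  As an informal cross-check (ours, not part of the formal development), the two counting
  functions can be transcribed on a finite run prefix encoded as a list of dictionaries
  mapping a clock name to a pair (ticks, time):

    # Illustrative sketch: run_tick_count and run_tick_count_strictly on a prefix.
    def run_tick_count(run, K, n):            # number of ticks of K in [0, n]
        if n == 0:
            return 1 if run[0][K][0] else 0
        return run_tick_count(run, K, n - 1) + (1 if run[n][K][0] else 0)

    def run_tick_count_strictly(run, K, n):   # number of ticks of K in [0, n[
        return 0 if n == 0 else run_tick_count(run, K, n - 1)

    prefix = [{'c': (True, 0.0)}, {'c': (False, 1.0)}, {'c': (True, 2.0)}]
    assert [run_tick_count(prefix, 'c', n) for n in range(3)] == [1, 1, 2]
    assert [run_tick_count_strictly(prefix, 'c', n) for n in range(3)] == [0, 1, 1]
›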

text@{term first_time ρ K n τ} tells whether instant @{term n} in run @{termρ}
  is the first one where the time on clock @{term K} reaches @{term τ}.
›
definition first_time :: 'a::linordered_field run  clock  nat  'a tag_const
                           bool›
where
  first_time ρ K n τ  (time ((Rep_run ρ) n K) = τ)
                       (n'. n' < n  time ((Rep_run ρ) n' K) = τ)
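
text‹
  Informally (a Python sketch of ours, on the same hypothetical finite-prefix encoding),
  the predicate states that instant ‹n› is the earliest instant at which clock ‹K›
  displays time ‹τ›:

    # Illustrative sketch: the first_time predicate on a finite run prefix.
    def time_of(run, n, K):
        return run[n][K][1]

    def first_time(run, K, n, tau):
        # instant n is the first one at which the time on K reaches tau
        return (time_of(run, n, K) == tau
                and all(time_of(run, m, K) != tau for m in range(n)))

    prefix = [{'m': (False, 0.0)}, {'m': (True, 1.0)}, {'m': (False, 1.0)}]
    assert first_time(prefix, 'm', 1, 1.0)       # time 1.0 is first reached at instant 1
    assert not first_time(prefix, 'm', 2, 1.0)   # instant 2 is not the first such instant
›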

text‹
  The time on a clock is necessarily less than @{term τ} before the first instant
  at which it reaches @{term τ}.
›
lemma before_first_time:
  assumes ‹first_time ρ K n τ
      and m < n
    shows ‹time ((Rep_run ρ) m K) < τ
proof -
  have ‹mono (λn. time (Rep_run ρ n K)) using Rep_run by blast
  moreover from assms(2) have m  n using less_imp_le by simp
  moreover have ‹mono (λn. time (Rep_run ρ n K)) using Rep_run by blast
  ultimately have  ‹time ((Rep_run ρ) m K)  time ((Rep_run ρ) n K)
    by (simp add:mono_def)
  moreover from assms(1) have ‹time ((Rep_run ρ) n K) = τ
    using first_time_def by blast
  moreover from assms have ‹time ((Rep_run ρ) m K)  τ
    using first_time_def by blast
  ultimately show ?thesis by simp
qed

text‹
  This leads to an alternate definition of @{term ‹first_time›}:
›
lemma alt_first_time_def:
  assumes m < n. time ((Rep_run ρ) m K) < τ
      and ‹time ((Rep_run ρ) n K) = τ
    shows ‹first_time ρ K n τ
proof -
  from assms(1) have m < n. time ((Rep_run ρ) m K)  τ
    by (simp add: less_le)
  with assms(2) show ?thesis by (simp add: first_time_def)
qed

end

Theory Denotational

chapter ‹Denotational Semantics›

theory Denotational
imports
    TESL
    Run

begin
text‹
  The denotational semantics maps TESL formulae to sets of satisfying runs.
  First, we define the semantics of atomic formulae (the basic constructs of the 
  TESL language); then we define the semantics of compound formulae as the
  intersection of the semantics of their components: a run must satisfy all
  the individual formulae of a compound formula.
›
section ‹Denotational interpretation for atomic TESL formulae›
(*<*)
consts dummyT0 :: tag_const›
consts dummyDeltaT :: tag_const›

notation dummyT0     (t0)
notation dummyDeltaT (δt)
(*>*)

fun TESL_interpretation_atomic
    :: (::linordered_field) TESL_atomic   run set› ( _ TESL)
where
  ― ‹@{term K1 sporadic τ on K2} means that @{term K1} should tick at an 
      instant where the time on @{term K2} is @{term τ}.›
     K1 sporadic τ on K2 TESL =
        {ρ. n::nat. hamlet ((Rep_run ρ) n K1)  time ((Rep_run ρ) n K2) = τ}
  ― ‹@{term time-relation K1, K2  R} means that at each instant, the time 
      on @{term K1} and the time on @{term K2} are in relation~@{term R}.›
  |  time-relation K1, K2  R TESL =
        {ρ. n::nat. R (time ((Rep_run ρ) n K1), time ((Rep_run ρ) n K2))}
  ― ‹@{term master implies slave} means that at each instant at which 
      @{term master} ticks, @{term slave} also ticks.›
  |  master implies slave TESL =
        {ρ. n::nat. hamlet ((Rep_run ρ) n master)  hamlet ((Rep_run ρ) n slave)}
  ― ‹@{term master implies not slave} means that at each instant at which 
      @{term master} ticks, @{term slave} does not tick.›
  |  master implies not slave TESL =
        {ρ. n::nat. hamlet ((Rep_run ρ) n master)  ¬hamlet ((Rep_run ρ) n slave)}
  ― ‹@{term master time-delayed by δτ on measuring implies slave} means that 
      at each instant at which  @{term master} ticks, @{term slave} will
      tick after a delay @{term δτ} measured on the time scale 
      of @{term measuring}.›
  |  master time-delayed by δτ on measuring implies slave TESL =
    ― ‹
      When master ticks, let's call @{term t0} the current date on measuring. Then, 
      at the first instant when the date on measuring is @{term t0+δt}, 
      slave has to tick.
    ›
        {ρ. n. hamlet ((Rep_run ρ) n master) 
                 (let measured_time = time ((Rep_run ρ) n measuring) in
                  m  n.  first_time ρ measuring m (measured_time + δτ)
                             hamlet ((Rep_run ρ) m slave)
                 )
        }
  ― ‹@{term K1 weakly precedes K2} means that each tick on @{term K2}
        must be preceded by or coincide with at least one tick on @{term K1}.
        Therefore, at each instant @{term n}, the number of ticks on @{term K2} 
        must be less or equal to the number of ticks on @{term K1}.›
  |  K1 weakly precedes K2 TESL =
        {ρ. n::nat. (run_tick_count ρ K2 n)  (run_tick_count ρ K1 n)}
  ― ‹@{term K1 strictly precedes K2} means that each tick on @{term K2}
        must be preceded by at least one tick on @{term K1} at a previous instant.
        Therefore, at each instant n, the number of ticks on @{term K2}
        must be less or equal to the number of ticks on @{term K1} 
        at instant n - 1.›
  |  K1 strictly precedes K2 TESL =
        {ρ. n::nat. (run_tick_count ρ K2 n)  (run_tick_count_strictly ρ K1 n)}
  ― ‹@{term K1 kills K2} means that when @{term K1} ticks, @{term K2}
        cannot tick and is not allowed to tick at any further instant.›
  |  K1 kills K2 TESL =
        {ρ. n::nat. hamlet ((Rep_run ρ) n K1)
                         (mn. ¬ hamlet ((Rep_run ρ) m K2))}
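
text‹
  To give an intuition for the most intricate case, the time-delayed implication, here is an
  informal Python sketch (ours) that flags definite violations on a finite run prefix; since
  only a prefix is available, a delay whose target date lies beyond the prefix is not reported.
  The encoding and the clock names are hypothetical.

    # Illustrative sketch: the time-delayed implication on a finite run prefix.
    def ticks(run, n, c):   return run[n][c][0]
    def time_of(run, n, c): return run[n][c][1]

    def first_time(run, K, m, tau):
        return (time_of(run, m, K) == tau
                and all(time_of(run, i, K) != tau for i in range(m)))

    def time_delayed_ok(run, master, delta, measuring, slave):
        # whenever master ticks at n, slave must tick at the first instant m >= n
        # where the time on measuring has increased by delta
        for n in range(len(run)):
            if ticks(run, n, master):
                target = time_of(run, n, measuring) + delta
                for m in range(n, len(run)):
                    if first_time(run, measuring, m, target) and not ticks(run, m, slave):
                        return False
        return True

    prefix = [{'m': (True, 0.0), 't': (False, 0.0), 's': (False, 0.0)},
              {'m': (False, 1.0), 't': (False, 1.0), 's': (True, 0.0)}]
    assert time_delayed_ok(prefix, 'm', 1.0, 't', 's')
›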

section ‹Denotational interpretation for TESL formulae›

text‹
  To satisfy a formula, a run has to satisfy the conjunction of its atomic 
  formulae. Therefore, the interpretation of a formula is the intersection
  of the interpretations of its components.
›
fun TESL_interpretation :: (::linordered_field) TESL_formula   run set›
  (⟦⟦ _ ⟧⟧TESL)
where
  ⟦⟦ [] ⟧⟧TESL = {_. True}
| ⟦⟦ φ # Φ ⟧⟧TESL =  φ TESL  ⟦⟦ Φ ⟧⟧TESL

lemma TESL_interpretation_homo:
   φ TESL  ⟦⟦ Φ ⟧⟧TESL = ⟦⟦ φ # Φ ⟧⟧TESL
by simp

subsection ‹Image interpretation lemma›

theorem TESL_interpretation_image:
  ⟦⟦ Φ ⟧⟧TESL =  ((λφ.  φ TESL) ` set Φ)
by (induction Φ, simp+)

subsection ‹Expansion law›
text ‹Similar to the expansion laws of lattices.›

theorem TESL_interp_homo_append:
  ⟦⟦ Φ1 @ Φ2 ⟧⟧TESL = ⟦⟦ Φ1 ⟧⟧TESL  ⟦⟦ Φ2 ⟧⟧TESL
by (induction Φ1, simp, auto)

section ‹Equational laws for the denotation of TESL formulae›

lemma TESL_interp_assoc:
  ⟦⟦ (Φ1 @ Φ2) @ Φ3 ⟧⟧TESL = ⟦⟦ Φ1 @ (Φ2 @ Φ3) ⟧⟧TESL
by auto

lemma TESL_interp_commute:
  shows ⟦⟦ Φ1 @ Φ2 ⟧⟧TESL = ⟦⟦ Φ2 @ Φ1 ⟧⟧TESL
by (simp add: TESL_interp_homo_append inf_sup_aci(1))

lemma TESL_interp_left_commute:
  ⟦⟦ Φ1 @ (Φ2 @ Φ3) ⟧⟧TESL = ⟦⟦ Φ2 @ (Φ1 @ Φ3) ⟧⟧TESL
unfolding TESL_interp_homo_append by auto

lemma TESL_interp_idem:
  ⟦⟦ Φ @ Φ ⟧⟧TESL = ⟦⟦ Φ ⟧⟧TESL
using TESL_interp_homo_append by auto

lemma TESL_interp_left_idem:
  ⟦⟦ Φ1 @ (Φ1 @ Φ2) ⟧⟧TESL = ⟦⟦ Φ1 @ Φ2 ⟧⟧TESL
using TESL_interp_homo_append by auto

lemma TESL_interp_right_idem:
  ⟦⟦ (Φ1 @ Φ2) @ Φ2 ⟧⟧TESL = ⟦⟦ Φ1 @ Φ2 ⟧⟧TESL
unfolding TESL_interp_homo_append by auto

lemmas TESL_interp_aci = TESL_interp_commute
                         TESL_interp_assoc
                         TESL_interp_left_commute
                         TESL_interp_left_idem

text ‹The empty formula is the identity element.›
lemma TESL_interp_neutral1:
  ⟦⟦ [] @ Φ ⟧⟧TESL = ⟦⟦ Φ ⟧⟧TESL
by simp

lemma TESL_interp_neutral2:
  ⟦⟦ Φ @ [] ⟧⟧TESL = ⟦⟦ Φ ⟧⟧TESL
by simp

section ‹Decreasing interpretation of TESL formulae›
text‹
  Adding constraints to a TESL formula reduces the number of satisfying runs.
›
lemma TESL_sem_decreases_head:
  ⟦⟦ Φ ⟧⟧TESL  ⟦⟦ φ # Φ ⟧⟧TESL
by simp

lemma TESL_sem_decreases_tail:
  ⟦⟦ Φ ⟧⟧TESL  ⟦⟦ Φ @ [φ] ⟧⟧TESL
by (simp add: TESL_interp_homo_append)

text ‹
  Repeating a formula in a specification does not change the specification.
›
lemma TESL_interp_formula_stuttering:
  assumes φ  set Φ
    shows ⟦⟦ φ # Φ ⟧⟧TESL = ⟦⟦ Φ ⟧⟧TESL
proof -
  have φ # Φ = [φ] @ Φ by simp
  hence ⟦⟦ φ # Φ ⟧⟧TESL = ⟦⟦ [φ] ⟧⟧TESL  ⟦⟦ Φ ⟧⟧TESL
    using TESL_interp_homo_append by simp
  thus ?thesis using assms TESL_interpretation_image by fastforce
qed

text ‹
  Removing duplicate formulae in a specification does not change the specification.
›
lemma TESL_interp_remdups_absorb:
  ⟦⟦ Φ ⟧⟧TESL = ⟦⟦ remdups Φ ⟧⟧TESL
proof (induction Φ)
  case Cons
    thus ?case using TESL_interp_formula_stuttering by auto
qed simp

text ‹
  Specifications that contain the same formulae have the same semantics.
›
lemma TESL_interp_set_lifting:
  assumes ‹set Φ = set Φ'
    shows ⟦⟦ Φ ⟧⟧TESL = ⟦⟦ Φ' ⟧⟧TESL
proof -     
  have ‹set (remdups Φ) = set (remdups Φ')
    by (simp add: assms)
  moreover have fxpntΦ:  ((λφ.  φ TESL) ` set Φ) = ⟦⟦ Φ ⟧⟧TESL
    by (simp add: TESL_interpretation_image)
  moreover have fxpntΦ':  ((λφ.  φ TESL) ` set Φ') = ⟦⟦ Φ' ⟧⟧TESL
    by (simp add: TESL_interpretation_image)
  moreover have  ((λφ.  φ TESL) ` set Φ) =  ((λφ.  φ TESL) ` set Φ')
    by (simp add: assms)
  ultimately show ?thesis using TESL_interp_remdups_absorb by auto
qed

text ‹
  The semantics of specifications is contravariant with respect to their inclusion.
›
theorem TESL_interp_decreases_setinc:
  assumes ‹set Φ  set Φ'
    shows ⟦⟦ Φ ⟧⟧TESL  ⟦⟦ Φ' ⟧⟧TESL
proof -
  obtain Φr where decompose: ‹set (Φ @ Φr) = set Φ' using assms by auto
  hence ‹set (Φ @ Φr) = set Φ' using assms by blast
  moreover have (set Φ)  (set Φr) = set Φ'
    using assms decompose by auto
  moreover have ⟦⟦ Φ' ⟧⟧TESL = ⟦⟦ Φ @ Φr ⟧⟧TESL
    using TESL_interp_set_lifting decompose by blast
  moreover have ⟦⟦ Φ @ Φr ⟧⟧TESL = ⟦⟦ Φ ⟧⟧TESL  ⟦⟦ Φr ⟧⟧TESL
    by (simp add: TESL_interp_homo_append)
  moreover have ⟦⟦ Φ ⟧⟧TESL  ⟦⟦ Φ ⟧⟧TESL  ⟦⟦ Φr ⟧⟧TESL by simp
  ultimately show ?thesis by simp
qed

lemma TESL_interp_decreases_add_head:
  assumes ‹set Φ  set Φ'
    shows ⟦⟦ φ # Φ ⟧⟧TESL  ⟦⟦ φ # Φ' ⟧⟧TESL
using assms TESL_interp_decreases_setinc by auto

lemma TESL_interp_decreases_add_tail:
  assumes ‹set Φ  set Φ'
    shows ⟦⟦ Φ @ [φ] ⟧⟧TESL  ⟦⟦ Φ' @ [φ] ⟧⟧TESL
using TESL_interp_decreases_setinc[OF assms] 
  by (simp add: TESL_interpretation_image dual_order.trans)

lemma TESL_interp_absorb1:
  assumes ‹set Φ1  set Φ2
    shows ⟦⟦ Φ1 @ Φ2 ⟧⟧TESL = ⟦⟦ Φ2 ⟧⟧TESL
by (simp add: Int_absorb1 TESL_interp_decreases_setinc
                          TESL_interp_homo_append assms)

lemma TESL_interp_absorb2:
  assumes ‹set Φ2  set Φ1
    shows ⟦⟦ Φ1 @ Φ2 ⟧⟧TESL = ⟦⟦ Φ1 ⟧⟧TESL
using TESL_interp_absorb1 TESL_interp_commute assms by blast

section ‹Some special cases›

lemma NoSporadic_stable [simp]:
  ⟦⟦ Φ ⟧⟧TESL  ⟦⟦ NoSporadic Φ ⟧⟧TESL
proof -
  from filter_is_subset have ‹set (NoSporadic Φ)  set Φ .
  from TESL_interp_decreases_setinc[OF this] show ?thesis .
qed

lemma NoSporadic_idem [simp]:
  ⟦⟦ Φ ⟧⟧TESL  ⟦⟦ NoSporadic Φ ⟧⟧TESL = ⟦⟦ Φ ⟧⟧TESL
using NoSporadic_stable by blast

lemma NoSporadic_setinc:
  ‹set (NoSporadic Φ)  set Φ
by (rule filter_is_subset)

(*<*)
no_notation dummyT0    (t0)
no_notation dummyDeltaT (δt)
(*>*)

end

Theory SymbolicPrimitive

chapter ‹Symbolic Primitives for Building Runs›

theory SymbolicPrimitive
  imports Run

begin

text‹
  We define here the primitive constraints on runs, towards which we translate
  TESL specifications in the operational semantics.
  These constraints refer to a specific symbolic run and can therefore access
  properties of the run at particular instants (for instance, the fact that a clock
  ticks at instant @{term n} of the run, or the time on a given clock at 
  that instant).

  In the previous chapters, we had no reference to particular instants of a run 
  because the TESL language should be invariant under stuttering in order to allow 
  the composition of specifications: adding an instant where no clock ticks to 
  a run that satisfies a formula should yield another run that satisfies the 
  same formula. However, when constructing runs that satisfy a formula, we
  need to be able to refer to the time or hamlet of a clock at a given instant.
›

text‹
  Counter expressions are used to get the number of ticks of a clock up to 
  (strictly or not) a given instant index.
›
datatype cnt_expr =
  TickCountLess ‹clock› ‹instant_index› (#<)
| TickCountLeq ‹clock› ‹instant_index›  (#)

subsection‹ Symbolic Primitives for Runs ›

text‹
  Tag values are used to refer to the time on a clock at a given instant index.
›
datatype tag_val =
  TSchematic ‹clock * instant_index› (τvar)

datatype  constr =
― ‹@{term c  n @ τ} constrains clock @{term c} to have time @{term τ}
    at instant @{term n} of the run.›
  Timestamp     ‹clock›   ‹instant_index›  tag_const›         (‹_  _ @ _›)
― ‹@{term m @ n  δt  s} constrains clock @{term s} to tick at the
    first instant at which the time on @{term m} has increased by @{term δt}
    from the value it had at instant @{term n} of the run.›
| TimeDelay     ‹clock›   ‹instant_index›  tag_const› ‹clock› (‹_ @ _  _  _›)
― ‹@{term c  n} constrains clock @{term c} to tick
    at instant @{term n} of the run.›
| Ticks         ‹clock›   ‹instant_index›                        (‹_  _›)
― ‹@{term c ¬⇑ n} constrains clock @{term c} not to tick
    at instant @{term n} of the run.›
| NotTicks      ‹clock›   ‹instant_index›                        (‹_ ¬⇑ _›)
― ‹@{term c ¬⇑ < n} constrains clock @{term c} not to tick
    before instant @{term n} of the run.›
| NotTicksUntil ‹clock›   ‹instant_index›                        (‹_ ¬⇑ < _›)
― ‹@{term c ¬⇑  n} constrains clock @{term c} not to tick
    at and after instant @{term n} of the run.›
| NotTicksFrom  ‹clock›   ‹instant_index›                        (‹_ ¬⇑  _›)
― ‹@{term τ1, τ2  R} constrains tag variables @{term τ1} and  @{term τ2} 
    to be in relation @{term R}.›
| TagArith      ‹tag_val› ‹tag_val› ( tag_const ×  tag_const)  bool› (_, _  _›)
― ‹@{term k1, k2  R} constrains counter expressions @{term k1} and  @{term k2} 
    to be in relation @{term R}.›
| TickCntArith  ‹cnt_expr› ‹cnt_expr› (nat × nat)  bool›      (_, _  _›)
― ‹@{term k1  k2} constrains counter expression @{term k1} to be less or equal 
    to counter expression @{term k2}.›
| TickCntLeq    ‹cnt_expr› ‹cnt_expr›                            (‹_  _›)

type_synonym  system =  constr list›


text ‹
  The abstract machine has configurations composed of:
  ▪ the past @{termΓ}, which captures choices that have already been made as a 
    list of symbolic primitive constraints on the run;
  ▪ the current index @{term n}, which is the index of the present instant;
  ▪ the present @{termΨ}, which captures the formulae that must be satisfied
    in the current instant;
  ▪ the future @{termΦ}, which captures the constraints on the future of the run.
›
type_synonym  config =
                 system * instant_index *  TESL_formula *  TESL_formula›


section ‹Semantics of Primitive Constraints ›

text‹
  The semantics of the primitive constraints is defined in a way similar to
  the semantics of TESL formulae.
›
fun counter_expr_eval :: (::linordered_field) run  cnt_expr  nat›
  ( _  _ cntexpr)
where
   ρ  #< clk indx cntexpr = run_tick_count_strictly ρ clk indx
|  ρ  # clk indx cntexpr = run_tick_count ρ clk indx


fun symbolic_run_interpretation_primitive
  ::(::linordered_field) constr   run set› ( _ prim)
where
   K  n  prim     = {ρ. hamlet ((Rep_run ρ) n K) }
|  K @ n0  δt  K' prim =
                  {ρ. n  n0. first_time ρ K n (time ((Rep_run ρ) n0 K) + δt)
                                hamlet ((Rep_run ρ) n K')}
|  K ¬⇑ n prim     = {ρ. ¬hamlet ((Rep_run ρ) n K) }
|  K ¬⇑ < n prim   = {ρ. i < n. ¬ hamlet ((Rep_run ρ) i K)}
|  K ¬⇑  n prim   = {ρ. i  n. ¬ hamlet ((Rep_run ρ) i K) }
|  K  n @ τ prim = {ρ. time ((Rep_run ρ) n K) = τ }
|  τvar(K1, n1), τvar(K2, n2)  R prim =
    { ρ. R (time ((Rep_run ρ) n1 K1), time ((Rep_run ρ) n2 K2)) }
|  e1, e2  R prim = { ρ. R ( ρ  e1 cntexpr,  ρ  e2 cntexpr) }
|  cnt_e1  cnt_e2 prim = { ρ.  ρ  cnt_e1 cntexpr   ρ  cnt_e2 cntexpr }
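
text‹
  As a small informal illustration (ours), a few of these primitives can be evaluated on a
  finite run prefix; a context (a list of primitives) is satisfied when all of its elements
  are, which mirrors the intersection of their interpretations below. The tagged-tuple
  encoding is hypothetical.

    # Illustrative sketch: evaluating some symbolic primitives on a finite run prefix.
    def ticks(run, n, c):   return run[n][c][0]
    def time_of(run, n, c): return run[n][c][1]

    def holds(run, constr):
        kind, *args = constr
        if kind == 'ticks':            # clock c ticks at instant n
            c, n = args; return ticks(run, n, c)
        if kind == 'not_ticks':        # clock c does not tick at instant n
            c, n = args; return not ticks(run, n, c)
        if kind == 'timestamp':        # the time on clock c at instant n is tau
            c, n, tau = args; return time_of(run, n, c) == tau
        if kind == 'not_ticks_from':   # clock c never ticks at or after instant n
            c, n = args; return all(not ticks(run, i, c) for i in range(n, len(run)))
        raise ValueError(kind)

    def context_holds(run, ctx):
        return all(holds(run, g) for g in ctx)

    prefix = [{'a': (True, 0.0)}, {'a': (False, 1.0)}]
    assert context_holds(prefix, [('ticks', 'a', 0),
                                  ('timestamp', 'a', 1, 1.0),
                                  ('not_ticks_from', 'a', 1)])
›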

text‹
  The composition of primitive constraints is their conjunction, and we get the
  set of satisfying runs by intersection.
›
fun symbolic_run_interpretation
  ::(::linordered_field) constr list  (::linordered_field) run set›
  (⟦⟦ _ ⟧⟧prim)
where
  ⟦⟦ [] ⟧⟧prim = {ρ. True }
| ⟦⟦ γ # Γ ⟧⟧prim =  γ prim  ⟦⟦ Γ ⟧⟧prim

lemma symbolic_run_interp_cons_morph:
   γ prim  ⟦⟦ Γ ⟧⟧prim = ⟦⟦ γ # Γ ⟧⟧prim
by auto

definition consistent_context :: (::linordered_field) constr list  bool›
where 
  consistent_context Γ  ( ⟦⟦ Γ ⟧⟧prim  {})

subsection ‹Defining a method for witness construction›

text‹
  In order to build a run, we can start from an initial run in which no clock 
  ticks and the time is always 0 on any clock.
›
abbreviation initial_run :: (::linordered_field) run› (ρ) where
  ρ  Abs_run ((λ_ _. (False, τcst 0)) ::nat  clock  (bool ×  tag_const))

text‹
  To help prevent time from flowing backwards, setting the time on a clock at a given 
  instant also sets it for all future instants.
›
fun time_update
  :: ‹nat  clock  (::linordered_field) tag_const  (nat   instant)
       (nat   instant)
where
  time_update n K τ ρ = (λn' K'. if K = K'  n  n'
                                  then (hamlet (ρ n K), τ)
                                  else ρ n' K')


section ‹Rules and properties of consistency›

lemma context_consistency_preservationI:
  ‹consistent_context ((γ::(::linordered_field) constr)#Γ)  consistent_context Γ
unfolding consistent_context_def by auto

― ‹This is very restrictive›
inductive context_independency
  ::(::linordered_field) constr   constr list  bool› (‹_  _›)
where
  NotTicks_independency:
  (K  n)  set Γ  (K ¬⇑ n)  Γ
| Ticks_independency:
  (K ¬⇑ n)  set Γ  (K  n)  Γ
| Timestamp_independency:
  (τ'. τ' = τ  (K  n @ τ)  set Γ)  (K  n @ τ)  Γ


section‹Major Theorems›
subsection ‹Interpretation of a context›

text‹
  The interpretation of a context is the intersection of the interpretations 
  of its components.
›
theorem symrun_interp_fixpoint:
   ((λγ.  γ prim) ` set Γ) = ⟦⟦ Γ ⟧⟧prim
by (induction Γ, simp+)

subsection ‹Expansion law›
text ‹Similar to the expansion laws of lattices›

theorem symrun_interp_expansion:
  ⟦⟦ Γ1 @ Γ2 ⟧⟧prim = ⟦⟦ Γ1 ⟧⟧prim  ⟦⟦ Γ2 ⟧⟧prim
by (induction Γ1, simp, auto)

section ‹Equations for the interpretation of symbolic primitives› 
subsection ‹General laws›

lemma symrun_interp_assoc:
  ⟦⟦ (Γ1 @ Γ2) @ Γ3 ⟧⟧prim = ⟦⟦ Γ1 @ (Γ2 @ Γ3) ⟧⟧prim
by auto

lemma symrun_interp_commute:
  ⟦⟦ Γ1 @ Γ2 ⟧⟧prim = ⟦⟦ Γ2 @ Γ1 ⟧⟧prim
by (simp add: symrun_interp_expansion inf_sup_aci(1))

lemma symrun_interp_left_commute:
  ⟦⟦ Γ1 @ (Γ2 @ Γ3) ⟧⟧prim = ⟦⟦ Γ2 @ (Γ1 @ Γ3) ⟧⟧prim
unfolding symrun_interp_expansion by auto

lemma symrun_interp_idem:
  ⟦⟦ Γ @ Γ ⟧⟧prim = ⟦⟦ Γ ⟧⟧prim
using symrun_interp_expansion by auto

lemma symrun_interp_left_idem:
  ⟦⟦ Γ1 @ (Γ1 @ Γ2) ⟧⟧prim = ⟦⟦ Γ1 @ Γ2 ⟧⟧prim
using symrun_interp_expansion by auto

lemma symrun_interp_right_idem:
  ⟦⟦ (Γ1 @ Γ2) @ Γ2 ⟧⟧prim = ⟦⟦ Γ1 @ Γ2 ⟧⟧prim
unfolding symrun_interp_expansion by auto

lemmas symrun_interp_aci =  symrun_interp_commute
                            symrun_interp_assoc
                            symrun_interp_left_commute
                            symrun_interp_left_idem

― ‹Identity element›
lemma symrun_interp_neutral1:
  ⟦⟦ [] @ Γ ⟧⟧prim = ⟦⟦ Γ ⟧⟧prim
by simp

lemma symrun_interp_neutral2:
  ⟦⟦ Γ @ [] ⟧⟧prim = ⟦⟦ Γ ⟧⟧prim
by simp

subsection ‹Decreasing interpretation of symbolic primitives›

text‹
  Adding constraints to a context reduces the number of satisfying runs.
›
lemma TESL_sem_decreases_head:
  ⟦⟦ Γ ⟧⟧prim  ⟦⟦ γ # Γ ⟧⟧prim
by simp

lemma TESL_sem_decreases_tail:
  ⟦⟦ Γ ⟧⟧prim  ⟦⟦ Γ @ [γ] ⟧⟧prim
by (simp add: symrun_interp_expansion)

text‹
  Adding a constraint that is already in the context does not change the
  interpretation of the context.
›
lemma symrun_interp_formula_stuttering:
  assumes γ  set Γ
    shows ⟦⟦ γ # Γ ⟧⟧prim = ⟦⟦ Γ ⟧⟧prim
proof -
  have γ # Γ = [γ] @ Γ by simp
  hence ⟦⟦ γ # Γ ⟧⟧prim = ⟦⟦ [γ] ⟧⟧prim  ⟦⟦ Γ ⟧⟧prim
    using symrun_interp_expansion by simp
  thus ?thesis using assms symrun_interp_fixpoint by fastforce
qed


text‹
  Removing duplicate constraints from a context does not change the
  interpretation of the context.
›
lemma symrun_interp_remdups_absorb:
  ⟦⟦ Γ ⟧⟧prim = ⟦⟦ remdups Γ ⟧⟧prim
proof (induction Γ)
  case Cons
    thus ?case using symrun_interp_formula_stuttering by auto
qed simp

text‹
  Two contexts that contain the same constraints have the same interpretation: 
  the order of the constraints in the context does not matter.
›
lemma symrun_interp_set_lifting:
  assumes ‹set Γ = set Γ'
    shows ⟦⟦ Γ ⟧⟧prim = ⟦⟦ Γ' ⟧⟧prim
proof -     
  have ‹set (remdups Γ) = set (remdups Γ')
    by (simp add: assms)
  moreover have fxpntΓ:  ((λγ.  γ prim) ` set Γ) = ⟦⟦ Γ ⟧⟧prim
    by (simp add: symrun_interp_fixpoint)
  moreover have fxpntΓ':  ((λγ.  γ prim) ` set Γ') = ⟦⟦ Γ' ⟧⟧prim
    by (simp add: symrun_interp_fixpoint)
  moreover have  ((λγ.  γ prim) ` set Γ) =  ((λγ.  γ prim) ` set Γ')
    by (simp add: assms)
  ultimately show ?thesis using symrun_interp_remdups_absorb by auto
qed

text‹
  The interpretation of contexts is contravariant with regard to set inclusion.
›
theorem symrun_interp_decreases_setinc:
  assumes ‹set Γ  set Γ'
    shows ⟦⟦ Γ ⟧⟧prim  ⟦⟦ Γ' ⟧⟧prim
proof -
  obtain Γr where decompose: ‹set (Γ @ Γr) = set Γ' using assms by auto
  hence ‹set (Γ @ Γr) = set Γ' using assms by blast
  moreover have (set Γ)  (set Γr) = set Γ' using assms decompose by auto
  moreover have ⟦⟦ Γ' ⟧⟧prim = ⟦⟦ Γ @ Γr ⟧⟧prim
    using symrun_interp_set_lifting decompose by blast
  moreover have ⟦⟦ Γ @ Γr ⟧⟧prim = ⟦⟦ Γ ⟧⟧prim  ⟦⟦ Γr ⟧⟧prim
    by (simp add: symrun_interp_expansion)
  moreover have ⟦⟦ Γ ⟧⟧prim  ⟦⟦ Γ ⟧⟧prim  ⟦⟦ Γr ⟧⟧prim by simp
  ultimately show ?thesis by simp
qed

lemma symrun_interp_decreases_add_head:
  assumes ‹set Γ  set Γ'
    shows ⟦⟦ γ # Γ ⟧⟧prim  ⟦⟦ γ # Γ' ⟧⟧prim
using symrun_interp_decreases_setinc assms by auto

lemma symrun_interp_decreases_add_tail:
  assumes ‹set Γ  set Γ'
    shows ⟦⟦ Γ @ [γ] ⟧⟧prim  ⟦⟦ Γ' @ [γ] ⟧⟧prim
proof -
  from symrun_interp_decreases_setinc[OF assms] have ⟦⟦ Γ' ⟧⟧prim  ⟦⟦ Γ ⟧⟧prim .
  thus ?thesis by (simp add: symrun_interp_expansion dual_order.trans)
qed

lemma symrun_interp_absorb1:
  assumes ‹set Γ1  set Γ2
    shows ⟦⟦ Γ1 @ Γ2 ⟧⟧prim = ⟦⟦ Γ2 ⟧⟧prim
by (simp add: Int_absorb1 symrun_interp_decreases_setinc
                          symrun_interp_expansion assms)

lemma symrun_interp_absorb2:
  assumes ‹set Γ2  set Γ1
    shows ⟦⟦ Γ1 @ Γ2 ⟧⟧prim = ⟦⟦ Γ1 ⟧⟧prim
using symrun_interp_absorb1 symrun_interp_commute assms by blast

end

Theory Operational

chapter ‹Operational Semantics›
text‹\label{chap:operational_semantics}›

theory Operational
imports
  SymbolicPrimitive

begin

text‹
  The operational semantics defines rules to build symbolic runs from a TESL 
  specification (a set of TESL formulae).
  Symbolic runs are described using the symbolic primitives presented in the 
  previous chapter.
  Therefore, the operational semantics compiles a set of constraints on runs, 
  as defined by the denotational semantics, into a set of symbolic constraints 
  on the instants of the runs. Concrete runs can then be obtained by solving the 
  constraints at each instant.
›

section‹Operational steps›

text ‹
  We introduce a notation to describe configurations:
  ▪ @{term Γ} is the context, the set of symbolic constraints on past instants of the run;
  ▪ @{term n} is the index of the current instant, the present;
  ▪ @{term Ψ} is the TESL formula that must be satisfied at the current instant (present);
  ▪ @{term Φ} is the TESL formula that must be satisfied for the following instants (the future).
›
abbreviation uncurry_conf
  ::(::linordered_field) system  instant_index   TESL_formula   TESL_formula
        config›                                                  (‹_, _  _  _› 80)
where
  Γ, n  Ψ  Φ  (Γ, n, Ψ, Φ)

text ‹
  The only introduction rule allows us to progress to the next instant 
  when there are no more constraints to satisfy for the present instant.
›
inductive operational_semantics_intro
  ::(::linordered_field) config   config  bool›              (‹_ i _› 70)
where
  instant_i:
  (Γ, n  []  Φ) i  (Γ, Suc n  Φ  [])

text ‹
  The elimination rules describe how TESL formulae for the present are transformed 
  into constraints on the past and on the future.
›
inductive operational_semantics_elim
  ::(::linordered_field) config   config  bool›              (‹_ e _› 70)
where
  sporadic_on_e1:
― ‹A sporadic constraint can be left unsatisfied in the present and postponed to the future.›
  (Γ, n  ((K1 sporadic τ on K2) # Ψ)  Φ)
     e  (Γ, n  Ψ  ((K1 sporadic τ on K2) # Φ))
| sporadic_on_e2:
― ‹It can also be handled in the present by making the first clock tick while the 
  second clock displays the expected time. Once it has been handled, it is no longer 
  a constraint to satisfy, so it disappears from the future.›
  (Γ, n  ((K1 sporadic τ on K2) # Ψ)  Φ)
     e  (((K1  n) # (K2  n @ τ) # Γ), n  Ψ  Φ)
| tagrel_e:
― ‹A relation between time scales has to be obeyed at every instant.›
  (Γ, n  ((time-relation K1, K2  R) # Ψ)  Φ)
     e  (((τvar(K1, n), τvar(K2, n)  R) # Γ), n
               Ψ  ((time-relation K1, K2  R) # Φ))
| implies_e1:
― ‹An implication can be handled in the present by forbidding a tick of the master
  clock. The implication is copied back into the future because it holds for 
  the whole run.›
  (Γ, n  ((K1 implies K2) # Ψ)  Φ)
     e  (((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 implies K2) # Φ))
| implies_e2:
― ‹It can also be handled in the present by making both the master and the slave
    clocks tick.›
  (Γ, n  ((K1 implies K2) # Ψ)  Φ)
     e  (((K1  n) # (K2  n) # Γ), n  Ψ  ((K1 implies K2) # Φ))
| implies_not_e1:
― ‹A negative implication can be handled in the present by forbidding a tick of 
  the master clock. The implication is copied back into the future because 
  it holds for the whole run.›
  (Γ, n  ((K1 implies not K2) # Ψ)  Φ)
     e  (((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 implies not K2) # Φ))
| implies_not_e2:
― ‹It can also be handled in the present by making the master clock tick and 
    forbidding a tick on the slave clock.›
  (Γ, n  ((K1 implies not K2) # Ψ)  Φ)
     e  (((K1  n) # (K2 ¬⇑ n) # Γ), n  Ψ  ((K1 implies not K2) # Φ))
| timedelayed_e1:
― ‹A timed delayed implication can be handled by forbidding a tick on 
    the master clock.›
  (Γ, n  ((K1 time-delayed by δτ on K2 implies K3) # Ψ)  Φ)
     e  (((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 time-delayed by δτ on K2 implies K3) # Φ))
| timedelayed_e2:
― ‹It can also be handled by making the master clock tick and adding a constraint 
    that makes the slave clock tick when the delay has elapsed on the measuring clock.›
  (Γ, n  ((K1 time-delayed by δτ on K2 implies K3) # Ψ)  Φ)
     e  (((K1  n) # (K2 @ n  δτ  K3) # Γ), n
             Ψ  ((K1 time-delayed by δτ on K2 implies K3) # Φ))
| weakly_precedes_e:
― ‹A weak precedence relation has to hold at every instant.›
  (Γ, n  ((K1 weakly precedes K2) # Ψ)  Φ)
     e  (((# K2 n, # K1 n  (λ(x,y). xy)) # Γ), n
             Ψ  ((K1 weakly precedes K2) # Φ))
| strictly_precedes_e:
― ‹A strict precedence relation has to hold at every instant.›
  (Γ, n  ((K1 strictly precedes K2) # Ψ)  Φ)
     e  (((# K2 n, #< K1 n  (λ(x,y). xy)) # Γ), n
             Ψ  ((K1 strictly precedes K2) # Φ))
| kills_e1:
― ‹A kill can be handled by forbidding a tick of the triggering clock.›
  (Γ, n  ((K1 kills K2) # Ψ)  Φ)
     e  (((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 kills K2) # Φ))
| kills_e2:
― ‹It can also be handled by making the triggering clock tick and by forbidding 
    any further tick of the killed clock.›
  (Γ, n  ((K1 kills K2) # Ψ)  Φ)
     e  (((K1  n) # (K2 ¬⇑  n) # Γ), n  Ψ  ((K1 kills K2) # Φ))
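
text‹
  To illustrate how these rules rewrite configurations, here is an informal Python sketch
  (ours, not part of the formal development) of the two elimination rules for ‹implies›
  and of the introduction rule, acting on a configuration encoded as a tuple
  (Γ, n, Ψ, Φ) of Python lists; the tagged-tuple encoding is hypothetical.

    # Illustrative sketch: elimination rules for 'c1 implies c2' and the introduction rule.
    def implies_e1(config):
        gamma, n, psi, phi = config
        (_, c1, c2), rest = psi[0], psi[1:]
        # forbid a tick of the master clock now; keep the formula for the future
        return ([('not_ticks', c1, n)] + gamma, n, rest, [('implies', c1, c2)] + phi)

    def implies_e2(config):
        gamma, n, psi, phi = config
        (_, c1, c2), rest = psi[0], psi[1:]
        # make both clocks tick now; keep the formula for the future
        return ([('ticks', c1, n), ('ticks', c2, n)] + gamma, n, rest,
                [('implies', c1, c2)] + phi)

    def instant_i(config):
        gamma, n, psi, phi = config
        assert psi == []           # nothing left to handle in the present
        return (gamma, n + 1, phi, [])

    c0 = ([], 0, [('implies', 'a', 'b')], [])
    c1 = implies_e2(c0)            # both clocks tick at instant 0
    c2 = instant_i(c1)             # move to instant 1; the formula becomes the new present
    assert c2 == ([('ticks', 'a', 0), ('ticks', 'b', 0)], 1, [('implies', 'a', 'b')], [])
›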

text ‹
  A step of the operational semantics is either the application of the introduction 
  rule or the application of an elimination rule.
›
inductive operational_semantics_step
  ::(::linordered_field) config   config  bool›              (‹_  _› 70)
where
  intro_part:
  (Γ1, n1  Ψ1  Φ1)  i  (Γ2, n2  Ψ2  Φ2)
     (Γ1, n1  Ψ1  Φ1)    (Γ2, n2  Ψ2  Φ2)
| elims_part:
  (Γ1, n1  Ψ1  Φ1)  e  (Γ2, n2  Ψ2  Φ2)
     (Γ1, n1  Ψ1  Φ1)    (Γ2, n2  Ψ2  Φ2)

text ‹
  We introduce notations for the reflexive transitive closure of the operational 
  semantic step, its transitive closure and its reflexive closure.
›
abbreviation operational_semantics_step_rtranclp
  ::(::linordered_field) config   config  bool›              (‹_ ** _› 70)
where
  𝒞1 ** 𝒞2  operational_semantics_step** 𝒞1 𝒞2

abbreviation operational_semantics_step_tranclp
  ::(::linordered_field) config   config  bool›              (‹_ ++ _› 70)
where
  𝒞1 ++ 𝒞2  operational_semantics_step++ 𝒞1 𝒞2

abbreviation operational_semantics_step_reflclp
  ::(::linordered_field) config   config  bool›              (‹_ == _› 70)
where
  𝒞1 == 𝒞2  operational_semantics_step== 𝒞1 𝒞2

abbreviation operational_semantics_step_relpowp
  ::(::linordered_field) config  nat   config  bool›       (‹_ ↪⇗_ _› 70)
where
  𝒞1n 𝒞2  (operational_semantics_step ^^ n) 𝒞1 𝒞2

definition operational_semantics_elim_inv
  ::(::linordered_field) config   config  bool›              (‹_ e _› 70)
where
  𝒞1 e 𝒞2  𝒞2 e 𝒞1


section‹Basic Lemmas›

text ‹
  If a configuration can be reached in @{term m} steps from a configuration that 
  can be reached in @{term n} steps from an original configuration, then it can be 
  reached in @{term n+m} steps from the original configuration.
›
lemma operational_semantics_trans_generalized:
  assumes 𝒞1n 𝒞2
  assumes 𝒞2m 𝒞3
    shows 𝒞1n + m 𝒞3
using relcompp.relcompI[of  ‹operational_semantics_step ^^ n _ _ 
                            ‹operational_semantics_step ^^ m, OF assms]
by (simp add: relpowp_add) 

text ‹
  We consider the set of configurations that can be reached in one operational
  step from a given configuration.
›
abbreviation Cnext_solve
  ::(::linordered_field) config   config set› (𝒞next _›)
where
  𝒞next 𝒮  { 𝒮'. 𝒮  𝒮' }

text ‹
  Advancing to the next instant is possible when there are no more constraints 
  on the current instant.
›
lemma Cnext_solve_instant:
  (𝒞next (Γ, n  []  Φ))  { Γ, Suc n  Φ  [] }
by (simp add: operational_semantics_step.simps operational_semantics_intro.instant_i)

text ‹
  The following lemmas state that the configurations produced by the elimination 
  rules of the operational semantics belong to the configurations that can be 
  reached in one step.
›
lemma Cnext_solve_sporadicon:
  (𝒞next (Γ, n  ((K1 sporadic τ on K2) # Ψ)  Φ))
     { Γ, n  Ψ  ((K1 sporadic τ on K2) # Φ),
        ((K1  n) # (K2  n @ τ) # Γ), n  Ψ  Φ }
by (simp add: operational_semantics_step.simps
              operational_semantics_elim.sporadic_on_e1
              operational_semantics_elim.sporadic_on_e2)

lemma Cnext_solve_tagrel:
  (𝒞next (Γ, n  ((time-relation K1, K2  R) # Ψ)  Φ))
     { ((τvar(K1, n), τvar(K2, n)  R) # Γ),n
           Ψ  ((time-relation K1, K2  R) # Φ) }
by (simp add: operational_semantics_step.simps operational_semantics_elim.tagrel_e)

lemma Cnext_solve_implies:
  (𝒞next (Γ, n  ((K1 implies K2) # Ψ)  Φ))
     { ((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 implies K2) # Φ),
         ((K1  n) # (K2  n) # Γ), n  Ψ  ((K1 implies K2) # Φ) }
by (simp add: operational_semantics_step.simps operational_semantics_elim.implies_e1
              operational_semantics_elim.implies_e2)

lemma Cnext_solve_implies_not:
  (𝒞next (Γ, n  ((K1 implies not K2) # Ψ)  Φ))
     { ((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 implies not K2) # Φ),
        ((K1  n) # (K2 ¬⇑ n) # Γ), n  Ψ  ((K1 implies not K2) # Φ) }
by (simp add: operational_semantics_step.simps
              operational_semantics_elim.implies_not_e1
              operational_semantics_elim.implies_not_e2)

lemma Cnext_solve_timedelayed:
  (𝒞next (Γ, n  ((K1 time-delayed by δτ on K2 implies K3) # Ψ)  Φ))
     { ((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 time-delayed by δτ on K2 implies K3) # Φ),
        ((K1  n) # (K2 @ n  δτ  K3) # Γ), n
           Ψ  ((K1 time-delayed by δτ on K2 implies K3) # Φ) }
by (simp add: operational_semantics_step.simps
              operational_semantics_elim.timedelayed_e1
              operational_semantics_elim.timedelayed_e2)

lemma Cnext_solve_weakly_precedes:
  (𝒞next (Γ, n  ((K1 weakly precedes K2) # Ψ)  Φ))
     { ((# K2 n, # K1 n  (λ(x,y). xy)) # Γ), n
           Ψ  ((K1 weakly precedes K2) # Φ) }
by (simp add: operational_semantics_step.simps
              operational_semantics_elim.weakly_precedes_e)

lemma Cnext_solve_strictly_precedes:
  (𝒞next (Γ, n  ((K1 strictly precedes K2) # Ψ)  Φ))
     { ((# K2 n, #< K1 n  (λ(x,y). xy)) # Γ), n
           Ψ  ((K1 strictly precedes K2) # Φ) }
by (simp add: operational_semantics_step.simps
              operational_semantics_elim.strictly_precedes_e)

lemma Cnext_solve_kills:
  (𝒞next (Γ, n  ((K1 kills K2) # Ψ)  Φ))
     { ((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 kills K2) # Φ),
        ((K1  n) # (K2 ¬⇑  n) # Γ), n  Ψ  ((K1 kills K2) # Φ) }
by (simp add: operational_semantics_step.simps operational_semantics_elim.kills_e1
              operational_semantics_elim.kills_e2)

text ‹
  An empty specification can be reduced to an empty specification for 
  an arbitrary number of steps.
›
lemma empty_spec_reductions:
  ([], 0  []  [])k ([], k  []  [])
proof (induct k)
  case 0 thus ?case by simp
next
  case Suc thus ?case
    using instant_i operational_semantics_step.simps by fastforce 
qed

end

Theory Corecursive_Prop

(*chapter‹Equivalence of Operational and Denotational Semantics›*)
text‹\chapter[Semantics Equivalence]{Equivalence of the Operational and Denotational Semantics}›

theory Corecursive_Prop
  imports
    SymbolicPrimitive
    Operational
    Denotational

begin

section ‹Stepwise denotational interpretation of TESL atoms›

text ‹
  In order to prove the equivalence of the denotational and operational semantics, 
  we need to be able to ignore the past (for which the constraints are encoded 
  in the context) and consider only the satisfaction of the constraints from
  a given instant index.
  For this purpose, we define an interpretation of TESL formulae for a suffix of a run.
  That interpretation is closely related to the denotational semantics as
  defined in the preceding chapters.
›
fun TESL_interpretation_atomic_stepwise
    :: (::linordered_field) TESL_atomic  nat   run set› ( _ TESL⇗≥ _)
where
   K1 sporadic τ on K2TESLi =
      {ρ. ni. hamlet ((Rep_run ρ) n K1)  time ((Rep_run ρ) n K2) = τ}
|  time-relation K1, K2  RTESLi =
      {ρ. ni. R (time ((Rep_run ρ) n K1), time ((Rep_run ρ) n K2))}
|  master implies slaveTESLi =
      {ρ. ni. hamlet ((Rep_run ρ) n master)  hamlet ((Rep_run ρ) n slave)}
|  master implies not slaveTESLi =
      {ρ. ni. hamlet ((Rep_run ρ) n master)  ¬ hamlet ((Rep_run ρ) n slave)}
|  master time-delayed by δτ on measuring implies slaveTESLi =
      {ρ. ni. hamlet ((Rep_run ρ) n master) 
               (let measured_time = time ((Rep_run ρ) n measuring) in
                m  n. first_time ρ measuring m (measured_time + δτ)
                          hamlet ((Rep_run ρ) m slave)
               )
      }
|  K1 weakly precedes K2TESLi =
      {ρ. ni. (run_tick_count ρ K2 n)  (run_tick_count ρ K1 n)}
|  K1 strictly precedes K2TESLi =
      {ρ. ni. (run_tick_count ρ K2 n)  (run_tick_count_strictly ρ K1 n)}
|  K1 kills K2TESLi =
      {ρ. ni. hamlet ((Rep_run ρ) n K1)  (mn. ¬ hamlet ((Rep_run ρ) m K2))}

text ‹
  The denotational interpretation of TESL formulae can be unfolded into the 
  stepwise interpretation.
›
lemma TESL_interp_unfold_stepwise_sporadicon:
   K1 sporadic τ on K2 TESL =  {Y. n::nat. Y =  K1 sporadic τ on K2TESLn}
by auto

lemma TESL_interp_unfold_stepwise_tagrelgen:
   time-relation K1, K2  R TESL
    =  {Y. n::nat. Y =  time-relation K1, K2  RTESLn}
by auto

lemma TESL_interp_unfold_stepwise_implies:
   master implies slave TESL
    =  {Y. n::nat. Y =  master implies slaveTESLn}
by auto

lemma TESL_interp_unfold_stepwise_implies_not:
   master implies not slave TESL
    =  {Y. n::nat. Y =  master implies not slaveTESLn}
by auto

lemma TESL_interp_unfold_stepwise_timedelayed:
   master time-delayed by δτ on measuring implies slave TESL
    =  {Y. n::nat.
          Y =  master time-delayed by δτ on measuring implies slaveTESLn}
by auto

lemma TESL_interp_unfold_stepwise_weakly_precedes:
   K1 weakly precedes K2 TESL
    =  {Y. n::nat. Y =  K1 weakly precedes K2TESLn}
by auto

lemma TESL_interp_unfold_stepwise_strictly_precedes:
   K1 strictly precedes K2 TESL
    =  {Y. n::nat. Y =  K1 strictly precedes K2TESLn}
by auto

lemma TESL_interp_unfold_stepwise_kills:
   master kills slave TESL =  {Y. n::nat. Y =  master kills slaveTESLn}
by auto

text ‹
  Positive atomic formulae (the ones that create ticks from nothing) are unfolded
  as the union of the stepwise interpretations.
›
theorem TESL_interp_unfold_stepwise_positive_atoms:
  assumes ‹positive_atom φ
    shows  φ::::linordered_field TESL_atomic TESL
            =  {Y. n::nat. Y =  φTESLn}
proof -
  from positive_atom.elims(2)[OF assms]
    obtain u v w where φ = (u sporadic v on w) by blast
  with TESL_interp_unfold_stepwise_sporadicon show ?thesis by simp
qed

text ‹
  Negative atomic formulae are unfolded
  as the intersection of the stepwise interpretations.
›
theorem TESL_interp_unfold_stepwise_negative_atoms:
  assumes ¬ positive_atom φ
    shows  φ TESL =  {Y. n::nat. Y =  φTESLn}
proof (cases φ)
  case SporadicOn thus ?thesis using assms by simp
next
  case TagRelation
    thus ?thesis using TESL_interp_unfold_stepwise_tagrelgen by simp
next
  case Implies
    thus ?thesis using TESL_interp_unfold_stepwise_implies by simp
next
  case ImpliesNot
    thus ?thesis using TESL_interp_unfold_stepwise_implies_not by simp
next
  case TimeDelayedBy
    thus ?thesis using TESL_interp_unfold_stepwise_timedelayed by simp
next
  case WeaklyPrecedes
    thus ?thesis
      using TESL_interp_unfold_stepwise_weakly_precedes by simp
next
  case StrictlyPrecedes
    thus ?thesis
      using TESL_interp_unfold_stepwise_strictly_precedes by simp
next
  case Kills
    thus ?thesis
      using TESL_interp_unfold_stepwise_kills by simp
qed

text ‹
  Some useful lemmas for reasoning about properties of sequences.
›
lemma forall_nat_expansion:
  (n  (n0::nat). P n) = (P n0  (n  Suc n0. P n))
proof -
  have (n  (n0::nat). P n) = (n. (n = n0  n > n0)  P n)
    using le_less by blast
  also have ... = (P n0  (n > n0. P n)) by blast
  finally show ?thesis using Suc_le_eq by simp
qed

lemma exists_nat_expansion:
  (n  (n0::nat). P n) = (P n0  (n  Suc n0. P n))
proof -
  have (n  (n0::nat). P n) = (n. (n = n0  n > n0)  P n)
    using le_less by blast
  also have ... = (n. (P n0)  (n > n0  P n)) by blast
  finally show ?thesis using Suc_le_eq by simp
qed
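
text ‹
  As a small worked instance, assuming a hypothetical predicate ‹P› on natural numbers,
  the first lemma rewrites ‹∀ n ≥ 2. P n› into ‹P 2 ∧ (∀ n ≥ 3. P n)›, and the second
  rewrites ‹∃ n ≥ 2. P n› into ‹P 2 ∨ (∃ n ≥ 3. P n)›. The set-valued versions below
  perform the same unfolding on set comprehensions, splitting off the contribution of
  the first index as an intersection or a union.
›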

lemma forall_nat_set_suc:{x. m  n. P x m} = {x. P x n}  {x. m  Suc n. P x m}
proof
  { fix x assume h:x  {x. m  n. P x m}
    hence P x n by simp
    moreover from h have x  {x. m  Suc n. P x m} by simp
    ultimately have x  {x. P x n}  {x. m  Suc n. P x m} by simp
  } thus {x. m  n. P x m}  {x. P x n}  {x. m  Suc n. P x m} ..
next
  { fix x  assume h:x  {x. P x n}  {x. m  Suc n. P x m}
    hence P x n by simp
    moreover from h have m  Suc n. P x m by simp
    ultimately have m  n. P x m using forall_nat_expansion by blast
    hence x  {x. m  n. P x m} by simp
  } thus {x. P x n}  {x. m  Suc n. P x m}  {x. m  n. P x m} ..
qed

lemma exists_nat_set_suc:{x. m  n. P x m} = {x. P x n}  {x. m  Suc n. P x m}
proof
  { fix x assume h:x  {x. m  n. P x m}
    hence x  {x. m. (m = n  m  Suc n)  P x m}
      using Suc_le_eq antisym_conv2 by fastforce
    hence x  {x. P x n}  {x. m  Suc n. P x m} by blast
  } thus {x. m  n. P x m}  {x. P x n}  {x. m  Suc n. P x m} ..
next
  { fix x  assume h:x  {x. P x n}  {x. m  Suc n. P x m}
    hence x  {x. m  n. P x m} using Suc_leD by blast
  } thus {x. P x n}  {x. m  Suc n. P x m}  {x. m  n. P x m} ..
qed

section ‹Coinduction Unfolding Properties›

text ‹
  The following lemmas show how to shorten a suffix, i.e. how to unfold one instant
  in the construction of a run. They correspond to the rules of the operational
  semantics.
›
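
text ‹
  The common pattern, sketched informally, is that the interpretation of a constraint from
  instant ‹n› splits into the conditions it imposes at instant ‹n› itself, expressed with
  symbolic run primitives, combined with its interpretation from instant ‹Suc n›.
  A sporadic constraint, which only has to be satisfied eventually, yields a union: either
  the tick and the tag occur at instant ‹n›, or the obligation is postponed to ‹Suc n›.
  The other constraints, which must hold at every instant, yield an intersection: the case
  analysis at instant ‹n› is conjoined with the interpretation from ‹Suc n›.
›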
lemma TESL_interp_stepwise_sporadicon_coind_unfold:
   K1 sporadic τ on K2TESLn =
     K1  n prim   K2  n @ τ prim        ― ‹rule @{term sporadic_on_e2}
      K1 sporadic τ on K2TESL≥ Suc n   ― ‹rule @{term sporadic_on_e1}
unfolding TESL_interpretation_atomic_stepwise.simps(1)
          symbolic_run_interpretation_primitive.simps(1,6)
using exists_nat_set_suc[of n λρ n. hamlet (Rep_run ρ n K1)
                                      time (Rep_run ρ n K2) = τ]
by (simp add: Collect_conj_eq)


lemma TESL_interp_stepwise_tagrel_coind_unfold:
   time-relation K1, K2  RTESLn =        ― ‹rule @{term tagrel_e}
      τvar(K1, n), τvar(K2, n)  R prim
       time-relation K1, K2  RTESL≥ Suc n
proof -
  have {ρ. mn. R (time ((Rep_run ρ) m K1), time ((Rep_run ρ) m K2))}
       = {ρ. R (time ((Rep_run ρ) n K1), time ((Rep_run ρ) n K2))}
        {ρ. mSuc n. R (time ((Rep_run ρ) m K1), time ((Rep_run ρ) m K2))}
    using forall_nat_set_suc[of n λx y. R (time ((Rep_run x) y K1),
                                       time ((Rep_run x) y K2))] by simp
  thus ?thesis by auto
qed

lemma TESL_interp_stepwise_implies_coind_unfold:
   master implies slaveTESLn =
     (    master ¬⇑ n prim                     ― ‹rule @{term implies_e1}
         master  n prim   slave  n prim)  ― ‹rule @{term implies_e2}
       master implies slaveTESL≥ Suc n
proof -
  have {ρ. mn. hamlet ((Rep_run ρ) m master)  hamlet ((Rep_run ρ) m slave)}
        = {ρ. hamlet ((Rep_run ρ) n master)  hamlet ((Rep_run ρ) n slave)}
         {ρ. mSuc n. hamlet ((Rep_run ρ) m master)
                      hamlet ((Rep_run ρ) m slave)}
    using forall_nat_set_suc[of n λx y. hamlet ((Rep_run x) y master)
                                 hamlet ((Rep_run x) y slave)] by simp
  thus ?thesis by auto
qed

lemma TESL_interp_stepwise_implies_not_coind_unfold:
   master implies not slaveTESLn =
     (     master ¬⇑ n prim                       ― ‹rule @{term implies_not_e1}
          master  n prim   slave ¬⇑ n prim)  ― ‹rule @{term implies_not_e2}
       master implies not slaveTESL≥ Suc n
proof -
  have {ρ. mn. hamlet ((Rep_run ρ) m master)  ¬ hamlet ((Rep_run ρ) m slave)}
       = {ρ. hamlet ((Rep_run ρ) n master)  ¬ hamlet ((Rep_run ρ) n slave)}
           {ρ. mSuc n. hamlet ((Rep_run ρ) m master)
                      ¬ hamlet ((Rep_run ρ) m slave)}
    using forall_nat_set_suc[of n λx y. hamlet ((Rep_run x) y master)
                                ¬hamlet ((Rep_run x) y slave)] by simp
  thus ?thesis by auto
qed

lemma TESL_interp_stepwise_timedelayed_coind_unfold:
   master time-delayed by δτ on measuring implies slaveTESLn =
     (      master ¬⇑ n prim               ― ‹rule @{term timedelayed_e1}
         ( master  n prim   measuring @ n  δτ  slave prim))
                                             ― ‹rule @{term timedelayed_e2}
       master time-delayed by δτ on measuring implies slaveTESL≥ Suc n
proof -
  let ?prop = λρ m. hamlet ((Rep_run ρ) m master) 
                 (let measured_time = time ((Rep_run ρ) m measuring) in
                  p  m. first_time ρ measuring p (measured_time + δτ)
                            hamlet ((Rep_run ρ) p slave))
  have {ρ. m  n. ?prop ρ m} = {ρ. ?prop ρ n}  {ρ. m  Suc n. ?prop ρ m}
    using forall_nat_set_suc[of n ?prop] by blast
  also have ... = {ρ. ?prop ρ n}
                master time-delayed by δτ on measuring implies slaveTESL≥ Suc n
    by simp
  finally show ?thesis by auto
qed

lemma TESL_interp_stepwise_weakly_precedes_coind_unfold:
    K1 weakly precedes K2TESLn =                 ― ‹rule @{term weakly_precedes_e}
       (# K2 n, # K1 n  (λ(x,y). xy)) prim 
        K1 weakly precedes K2TESL≥ Suc n
proof -
  have {ρ. pn. (run_tick_count ρ K2 p)  (run_tick_count ρ K1 p)}
         = {ρ. (run_tick_count ρ K2 n)  (run_tick_count ρ K1 n)}
          {ρ. pSuc n. (run_tick_count ρ K2 p)  (run_tick_count ρ K1 p)}
    using forall_nat_set_suc[of n λρ n. (run_tick_count ρ K2 n)
                                   (run_tick_count ρ K1 n)]
    by simp
  thus ?thesis by auto
qed

lemma TESL_interp_stepwise_strictly_precedes_coind_unfold:
    K1 strictly precedes K2TESLn =               ― ‹rule @{term strictly_precedes_e}
       (# K2 n, #< K1 n  (λ(x,y). xy)) prim
        K1 strictly precedes K2TESL≥ Suc n
proof -
  have {ρ. pn. (run_tick_count ρ K2 p)  (run_tick_count_strictly ρ K1 p)}
         = {ρ. (run_tick_count ρ K2 n)  (run_tick_count_strictly ρ K1 n)}
          {ρ. pSuc n. (run_tick_count ρ K2 p)  (run_tick_count_strictly ρ K1 p)}
    using forall_nat_set_suc[of n λρ n. (run_tick_count ρ K2 n)
                                   (run_tick_count_strictly ρ K1 n)]
    by simp
  thus ?thesis by auto
qed

lemma TESL_interp_stepwise_kills_coind_unfold:
    K1 kills K2TESLn =
      (    K1 ¬⇑ n prim                        ― ‹rule @{term kills_e1}
          K1  n prim   K2 ¬⇑  n prim)    ― ‹rule @{term kills_e2}
        K1 kills K2TESL≥ Suc n
proof -
  let ?kills = λn ρ. pn. hamlet ((Rep_run ρ) p K1)
                              (mp. ¬ hamlet ((Rep_run ρ) m K2))
  let ?ticks = λn ρ c. hamlet ((Rep_run ρ) n c)
  let ?dead = λn ρ c. m  n. ¬hamlet ((Rep_run ρ) m c)
  have  K1 kills K2TESLn = {ρ. ?kills n ρ} by simp
  also have ... = ({ρ. ¬ ?ticks n ρ K1}   {ρ. ?kills (Suc n) ρ})
                  ({ρ. ?ticks n ρ K1}  {ρ. ?dead n ρ K2})
  proof
    { fix ρ::::linordered_field run›
      assume ρ  {ρ. ?kills n ρ}
      hence ?kills n ρ by simp
      hence (?ticks n ρ K1  ?dead n ρ K2)  (¬?ticks n ρ K1  ?kills (Suc n) ρ)
        using Suc_leD by blast
      hence ρ  ({ρ. ?ticks n ρ K1}  {ρ. ?dead n ρ K2})
                ({ρ. ¬ ?ticks n ρ K1}  {ρ. ?kills (Suc n) ρ})
        by blast
    } thus {ρ. ?kills n ρ}
            {ρ. ¬ ?ticks n ρ K1}  {ρ. ?kills (Suc n) ρ} 
             {ρ. ?ticks n ρ K1}  {ρ. ?dead n ρ K2} by blast
  next
    { fix ρ::::linordered_field run›
      assume ρ  ({ρ. ¬ ?ticks n ρ K1}   {ρ. ?kills (Suc n) ρ})
                  ({ρ. ?ticks n ρ K1}  {ρ. ?dead n ρ K2})
      hence ¬ ?ticks n ρ K1  ?kills (Suc n) ρ
              ?ticks n ρ K1  ?dead n ρ K2 by blast
      moreover have ((¬ ?ticks n ρ K1)  (?kills (Suc n) ρ))  ?kills n ρ
        using dual_order.antisym not_less_eq_eq by blast
      ultimately have ?kills n ρ  ?ticks n ρ K1  ?dead n ρ K2 by blast
      hence ?kills n ρ using le_trans by blast
    } thus ({ρ. ¬ ?ticks n ρ K1}   {ρ. ?kills (Suc n) ρ})
                  ({ρ. ?ticks n ρ K1}  {ρ. ?dead n ρ K2})
           {ρ. ?kills n ρ} by blast
  qed
  also have ... = {ρ. ¬ ?ticks n ρ K1}  {ρ. ?kills (Suc n) ρ}
                  {ρ. ?ticks n ρ K1}  {ρ. ?dead n ρ K2}  {ρ. ?kills (Suc n) ρ}
    using Collect_cong Collect_disj_eq by auto
  also have ... =  K1 ¬⇑ n prim   K1 kills K2TESL≥ Suc n
                   K1  n prim   K2 ¬⇑  n prim
                   K1 kills K2TESL≥ Suc n by simp
  finally show ?thesis by blast
qed

text ‹
  The stepwise interpretation of a TESL formula is the intersection of the
  stepwise interpretations of its atomic components.
›
fun TESL_interpretation_stepwise
  ::::linordered_field TESL_formula  nat   run set›
  (⟦⟦ _ ⟧⟧TESL⇗≥ _)
where
  ⟦⟦ [] ⟧⟧TESLn = {ρ. True}
| ⟦⟦ φ # Φ ⟧⟧TESLn =  φTESLn  ⟦⟦ Φ ⟧⟧TESLn

lemma TESL_interpretation_stepwise_fixpoint:
  ⟦⟦ Φ ⟧⟧TESLn =  ((λφ.  φTESLn) ` set Φ)
by (induction Φ, simp, auto)
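
text ‹
  For instance, assuming hypothetical atomic formulae ‹φ1› and ‹φ2›, the stepwise
  interpretation of the specification ‹[φ1, φ2]› at index ‹n› unfolds into the
  intersection of the stepwise interpretations of ‹φ1› and ‹φ2› at index ‹n› (and of
  ‹{ρ. True}› for the empty tail), i.e. the set of runs that satisfy both constraints
  from instant ‹n› on. The fixpoint lemma expresses the same interpretation as an
  intersection indexed over the elements of the list.
›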

text ‹
  The global interpretation of a TESL formula is its stepwise interpretation
  starting at the first instant.
›
lemma TESL_interpretation_stepwise_zero:
   φ TESL =  φTESL0
by (induction φ, simp+)

lemma TESL_interpretation_stepwise_zero':
  ⟦⟦ Φ ⟧⟧TESL = ⟦⟦ Φ ⟧⟧TESL0
by (induction Φ, simp, simp add: TESL_interpretation_stepwise_zero)

lemma TESL_interpretation_stepwise_cons_morph:
   φTESLn  ⟦⟦ Φ ⟧⟧TESLn = ⟦⟦ φ # Φ ⟧⟧TESLn
by auto

theorem TESL_interp_stepwise_composition:
  shows ⟦⟦ Φ1 @ Φ2 ⟧⟧TESLn = ⟦⟦ Φ1 ⟧⟧TESLn  ⟦⟦ Φ2 ⟧⟧TESLn
by (induction Φ1, simp, auto)

section ‹Interpretation of configurations›

text ‹
  The interpretation of a configuration of the operational semantics abstract 
  machine is the intersection of:
  ▪ the interpretation of its context (the past),
  ▪ the interpretation of its present from the current instant,
  ▪ the interpretation of its future from the next instant.
›
fun HeronConf_interpretation
  ::::linordered_field config   run set›          ( _ config 71)
where
   Γ, n  Ψ  Φ config = ⟦⟦ Γ ⟧⟧prim  ⟦⟦ Ψ ⟧⟧TESLn  ⟦⟦ Φ ⟧⟧TESL≥ Suc n
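
text ‹
  As an informal illustration, consider hypothetical clocks ‹k1› and ‹k2› and the
  configuration whose context is ‹[(k1 ⇑ 0)]›, whose index is ‹1›, whose present is
  ‹[k1 implies k2]› and whose future is empty. Its interpretation is the set of runs
  in which ‹k1› ticks at instant ‹0›, the implication holds from instant ‹1› on, and
  nothing more is required, since the interpretation of the empty future is the set
  of all runs.
›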

lemma HeronConf_interp_composition:
    Γ1, n  Ψ1  Φ1 config   Γ2, n  Ψ2  Φ2 config
     =  (Γ1 @ Γ2), n  (Ψ1 @ Ψ2)  (Φ1 @ Φ2) config
  using TESL_interp_stepwise_composition symrun_interp_expansion
by (simp add: TESL_interp_stepwise_composition
              symrun_interp_expansion inf_assoc inf_left_commute)

text ‹
  When there are no remaining constraints on the present, the interpretation of
  a configuration is the same as that of the configuration at the next instant,
  where the former future becomes the present.
  This corresponds to the introduction rule of the operational semantics.
›
lemma HeronConf_interp_stepwise_instant_cases:
    Γ, n  []  Φ config =  Γ, Suc n  Φ  [] config
proof -
  have  Γ, n  []  Φ config = ⟦⟦ Γ ⟧⟧prim  ⟦⟦ [] ⟧⟧TESLn  ⟦⟦ Φ ⟧⟧TESL≥ Suc n
    by simp
  moreover have  Γ, Suc n  Φ  [] config
                  = ⟦⟦ Γ ⟧⟧prim  ⟦⟦ Φ ⟧⟧TESL≥ Suc n  ⟦⟦ [] ⟧⟧TESL≥ Suc n
    by simp
  moreover have ⟦⟦ Γ ⟧⟧prim  ⟦⟦ [] ⟧⟧TESLn  ⟦⟦ Φ ⟧⟧TESL≥ Suc n
                 = ⟦⟦ Γ ⟧⟧prim  ⟦⟦ Φ ⟧⟧TESL≥ Suc n  ⟦⟦ [] ⟧⟧TESL≥ Suc n
    by simp
  ultimately show ?thesis by blast
qed

text ‹
  The following lemmas use the unfolding properties of the stepwise denotational 
  semantics to give rewriting rules for the interpretation of configurations that
  match the elimination rules of the operational semantics.
›
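
text ‹
  As can be read off the lemmas below, a constraint with a single elimination rule
  (tag relations and the two precedence constraints) yields a single successor
  configuration, whereas a constraint with two elimination rules (sporadic, the two
  implications, time-delayed implication and kills) yields two alternative successors,
  which appear as a union of interpretations.
›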
lemma HeronConf_interp_stepwise_sporadicon_cases:
    Γ, n  ((K1 sporadic τ on K2) # Ψ)  Φ config
    =  Γ, n  Ψ  ((K1 sporadic τ on K2) # Φ) config
      ((K1  n) # (K2  n @ τ) # Γ), n  Ψ  Φ config
proof -
  have  Γ, n  (K1 sporadic τ on K2) # Ψ  Φ config
        = ⟦⟦ Γ ⟧⟧prim  ⟦⟦ (K1 sporadic τ on K2) # Ψ ⟧⟧TESLn  ⟦⟦ Φ ⟧⟧TESL≥ Suc n
    by simp
  moreover have  Γ, n  Ψ  ((K1 sporadic τ on K2) # Φ) config
                 =  ⟦⟦ Γ ⟧⟧prim  ⟦⟦ Ψ ⟧⟧TESLn
                   ⟦⟦ (K1 sporadic τ on K2) # Φ ⟧⟧TESL≥ Suc n
    by simp
  moreover have  ((K1  n) # (K2  n @ τ) # Γ), n  Ψ  Φ config
                 =  ⟦⟦ ((K1  n) # (K2  n @ τ) # Γ) ⟧⟧prim
                   ⟦⟦ Ψ ⟧⟧TESLn  ⟦⟦ Φ ⟧⟧TESL≥ Suc n
    by simp
  ultimately show ?thesis
  proof -
    have ( K1  n prim   K2  n @ τ prim   K1 sporadic τ on K2TESL≥ Suc n)
             (⟦⟦ Γ ⟧⟧prim  ⟦⟦ Ψ ⟧⟧TESLn)
          =  K1 sporadic τ on K2TESLn  (⟦⟦ Ψ ⟧⟧TESLn  ⟦⟦ Γ ⟧⟧prim)
      using TESL_interp_stepwise_sporadicon_coind_unfold by blast
    hence ⟦⟦ ((K1  n) # (K2  n @ τ) # Γ) ⟧⟧prim  ⟦⟦ Ψ ⟧⟧TESLn
             ⟦⟦ Γ ⟧⟧prim  ⟦⟦ Ψ ⟧⟧TESLn   K1 sporadic τ on K2TESL≥ Suc n
           = ⟦⟦ (K1 sporadic τ on K2) # Ψ ⟧⟧TESLn  ⟦⟦ Γ ⟧⟧prim by auto
    thus ?thesis by auto
  qed
qed

lemma HeronConf_interp_stepwise_tagrel_cases:
    Γ, n  ((time-relation K1, K2  R) # Ψ)  Φ config
    =  ((τvar(K1, n), τvar(K2, n)  R) # Γ), n
         Ψ  ((time-relation K1, K2  R) # Φ) config
proof -
  have  Γ, n  (time-relation K1, K2  R) # Ψ  Φ config
        = ⟦⟦ Γ ⟧⟧prim  ⟦⟦ (time-relation K1, K2  R) # Ψ ⟧⟧TESLn
         ⟦⟦ Φ ⟧⟧TESL≥ Suc n by simp
  moreover have  ((τvar(K1, n), τvar(K2, n)  R) # Γ), n
                   Ψ  ((time-relation K1, K2  R) # Φ) config
                 = ⟦⟦ (τvar(K1, n), τvar(K2, n)  R) # Γ ⟧⟧prim  ⟦⟦ Ψ ⟧⟧TESLn
                  ⟦⟦ (time-relation K1, K2  R) # Φ ⟧⟧TESL≥ Suc n
    by simp
  ultimately show ?thesis
  proof -
    have  τvar(K1, n), τvar(K2, n)  R prim
            time-relation K1, K2  RTESL≥ Suc n
           ⟦⟦ Ψ ⟧⟧TESLn = ⟦⟦ (time-relation K1, K2  R) # Ψ ⟧⟧TESLn
      using TESL_interp_stepwise_tagrel_coind_unfold
            TESL_interpretation_stepwise_cons_morph by blast
    thus ?thesis by auto
  qed
qed

lemma HeronConf_interp_stepwise_implies_cases:
    Γ, n  ((K1 implies K2) # Ψ)  Φ config
      =  ((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 implies K2) # Φ) config
        ((K1  n) # (K2  n) # Γ), n  Ψ  ((K1 implies K2) # Φ) config
proof -
  have  Γ, n  (K1 implies K2) # Ψ  Φ config
        = ⟦⟦ Γ ⟧⟧prim  ⟦⟦ (K1 implies K2) # Ψ ⟧⟧TESLn  ⟦⟦ Φ ⟧⟧TESL≥ Suc n
    by simp
  moreover have  ((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 implies K2) # Φ) config
                = ⟦⟦ (K1 ¬⇑ n) # Γ ⟧⟧prim  ⟦⟦ Ψ ⟧⟧TESLn
                 ⟦⟦ (K1 implies K2) # Φ ⟧⟧TESL≥ Suc n by simp
  moreover have  ((K1  n) # (K2  n) # Γ), n  Ψ  ((K1 implies K2) # Φ) config
                =  ⟦⟦ ((K1  n) # (K2  n) # Γ) ⟧⟧prim  ⟦⟦ Ψ ⟧⟧TESLn
                  ⟦⟦ (K1 implies K2) # Φ ⟧⟧TESL≥ Suc n by simp
  ultimately show ?thesis
  proof -
    have f1: ( K1 ¬⇑ n prim   K1  n prim   K2  n prim)
                  K1 implies K2TESL≥ Suc n  (⟦⟦ Ψ ⟧⟧TESLn
                 ⟦⟦ Φ ⟧⟧TESL≥ Suc n)
              = ⟦⟦ (K1 implies K2) # Ψ ⟧⟧TESLn  ⟦⟦ Φ ⟧⟧TESL≥ Suc n
      using TESL_interp_stepwise_implies_coind_unfold
            TESL_interpretation_stepwise_cons_morph by blast
    have  K1 ¬⇑ n prim  ⟦⟦ Γ ⟧⟧prim   K1  n prim  ⟦⟦ (K2  n) # Γ ⟧⟧prim
         = ( K1 ¬⇑ n prim   K1  n prim   K2  n prim)  ⟦⟦ Γ ⟧⟧prim
      by force
    hence  Γ, n  ((K1 implies K2) # Ψ)  Φ config
      = ( K1 ¬⇑ n prim  ⟦⟦ Γ ⟧⟧prim   K1  n prim  ⟦⟦ (K2  n) # Γ ⟧⟧prim)
         (⟦⟦ Ψ ⟧⟧TESLn  ⟦⟦ (K1 implies K2) # Φ ⟧⟧TESL≥ Suc n)
      using f1 by (simp add: inf_left_commute inf_assoc)
    thus ?thesis by (simp add: Int_Un_distrib2 inf_assoc)
  qed
qed

lemma HeronConf_interp_stepwise_implies_not_cases:
    Γ, n  ((K1 implies not K2) # Ψ)  Φ config
      =  ((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 implies not K2) # Φ) config
        ((K1  n) # (K2 ¬⇑ n) # Γ), n  Ψ  ((K1 implies not K2) # Φ) config
proof -
  have  Γ, n  (K1 implies not K2) # Ψ  Φ config
        = ⟦⟦ Γ ⟧⟧prim  ⟦⟦ (K1 implies not K2) # Ψ ⟧⟧TESLn  ⟦⟦ Φ ⟧⟧TESL≥ Suc n
    by simp
  moreover have  ((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 implies not K2) # Φ) config
                  = ⟦⟦ (K1 ¬⇑ n) # Γ ⟧⟧prim  ⟦⟦ Ψ ⟧⟧TESLn
                   ⟦⟦ (K1 implies not K2) # Φ ⟧⟧TESL≥ Suc n by simp
  moreover have  ((K1  n) # (K2 ¬⇑ n) # Γ), n  Ψ  ((K1 implies not K2) # Φ) config
                  = ⟦⟦ ((K1  n) # (K2 ¬⇑ n) # Γ) ⟧⟧prim  ⟦⟦ Ψ ⟧⟧TESLn
                   ⟦⟦ (K1 implies not K2) # Φ ⟧⟧TESL≥ Suc n by simp
  ultimately show ?thesis
  proof -
    have f1: ( K1 ¬⇑ n prim   K1  n prim   K2 ¬⇑ n prim)
                K1 implies not K2TESL≥ Suc n
               (⟦⟦ Ψ ⟧⟧TESLn  ⟦⟦ Φ ⟧⟧TESL≥ Suc n)
              = ⟦⟦ (K1 implies not K2) # Ψ ⟧⟧TESLn  ⟦⟦ Φ ⟧⟧TESL≥ Suc n
      using TESL_interp_stepwise_implies_not_coind_unfold
            TESL_interpretation_stepwise_cons_morph by blast
    have  K1 ¬⇑ n prim  ⟦⟦ Γ ⟧⟧prim   K1  n prim  ⟦⟦ (K2 ¬⇑ n) # Γ ⟧⟧prim
           = ( K1 ¬⇑ n prim   K1  n prim   K2 ¬⇑ n prim)  ⟦⟦ Γ ⟧⟧prim
      by force
    then have  Γ, n  ((K1 implies not K2) # Ψ)  Φ config
                 = ( K1 ¬⇑ n prim  ⟦⟦ Γ ⟧⟧prim   K1  n prim
                     ⟦⟦ (K2 ¬⇑ n) # Γ ⟧⟧prim)  (⟦⟦ Ψ ⟧⟧TESLn
                     ⟦⟦ (K1 implies not K2) # Φ ⟧⟧TESL≥ Suc n)
      using f1 by (simp add: inf_left_commute inf_assoc)
    thus ?thesis by (simp add: Int_Un_distrib2 inf_assoc)
  qed
qed

lemma HeronConf_interp_stepwise_timedelayed_cases:
   Γ, n  ((K1 time-delayed by δτ on K2 implies K3) # Ψ)  Φ config
    =  ((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 time-delayed by δτ on K2 implies K3) # Φ) config
      ((K1  n) # (K2 @ n  δτ  K3) # Γ), n
         Ψ  ((K1 time-delayed by δτ on K2 implies K3) # Φ) config
proof -
  have 1: Γ, n  (K1 time-delayed by δτ on K2 implies K3) # Ψ  Φ config
         = ⟦⟦ Γ ⟧⟧prim  ⟦⟦ (K1 time-delayed by δτ on K2 implies K3) # Ψ ⟧⟧TESLn
           ⟦⟦ Φ ⟧⟧TESL≥ Suc n by simp
  moreover have  ((K1 ¬⇑ n) # Γ), n
                   Ψ  ((K1 time-delayed by δτ on K2 implies K3) # Φ) config
                 = ⟦⟦ (K1 ¬⇑ n) # Γ ⟧⟧prim  ⟦⟦ Ψ ⟧⟧TESLn
                   ⟦⟦ (K1 time-delayed by δτ on K2 implies K3) # Φ ⟧⟧TESL≥ Suc n
    by simp
  moreover have  ((K1  n) # (K2 @ n  δτ  K3) # Γ), n
                   Ψ  ((K1 time-delayed by δτ on K2 implies K3) # Φ) config
                 = ⟦⟦ (K1  n) # (K2 @ n  δτ  K3) # Γ ⟧⟧prim  ⟦⟦ Ψ ⟧⟧TESLn
                   ⟦⟦ (K1 time-delayed by δτ on K2 implies K3) # Φ ⟧⟧TESL≥ Suc n
    by simp
  ultimately show ?thesis
  proof -
    have  Γ, n  (K1 time-delayed by δτ on K2 implies K3) # Ψ  Φ config
      = ⟦⟦ Γ ⟧⟧prim  (⟦⟦ (K1 time-delayed by δτ on K2 implies K3) # Ψ ⟧⟧TESLn
         ⟦⟦ Φ ⟧⟧TESL≥ Suc n)
      using 1 by blast
    hence  Γ, n  (K1 time-delayed by δτ on K2 implies K3) # Ψ  Φ config
          = ( K1 ¬⇑ n prim   K1  n prim   K2 @ n  δτ  K3 prim)
             (⟦⟦ Γ ⟧⟧prim  (⟦⟦ Ψ ⟧⟧TESLn
             ⟦⟦ (K1 time-delayed by δτ on K2 implies K3) # Φ ⟧⟧TESL≥ Suc n))
      using TESL_interpretation_stepwise_cons_morph
            TESL_interp_stepwise_timedelayed_coind_unfold
    proof -
      have ⟦⟦ (K1 time-delayed by δτ on K2 implies K3) # Ψ ⟧⟧TESLn
            = ( K1 ¬⇑ n prim   K1  n prim   K2 @ n  δτ  K3 prim)
              K1 time-delayed by δτ on K2 implies K3TESL≥ Suc n  ⟦⟦ Ψ ⟧⟧TESLn
        using TESL_interp_stepwise_timedelayed_coind_unfold
              TESL_interpretation_stepwise_cons_morph by blast
      then show ?thesis
        by (simp add: Int_assoc Int_left_commute)
    qed
    then show ?thesis by (simp add: inf_assoc inf_sup_distrib2)
  qed
qed

lemma HeronConf_interp_stepwise_weakly_precedes_cases:
    Γ, n  ((K1 weakly precedes K2) # Ψ)  Φ config
    =  ((# K2 n, # K1 n  (λ(x,y). xy)) # Γ), n
       Ψ  ((K1 weakly precedes K2) # Φ) config
proof -
  have  Γ, n  (K1 weakly precedes K2) # Ψ  Φ config
        = ⟦⟦ Γ ⟧⟧prim  ⟦⟦ (K1 weakly precedes K2) # Ψ ⟧⟧TESLn
           ⟦⟦ Φ ⟧⟧TESL≥ Suc n by simp
  moreover have  ((# K2 n, # K1 n  (λ(x,y). xy)) # Γ), n
                   Ψ  ((K1 weakly precedes K2) # Φ) config
                = ⟦⟦ (# K2 n, # K1 n  (λ(x,y). xy)) # Γ ⟧⟧prim
                 ⟦⟦ Ψ ⟧⟧TESLn  ⟦⟦ (K1 weakly precedes K2) # Φ ⟧⟧TESL≥ Suc n
    by simp
  ultimately show ?thesis
  proof -
    have  # K2 n, # K1 n  (λ(x,y). xy) prim
              K1 weakly precedes K2TESL≥ Suc n  ⟦⟦ Ψ ⟧⟧TESLn
          = ⟦⟦ (K1 weakly precedes K2) # Ψ ⟧⟧TESLn
      using TESL_interp_stepwise_weakly_precedes_coind_unfold
            TESL_interpretation_stepwise_cons_morph by blast
    thus ?thesis by auto
  qed
qed

lemma HeronConf_interp_stepwise_strictly_precedes_cases:
    Γ, n  ((K1 strictly precedes K2) # Ψ)  Φ config
    =  ((# K2 n, #< K1 n  (λ(x,y). xy)) # Γ), n
       Ψ  ((K1 strictly precedes K2) # Φ) config
proof -
  have  Γ, n  (K1 strictly precedes K2) # Ψ  Φ config
        = ⟦⟦ Γ ⟧⟧prim  ⟦⟦ (K1 strictly precedes K2) # Ψ ⟧⟧TESLn
           ⟦⟦ Φ ⟧⟧TESL≥ Suc n by simp
  moreover have  ((# K2 n, #< K1 n  (λ(x,y). xy)) # Γ), n
                   Ψ  ((K1 strictly precedes K2) # Φ) config
                = ⟦⟦ (# K2 n, #< K1 n  (λ(x,y). xy)) # Γ ⟧⟧prim
                 ⟦⟦ Ψ ⟧⟧TESLn
                 ⟦⟦ (K1 strictly precedes K2) # Φ ⟧⟧TESL≥ Suc n by simp
  ultimately show ?thesis
  proof -
    have  # K2 n, #< K1 n  (λ(x,y). xy) prim
              K1 strictly precedes K2TESL≥ Suc n  ⟦⟦ Ψ ⟧⟧TESLn
          = ⟦⟦ (K1 strictly precedes K2) # Ψ ⟧⟧TESLn
      using TESL_interp_stepwise_strictly_precedes_coind_unfold
            TESL_interpretation_stepwise_cons_morph by blast
    thus ?thesis by auto
  qed
qed

lemma HeronConf_interp_stepwise_kills_cases:
    Γ, n  ((K1 kills K2) # Ψ)  Φ config
    =  ((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 kills K2) # Φ) config
      ((K1  n) # (K2 ¬⇑  n) # Γ), n  Ψ  ((K1 kills K2) # Φ) config
proof -
  have  Γ, n  ((K1 kills K2) # Ψ)  Φ config
        = ⟦⟦ Γ ⟧⟧prim  ⟦⟦ (K1 kills K2) # Ψ ⟧⟧TESLn  ⟦⟦ Φ ⟧⟧TESL≥ Suc n
    by simp
  moreover have  ((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 kills K2) # Φ) config
                = ⟦⟦ (K1 ¬⇑ n) # Γ ⟧⟧prim  ⟦⟦ Ψ ⟧⟧TESLn
                   ⟦⟦ (K1 kills K2) # Φ ⟧⟧TESL≥ Suc n by simp
  moreover have  ((K1  n) # (K2 ¬⇑  n) # Γ), n  Ψ  ((K1 kills K2) # Φ) config
                = ⟦⟦ (K1  n) # (K2 ¬⇑  n) # Γ ⟧⟧prim  ⟦⟦ Ψ ⟧⟧TESLn
                   ⟦⟦ (K1 kills K2) # Φ ⟧⟧TESL≥ Suc n by simp
  ultimately show ?thesis
    proof -
      have ⟦⟦ (K1 kills K2) # Ψ ⟧⟧TESLn
            = ( (K1 ¬⇑ n) prim   (K1  n) prim   (K2 ¬⇑  n) prim)
                (K1 kills K2)TESL≥ Suc n  ⟦⟦ Ψ ⟧⟧TESLn
        using TESL_interp_stepwise_kills_coind_unfold
              TESL_interpretation_stepwise_cons_morph by blast
      thus ?thesis by auto
    qed
qed

end

Theory Hygge_Theory

chapter‹Main Theorems›

theory Hygge_Theory
imports
  Corecursive_Prop

begin
text ‹
  Using the properties we have shown about the interpretation of configurations 
  and the stepwise unfolding of the denotational semantics, we can now prove 
  several important results about the construction of runs from a specification.
›
section ‹Initial configuration›

text ‹
  The denotational semantics of a specification @{term Ψ} is the interpretation,
  at the first instant, of the configuration that has an empty context, @{term Ψ}
  as its present, and an empty future.
  Therefore, the construction of a run that satisfies a specification can start
  from this initial configuration.
›
theorem solve_start:
  shows ⟦⟦ Ψ ⟧⟧TESL =  [], 0  Ψ  [] config
  proof -
    have ⟦⟦ Ψ ⟧⟧TESL = ⟦⟦ Ψ ⟧⟧TESL0
    by (simp add: TESL_interpretation_stepwise_zero')
    moreover have  [], 0  Ψ  [] config =
                      ⟦⟦ [] ⟧⟧prim  ⟦⟦ Ψ ⟧⟧TESL0  ⟦⟦ [] ⟧⟧TESL≥ Suc 0
    by simp
    ultimately show ?thesis by auto
  qed

section ‹Soundness›
text ‹
  The interpretation of a configuration @{term 𝒮2} that is a refinement of a
  configuration @{term 𝒮1} is contained in the interpretation of @{term 𝒮1}.
  This means that by making successive choices in building the instants of a run,
  we preserve the soundness of the constructed run with regard to the original 
  specification.
›
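
text ‹
  Informally, a reduction step can only add constraints to a configuration, so along
  a reduction sequence ‹𝒮0 ↪ 𝒮1 ↪ 𝒮2 ↪ …› the interpretations form a decreasing
  chain ‹⟦ 𝒮0 ⟧ ⊇ ⟦ 𝒮1 ⟧ ⊇ ⟦ 𝒮2 ⟧ ⊇ …›. The following lemma establishes one step of
  this chain; its generalization to any number of steps yields the soundness theorem.
›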
lemma sound_reduction:
  assumes (Γ1, n1  Ψ1  Φ1)    (Γ2, n2  Ψ2  Φ2)
  shows ⟦⟦ Γ1 ⟧⟧prim  ⟦⟦ Ψ1 ⟧⟧TESLn1  ⟦⟦ Φ1 ⟧⟧TESL≥ Suc n1
            ⟦⟦ Γ2 ⟧⟧prim  ⟦⟦ Ψ2 ⟧⟧TESLn2  ⟦⟦ Φ2 ⟧⟧TESL≥ Suc n2 (is ?P)
proof -
  from assms consider
    (a) (Γ1, n1  Ψ1  Φ1)  i  (Γ2, n2  Ψ2  Φ2)
  | (b) (Γ1, n1  Ψ1  Φ1)  e  (Γ2, n2  Ψ2  Φ2)
    using operational_semantics_step.simps by blast
  thus ?thesis
  proof (cases)
    case a
      thus ?thesis by (simp add: operational_semantics_intro.simps)
  next
    case b thus ?thesis
    proof (rule operational_semantics_elim.cases)
      fix  Γ n K1 τ K2 Ψ Φ
      assume (Γ1, n1  Ψ1  Φ1) = (Γ, n  (K1 sporadic τ on K2) # Ψ  Φ)
      and (Γ2, n2  Ψ2  Φ2) = (Γ, n  Ψ  ((K1 sporadic τ on K2) # Φ))
      thus ?P using HeronConf_interp_stepwise_sporadicon_cases
                    HeronConf_interpretation.simps by blast
    next
      fix  Γ n K1 τ K2 Ψ Φ
      assume (Γ1, n1  Ψ1  Φ1) = (Γ, n  (K1 sporadic τ on K2) # Ψ  Φ)
      and (Γ2, n2  Ψ2  Φ2) = (((K1  n) # (K2  n @ τ) # Γ), n  Ψ  Φ)
      thus ?P using HeronConf_interp_stepwise_sporadicon_cases
                    HeronConf_interpretation.simps by blast
    next
      fix Γ n K1 K2 R Ψ Φ
      assume (Γ1, n1  Ψ1  Φ1) = (Γ, n  (time-relation K1, K2  R) # Ψ  Φ)
      and (Γ2, n2  Ψ2  Φ2) = (((τvar (K1, n), τvar (K2, n)  R) # Γ), n
                                       Ψ  ((time-relation K1, K2  R) # Φ))
      thus ?P using HeronConf_interp_stepwise_tagrel_cases
                    HeronConf_interpretation.simps by blast
    next
      fix Γ n K1 K2 Ψ Φ
      assume (Γ1, n1  Ψ1  Φ1) = (Γ, n  (K1 implies K2) # Ψ  Φ)
      and (Γ2, n2  Ψ2  Φ2) = (((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 implies K2) # Φ))
      thus ?P using HeronConf_interp_stepwise_implies_cases
                    HeronConf_interpretation.simps by blast
    next
      fix Γ n K1 K2 Ψ Φ
      assume (Γ1, n1  Ψ1  Φ1) = (Γ, n  ((K1 implies K2) # Ψ)  Φ)
      and (Γ2, n2  Ψ2  Φ2) = (((K1  n) # (K2  n) # Γ), n
                                    Ψ  ((K1 implies K2) # Φ))
      thus ?P using HeronConf_interp_stepwise_implies_cases
                    HeronConf_interpretation.simps by blast
    next
      fix Γ n K1 K2 Ψ Φ
      assume (Γ1, n1  Ψ1  Φ1) = (Γ, n  ((K1 implies not K2) # Ψ)  Φ)
      and (Γ2, n2  Ψ2  Φ2) = (((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 implies not K2) # Φ))
      thus ?P using HeronConf_interp_stepwise_implies_not_cases
                    HeronConf_interpretation.simps by blast
    next
      fix Γ n K1 K2 Ψ Φ
      assume (Γ1, n1  Ψ1  Φ1) = (Γ, n  ((K1 implies not K2) # Ψ)  Φ)
      and (Γ2, n2  Ψ2  Φ2) = (((K1  n) # (K2 ¬⇑ n) # Γ), n
                                    Ψ  ((K1 implies not K2) # Φ))
      thus ?P using HeronConf_interp_stepwise_implies_not_cases
                    HeronConf_interpretation.simps by blast
    next
      fix Γ n K1 δτ K2 K3 Ψ Φ
      assume (Γ1, n1  Ψ1  Φ1) =
                (Γ, n  ((K1 time-delayed by δτ on K2 implies K3) # Ψ)  Φ)
      and (Γ2, n2  Ψ2  Φ2) =
            (((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 time-delayed by δτ on K2 implies K3) # Φ))
      thus ?P using HeronConf_interp_stepwise_timedelayed_cases
                    HeronConf_interpretation.simps by blast
    next
      fix Γ n K1 δτ K2 K3 Ψ Φ
      assume (Γ1, n1  Ψ1  Φ1) =
               (Γ, n  ((K1 time-delayed by δτ on K2 implies K3) # Ψ)  Φ)
      and (Γ2, n2  Ψ2  Φ2)
            = (((K1  n) # (K2 @ n  δτ  K3) # Γ), n
                 Ψ  ((K1 time-delayed by δτ on K2 implies K3) # Φ))
      thus ?P using HeronConf_interp_stepwise_timedelayed_cases
                    HeronConf_interpretation.simps by blast
    next
      fix Γ n K1 K2 Ψ Φ
      assume (Γ1, n1  Ψ1  Φ1) = (Γ, n  ((K1 weakly precedes K2) # Ψ)  Φ)
      and (Γ2, n2  Ψ2  Φ2) = (((# K2 n, # K1 n  (λ(x, y). x  y)) # Γ), n
                                   Ψ  ((K1 weakly precedes K2) # Φ))
      thus ?P using HeronConf_interp_stepwise_weakly_precedes_cases
                    HeronConf_interpretation.simps by blast
    next
      fix Γ n K1 K2 Ψ Φ
      assume (Γ1, n1  Ψ1  Φ1) = (Γ, n  ((K1 strictly precedes K2) # Ψ)  Φ)
      and (Γ2, n2  Ψ2  Φ2) = (((# K2 n, #< K1 n  (λ(x, y). x  y)) # Γ), n
                                   Ψ  ((K1 strictly precedes K2) # Φ))
      thus ?P using HeronConf_interp_stepwise_strictly_precedes_cases
                    HeronConf_interpretation.simps by blast
    next
      fix Γ n K1 K2 Ψ Φ
      assume (Γ1, n1  Ψ1  Φ1) = (Γ, n  ((K1 kills K2) # Ψ)  Φ)
      and (Γ2, n2  Ψ2  Φ2) = (((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 kills K2) # Φ))
      thus ?P using HeronConf_interp_stepwise_kills_cases
                    HeronConf_interpretation.simps by blast
    next
      fix Γ n K1 K2 Ψ Φ
      assume (Γ1, n1  Ψ1  Φ1) = (Γ, n  ((K1 kills K2) # Ψ)  Φ)
      and (Γ2, n2  Ψ2  Φ2) =
            (((K1  n) # (K2 ¬⇑  n) # Γ), n  Ψ  ((K1 kills K2) # Φ))
      thus ?P using HeronConf_interp_stepwise_kills_cases
                    HeronConf_interpretation.simps by blast
    qed
  qed
qed

inductive_cases step_elim:𝒮1  𝒮2

lemma sound_reduction':
  assumes 𝒮1  𝒮2
  shows  𝒮1 config   𝒮2 config
proof -
  have s1 s2. ( s2 config   s1 config)  ¬(s1  s2)
    using sound_reduction by fastforce
  thus ?thesis using assms by blast
qed

lemma sound_reduction_generalized:
  assumes 𝒮1k 𝒮2
    shows  𝒮1 config   𝒮2 config
proof -
  from assms show ?thesis
  proof (induction k arbitrary: 𝒮2)
    case 0
      hence *: 𝒮10 𝒮2  𝒮1 = 𝒮2 by auto
      moreover have 𝒮1 = 𝒮2 using * "0.prems" by linarith
      ultimately show ?case by auto
  next
    case (Suc k)
      thus ?case
      proof -
        fix k :: nat
        assume ff: 𝒮1Suc k 𝒮2
        assume hi: 𝒮2. 𝒮1k 𝒮2   𝒮2 config   𝒮1 config
        obtain 𝒮n where red_decomp: (𝒮1k 𝒮n)  (𝒮n  𝒮2) using ff by auto
        hence  𝒮1 config   𝒮n config using hi by simp
        also have  𝒮n config   𝒮2 config by (simp add: red_decomp sound_reduction')
        ultimately show  𝒮1 config   𝒮2 config by simp
      qed
  qed
qed

text ‹
  Any configuration @{term 𝒮} that is obtained from the initial configuration
  after any number @{term k} of reduction steps denotes only runs that satisfy
  the initial specification @{term Ψ}.
›
theorem soundness:
  assumes ([], 0  Ψ  [])k 𝒮
    shows ⟦⟦ Ψ ⟧⟧TESL   𝒮 config
  using assms sound_reduction_generalized solve_start by blast

section ‹Completeness›

text ‹
  We will now show that any run that satisfies a specification can be derived from
  the initial configuration: after any number of reduction steps, some reachable
  configuration still denotes this run.

  We start by proving that any run that is denoted by a configuration @{term 𝒮}
  is necessarily denoted by at least one of the configurations that can be directly
  reached from @{term 𝒮}.
›
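
text ‹
  This coverage property follows from the case lemmas of the previous section: each of
  them rewrites the interpretation of a configuration either as the interpretation of a
  single successor configuration, or as the union of the interpretations of two
  successor configurations, and these successors belong to the set of direct successors
  computed by ‹𝒞next›.
›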
lemma complete_direct_successors:
  shows  Γ, n  Ψ  Φ config  (X𝒞next (Γ, n  Ψ  Φ).  X config)
  proof (induct Ψ)
    case Nil
    show ?case
      using HeronConf_interp_stepwise_instant_cases operational_semantics_step.simps
            operational_semantics_intro.instant_i
      by fastforce
  next
    case (Cons ψ Ψ)  thus ?case
      proof (cases ψ)
        case (SporadicOn K1 τ K2) thus ?thesis 
          using HeronConf_interp_stepwise_sporadicon_cases
                                      [of Γ n K1 τ K2 Ψ Φ]
                Cnext_solve_sporadicon[of Γ n Ψ K1 τ K2 Φ] by blast
      next
        case (TagRelation K1 K2 R) thus ?thesis
          using HeronConf_interp_stepwise_tagrel_cases
                                  [of Γ n K1 K2 R Ψ Φ]
                Cnext_solve_tagrel[of K1 n K2 R Γ Ψ Φ] by blast
      next
        case (Implies K1 K2) thus ?thesis
          using HeronConf_interp_stepwise_implies_cases
                                   [of Γ n K1 K2 Ψ Φ]
                Cnext_solve_implies[of K1 n Γ Ψ K2 Φ] by blast
      next
        case (ImpliesNot K1 K2) thus ?thesis
          using HeronConf_interp_stepwise_implies_not_cases
                                       [of Γ n K1 K2 Ψ Φ]
                Cnext_solve_implies_not[of K1 n Γ Ψ K2 Φ] by blast
      next
        case (TimeDelayedBy Kmast τ Kmeas Kslave) thus ?thesis
          using HeronConf_interp_stepwise_timedelayed_cases
                          [of Γ n Kmast τ Kmeas Kslave Ψ Φ]
                Cnext_solve_timedelayed
                          [of Kmast n Γ Ψ τ Kmeas Kslave Φ] by blast
      next
        case (WeaklyPrecedes K1 K2) thus ?thesis
          using HeronConf_interp_stepwise_weakly_precedes_cases
                                           [of Γ n K1 K2 Ψ Φ]
                Cnext_solve_weakly_precedes[of K2 n K1 Γ Ψ  Φ]
          by blast
      next
        case (StrictlyPrecedes K1 K2)  thus ?thesis
          using HeronConf_interp_stepwise_strictly_precedes_cases
                                             [of Γ n K1 K2 Ψ Φ]
                Cnext_solve_strictly_precedes[of K2 n K1 Γ Ψ  Φ]
          by blast
      next
        case (Kills K1 K2) thus ?thesis
          using HeronConf_interp_stepwise_kills_cases[of Γ n K1 K2 Ψ Φ]
                Cnext_solve_kills[of K1 n Γ Ψ K2 Φ] by blast
      qed
  qed

lemma complete_direct_successors':
  shows  𝒮 config  (X𝒞next 𝒮.  X config)
proof -
  from HeronConf_interpretation.cases obtain Γ n Ψ Φ
    where 𝒮 = (Γ, n  Ψ  Φ) by blast
  with complete_direct_successors[of Γ n Ψ Φ] show ?thesis by simp
qed

text ‹
  Therefore, if a run belongs to the interpretation of a configuration, it necessarily
  belongs to the interpretation of one of the configurations derived from it.
›
lemma branch_existence:
  assumes ρ   𝒮1 config
  shows 𝒮2. (𝒮1  𝒮2)  (ρ   𝒮2 config)
proof -
  from assms complete_direct_successors' have ρ  (X𝒞next 𝒮1.  X config) by blast
  hence s𝒞next 𝒮1. ρ   s config by simp
  thus ?thesis by blast
qed

lemma branch_existence':
  assumes ρ   𝒮1 config
  shows 𝒮2. (𝒮1k 𝒮2)  (ρ   𝒮2 config)
proof (induct k)
  case 0
    thus ?case by (simp add: assms)
next
  case (Suc k)
    thus ?case
      using branch_existence relpowp_Suc_I[of k ‹operational_semantics_step›]
    by blast
qed

text‹
  Any run that satisfies the original specification @{term Ψ} is denoted by some
  configuration @{term 𝒮} reachable in any given number @{term k} of reduction steps
  from the initial configuration. Hence the construction of such a run from the
  initial configuration can be carried on to any depth of reduction.
›
theorem completeness:
  assumes ρ  ⟦⟦ Ψ ⟧⟧TESL
  shows 𝒮. (([], 0  Ψ  [])k  𝒮)
            ρ   𝒮 config
  using assms branch_existence' solve_start by blast

section ‹Progress›

text ‹
  Reduction steps do not guarantee that the construction of a run progresses in the
  sequence of instants. We need to show that it is always possible to reach the next
  instant, and therefore any future instant, in a finite number of reduction steps.
›
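
text ‹
  The proof below proceeds by induction on the present of the configuration: each formula
  in the present can be discharged by one elimination step, possibly at the cost of new
  constraints in the context, and once the present is empty, the introduction rule moves
  the configuration to instant ‹Suc n›. Iterating this argument reaches any future
  instant in finitely many steps.
›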
lemma instant_index_increase:
  assumes ρ   Γ, n  Ψ  Φ config
  shows   Γk Ψk Φk k. ((Γ, n  Ψ  Φ)k  (Γk, Suc n  Ψk  Φk))
                          ρ   Γk, Suc n  Ψk  Φk config
proof (insert assms, induct Ψ arbitrary: Γ Φ)
  case (Nil Γ Φ)
    then show ?case
    proof -
      have (Γ, n  []  Φ)1 (Γ, Suc n  Φ  [])
        using instant_i intro_part by fastforce
      moreover have  Γ, n  []  Φ config =  Γ, Suc n  Φ  [] config
        by auto
      moreover have ρ   Γ, Suc n  Φ  [] config
        using assms Nil.prems calculation(2) by blast
      ultimately show ?thesis by blast
    qed
next
  case (Cons ψ Ψ)
    then show ?case
    proof (induct ψ)
      case (SporadicOn K1 τ K2)
        have branches:  Γ, n  ((K1 sporadic τ on K2) # Ψ)  Φ config
                      =  Γ, n  Ψ  ((K1 sporadic τ on K2) # Φ) config
                        ((K1  n) # (K2  n @ τ) # Γ), n  Ψ  Φ config
          using HeronConf_interp_stepwise_sporadicon_cases by simp
        have br1: ρ   Γ, n  Ψ  ((K1 sporadic τ on K2) # Φ) config
                     Γk Ψk Φk k.
                      ((Γ, n  ((K1 sporadic τ on K2) # Ψ)  Φ)k (Γk, Suc n  Ψk  Φk))
                       ρ   Γk, Suc n  Ψk  Φk config
        proof -
          assume h1: ρ   Γ, n  Ψ  ((K1 sporadic τ on K2) # Φ) config
          hence Γk Ψk Φk k. ((Γ, n  Ψ  ((K1 sporadic τ on K2) # Φ))k (Γk, Suc n  Ψk  Φk))
                                  (ρ   Γk, Suc n  Ψk  Φk config)
            using h1 SporadicOn.prems by simp
          from this obtain Γk Ψk Φk k where
              fp:((Γ, n  Ψ  ((K1 sporadic τ on K2) # Φ))k (Γk, Suc n  Ψk  Φk))
                 ρ   Γk, Suc n  Ψk  Φk config by blast
          have
            (Γ, n  ((K1 sporadic τ on K2) # Ψ)  Φ)
               (Γ, n  Ψ  ((K1 sporadic τ on K2) # Φ))
            by (simp add: elims_part sporadic_on_e1)
          with fp relpowp_Suc_I2 have
            ((Γ, n  ((K1 sporadic τ on K2) # Ψ)  Φ)Suc k (Γk, Suc n  Ψk  Φk)) by auto
          thus ?thesis using fp by blast
        qed
        have br2: ρ   ((K1  n) # (K2  n @ τ) # Γ), n  Ψ  Φ config
                   Γk Ψk Φk k. ((Γ, n  ((K1 sporadic τ on K2) # Ψ)  Φ)k (Γk, Suc n  Ψk  Φk))
                             ρ   Γk, Suc n  Ψk  Φk config
        proof -
          assume h2: ρ   ((K1  n) # (K2  n @ τ) # Γ), n  Ψ  Φ config
          hence Γk Ψk Φk k. ((((K1  n) # (K2  n @ τ) # Γ), n  Ψ  Φ)k (Γk, Suc n  Ψk  Φk))
                              ρ   Γk, Suc n  Ψk  Φk config
            using h2 SporadicOn.prems by simp

            from this obtain Γk Ψk Φk k
            where fp:((((K1  n) # (K2  n @ τ) # Γ), n  Ψ  Φ)k (Γk, Suc n  Ψk  Φk))
              and rc:ρ   Γk, Suc n  Ψk  Φk config by blast
            have pc:(Γ, n  ((K1 sporadic τ on K2) # Ψ)  Φ)
               (((K1  n) # (K2  n @ τ) # Γ), n  Ψ  Φ)
            by (simp add: elims_part sporadic_on_e2)
            hence (Γ, n  (K1 sporadic τ on K2) # Ψ  Φ)Suc k (Γk, Suc n  Ψk  Φk)
                using fp relpowp_Suc_I2 by auto
            with rc show ?thesis by blast
        qed
        from branches SporadicOn.prems(2) have
          ρ   Γ, n  Ψ  ((K1 sporadic τ on K2) # Φ) config
               ((K1  n) # (K2  n @ τ) # Γ), n  Ψ  Φ config
          by simp
        with br1 br2 show ?case by blast
  next
    case (TagRelation K1 K2 R)
      have branches:  Γ, n  ((time-relation K1, K2  R) # Ψ)  Φ config
          =  ((τvar(K1, n), τvar(K2, n)  R) # Γ), n
               Ψ  ((time-relation K1, K2  R) # Φ) config
        using HeronConf_interp_stepwise_tagrel_cases by simp
      thus ?case
      proof -
        have Γk Ψk Φk k.
            ((((τvar(K1, n), τvar(K2, n)  R) # Γ), n
                 Ψ  ((time-relation K1, K2  R) # Φ))k (Γk, Suc n  Ψk  Φk))  ρ   Γk, Suc n  Ψk  Φk config
          using TagRelation.prems by simp

        from this obtain Γk Ψk Φk k
          where fp:((((τvar(K1, n), τvar(K2, n)  R) # Γ), n
                         Ψ  ((time-relation K1, K2  R) # Φ))k (Γk, Suc n  Ψk  Φk))
            and rc:ρ   Γk, Suc n  Ψk  Φk config by blast
        have pc:(Γ, n  ((time-relation K1, K2  R) # Ψ)  Φ)
             (((τvar (K1, n), τvar (K2, n)  R) # Γ), n
                   Ψ  ((time-relation K1, K2  R) # Φ))
          by (simp add: elims_part tagrel_e)
        hence (Γ, n  (time-relation K1, K2  R) # Ψ  Φ)Suc k (Γk, Suc n  Ψk  Φk)
          using fp relpowp_Suc_I2 by auto
        with rc show ?thesis by blast
      qed
  next
    case (Implies K1 K2)
      have branches:  Γ, n  ((K1 implies K2) # Ψ)  Φ config
          =  ((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 implies K2) # Φ) config
            ((K1  n) # (K2  n) # Γ), n  Ψ  ((K1 implies K2) # Φ) config
        using HeronConf_interp_stepwise_implies_cases by simp
      moreover have br1: ρ   ((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 implies K2) # Φ) config
                 Γk Ψk Φk k. ((Γ, n  ((K1 implies K2) # Ψ)  Φ)k (Γk, Suc n  Ψk  Φk))
                   ρ   Γk, Suc n  Ψk  Φk config
      proof -
        assume h1: ρ   ((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 implies K2) # Φ) config
        then have Γk Ψk Φk k.
                    ((((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 implies K2) # Φ))k (Γk, Suc n  Ψk  Φk))
                   ρ   Γk, Suc n  Ψk  Φk config
          using h1 Implies.prems by simp
        from this obtain Γk Ψk Φk k where
          fp:((((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 implies K2) # Φ))k (Γk, Suc n  Ψk  Φk))
          and rc:ρ   Γk, Suc n  Ψk  Φk config by blast
        have pc:(Γ, n  (K1 implies K2) # Ψ  Φ)
                   (((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 implies K2) # Φ))
          by (simp add: elims_part implies_e1)
        hence (Γ, n  (K1 implies K2) # Ψ  Φ)Suc k (Γk, Suc n  Ψk  Φk)
          using fp relpowp_Suc_I2 by auto
        with rc show ?thesis by blast
      qed
      moreover have br2: ρ   ((K1  n) # (K2  n) # Γ), n
                                 Ψ  ((K1 implies K2) # Φ) config
                             Γk Ψk Φk k. ((Γ, n  ((K1 implies K2) # Ψ)  Φ)k (Γk, Suc n  Ψk  Φk))
                                   ρ   Γk, Suc n  Ψk  Φk config
      proof -
        assume h2: ρ   ((K1  n) # (K2  n) # Γ), n
                           Ψ  ((K1 implies K2) # Φ) config
        then have Γk Ψk Φk k. (
                        (((K1  n) # (K2  n) # Γ), n  Ψ  ((K1 implies K2) # Φ))k (Γk, Suc n  Ψk  Φk)
                    )  ρ   Γk, Suc n  Ψk  Φk config
          using h2 Implies.prems by simp
        from this obtain Γk Ψk Φk k where
            fp:(((K1  n) # (K2  n) # Γ), n  Ψ  ((K1 implies K2) # Φ))k (Γk, Suc n  Ψk  Φk)
        and rc:ρ   Γk, Suc n  Ψk  Φk config by blast
        have (Γ, n  ((K1 implies K2) # Ψ)  Φ)
               (((K1  n) # (K2  n) # Γ), n  Ψ  ((K1 implies K2) # Φ))
          by (simp add: elims_part implies_e2)
        hence (Γ, n  ((K1 implies K2) # Ψ)  Φ)Suc k (Γk, Suc n  Ψk  Φk)
          using fp relpowp_Suc_I2 by auto
        with rc show ?thesis by blast
      qed
      ultimately show ?case using Implies.prems(2) by blast
  next
    case (ImpliesNot K1 K2)
      have branches:  Γ, n  ((K1 implies not K2) # Ψ)  Φ config
          =  ((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 implies not K2) # Φ) config
            ((K1  n) # (K2 ¬⇑ n) # Γ), n  Ψ  ((K1 implies not K2) # Φ) config
        using HeronConf_interp_stepwise_implies_not_cases by simp
      moreover have br1: ρ   ((K1 ¬⇑ n) # Γ), n
                             Ψ  ((K1 implies not K2) # Φ) config
                 Γk Ψk Φk k. ((Γ, n  ((K1 implies not K2) # Ψ)  Φ)k (Γk, Suc n  Ψk  Φk))
                   ρ   Γk, Suc n  Ψk  Φk config
      proof -
        assume h1: ρ   ((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 implies not K2) # Φ) config
        then have Γk Ψk Φk k.
                    ((((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 implies not K2) # Φ))k (Γk, Suc n  Ψk  Φk))
                   ρ   Γk, Suc n  Ψk  Φk config
          using h1 ImpliesNot.prems by simp
        from this obtain Γk Ψk Φk k where
          fp:((((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 implies not K2) # Φ))k (Γk, Suc n  Ψk  Φk))
          and rc:ρ   Γk, Suc n  Ψk  Φk config by blast
        have pc:(Γ, n  (K1 implies not K2) # Ψ  Φ)
                   (((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 implies not K2) # Φ))
          by (simp add: elims_part implies_not_e1)
        hence (Γ, n  (K1 implies not K2) # Ψ  Φ)Suc k (Γk, Suc n  Ψk  Φk)
          using fp relpowp_Suc_I2 by auto
        with rc show ?thesis by blast
      qed
      moreover have br2: ρ   ((K1  n) # (K2 ¬⇑ n) # Γ), n
                             Ψ  ((K1 implies not K2) # Φ) config
                             Γk Ψk Φk k. ((Γ, n  ((K1 implies not K2) # Ψ)  Φ)k (Γk, Suc n  Ψk  Φk))
                                   ρ   Γk, Suc n  Ψk  Φk config
      proof -
        assume h2: ρ   ((K1  n) # (K2 ¬⇑ n) # Γ), n
                           Ψ  ((K1 implies not K2) # Φ) config
        then have Γk Ψk Φk k. (
                     (((K1  n) # (K2 ¬⇑ n) # Γ), n
                        Ψ  ((K1 implies not K2) # Φ))k (Γk, Suc n  Ψk  Φk)
                    )  ρ   Γk, Suc n  Ψk  Φk config
          using h2 ImpliesNot.prems by simp
        from this obtain Γk Ψk Φk k where
            fp:(((K1  n) # (K2 ¬⇑ n) # Γ), n  Ψ  ((K1 implies not K2) # Φ))k (Γk, Suc n  Ψk  Φk)
        and rc:ρ   Γk, Suc n  Ψk  Φk config by blast
        have (Γ, n  ((K1 implies not K2) # Ψ)  Φ)
               (((K1  n) # (K2 ¬⇑ n) # Γ), n  Ψ  ((K1 implies not K2) # Φ))
          by (simp add: elims_part implies_not_e2)
        hence (Γ, n  ((K1 implies not K2) # Ψ)  Φ)Suc k (Γk, Suc n  Ψk  Φk)
          using fp relpowp_Suc_I2 by auto
        with rc show ?thesis by blast
      qed
      ultimately show ?case  using ImpliesNot.prems(2) by blast
  next
    case (TimeDelayedBy K1 δτ K2 K3)
      have branches:
         Γ, n  ((K1 time-delayed by δτ on K2 implies K3) # Ψ)  Φ config
          =  ((K1 ¬⇑ n) # Γ), n
               Ψ  ((K1 time-delayed by δτ on K2 implies K3) # Φ) config
            ((K1  n) # (K2 @ n  δτ  K3) # Γ), n
               Ψ  ((K1 time-delayed by δτ on K2 implies K3) # Φ) config
        using HeronConf_interp_stepwise_timedelayed_cases by simp
      moreover have br1:
        ρ   ((K1 ¬⇑ n) # Γ), n
               Ψ  ((K1 time-delayed by δτ on K2 implies K3) # Φ) config
           Γk Ψk Φk k.
            ((Γ, n  ((K1 time-delayed by δτ on K2 implies K3) # Ψ)  Φ)k (Γk, Suc n  Ψk  Φk))
             ρ   Γk, Suc n  Ψk  Φk config
      proof -
        assume h1: ρ   ((K1 ¬⇑ n) # Γ), n
                          Ψ  ((K1 time-delayed by δτ on K2 implies K3) # Φ) config
        then have Γk Ψk Φk k.
          ((((K1 ¬⇑ n) # Γ), n  Ψ  ((K1 time-delayed by δτ on K2 implies K3) # Φ))k (Γk, Suc n  Ψk  Φk))
           ρ   Γk, Suc n  Ψk  Φk config
          using h1 TimeDelayedBy.prems by simp
        from this obtain Γk Ψk Φk k
          where fp:(((K1 ¬⇑ n) # Γ), n
                       Ψ  ((K1 time-delayed by δτ on K2 implies K3) # Φ))k (Γk, Suc n  Ψk  Φk)
            and rc:ρ   Γk, Suc n  Ψk  Φk config by blast
        have (Γ, n  ((K1 time-delayed by δτ on K2 implies K3) # Ψ)  Φ)
               (((K1 ¬⇑ n) # Γ), n
                     Ψ  ((K1 time-delayed by δτ on K2 implies K3) # Φ))
          by (simp add: elims_part timedelayed_e1)
        hence (Γ, n  ((K1 time-delayed by δτ on K2 implies K3) # Ψ)  Φ)Suc k (Γk, Suc n  Ψk  Φk)
          using fp relpowp_Suc_I2 by auto
        with rc show ?thesis by blast
      qed
      moreover have br2:
        ρ   ((K1  n) # (K2 @ n  δτ  K3) # Γ), n
               Ψ  ((K1 time-delayed by δτ on K2 implies K3) # Φ) config
           Γk Ψk Φk k.
              ((Γ, n  ((K1 time-delayed by δτ on K2 implies K3) # Ψ)  Φ)k (Γk, Suc n  Ψk  Φk))
               ρ   Γk, Suc n  Ψk  Φk config
      proof -
        assume h2: ρ   ((K1  n) # (K2 @ n  δτ  K3) # Γ), n
                     Ψ  ((K1 time-delayed by δτ on K2 implies K3) # Φ) config
        then have Γk Ψk Φk k. ((((K1  n) # (K2 @ n  δτ  K3) # Γ), n
                                  Ψ  ((K1 time-delayed by δτ on K2 implies K3) # Φ))k (Γk, Suc n  Ψk  Φk))
                                 ρ   Γk, Suc n  Ψk  Φk config
          using h2 TimeDelayedBy.prems by simp
        from this obtain Γk Ψk Φk k
          where fp:(((K1  n) # (K2 @ n  δτ  K3) # Γ), n
                          Ψ  ((K1 time-delayed by δτ on K2 implies K3) # Φ))k (Γk, Suc n  Ψk  Φk)
            and rc:ρ   Γk, Suc n  Ψk  Φk config by blast
        have (Γ, n  ((K1 time-delayed by δτ on K2 implies K3) # Ψ)  Φ)
               (((K1  n) # (K2 @ n  δτ  K3) # Γ), n
                   Ψ  ((K1 time-delayed by δτ on K2 implies K3) # Φ))
          by (simp add: elims_part timedelayed_e2)
        with fp relpowp_Suc_I2 have
          (Γ, n  ((K1 time-delayed by δτ on K2 implies K3) # Ψ)  Φ)Suc k (Γk, Suc n  Ψk  Φk)
          by auto
        with rc show ?thesis by blast
      qed
      ultimately show ?case using TimeDelayedBy.prems(2) by blast
  next
    case (WeaklyPrecedes K1 K2)
      have  Γ, n  ((K1 weakly precedes K2) # Ψ)  Φ config =
         ((# K2 n, # K1 n  (λ(x, y). x  y)) # Γ), n
             Ψ  ((K1 weakly precedes K2) # Φ) config
        using HeronConf_interp_stepwise_weakly_precedes_cases by simp
      moreover have ρ   ((# K2 n, # K1 n  (λ(x, y). x  y)) # Γ), n
                             Ψ  ((K1 weakly precedes K2) # Φ) config
             (Γk Ψk Φk k. ((Γ, n  ((K1 weakly precedes K2) # Ψ)  Φ)k (Γk, Suc n  Ψk  Φk))
                 (ρ   Γk, Suc n  Ψk  Φk config))
      proof -
        assume ρ   ((# K2 n, # K1 n  (λ(x, y). x  y)) # Γ), n
                         Ψ  ((K1 weakly precedes K2) # Φ) config
        hence Γk Ψk Φk k. ((((# K2 n, # K1 n  (λ(x, y). x  y)) # Γ), n
                                   Ψ  ((K1 weakly precedes K2) # Φ))k (Γk, Suc n  Ψk  Φk))
                           (ρ   Γk, Suc n  Ψk  Φk config)
          using  WeaklyPrecedes.prems by simp
        from this obtain Γk Ψk Φk k
          where fp:(((# K2 n, # K1 n  (λ(x, y). x  y)) # Γ), n
                                   Ψ  ((K1 weakly precedes K2) # Φ))k (Γk, Suc n  Ψk  Φk)
            and rc:ρ   Γk, Suc n  Ψk  Φk config by blast
        have (Γ, n  ((K1 weakly precedes K2) # Ψ)  Φ)
                 (((# K2 n, # K1 n  (λ(x, y). x  y)) # Γ), n
               Ψ  ((K1 weakly precedes K2) # Φ))
          by (simp add: elims_part weakly_precedes_e)
        with fp relpowp_Suc_I2 have (Γ, n  ((K1 weakly precedes K2) # Ψ)  Φ)Suc k (Γk, Suc n  Ψk  Φk)
          by auto
        with rc show ?thesis by blast
      qed
      ultimately show ?case using WeaklyPrecedes.prems(2) by blast
  next
    case (StrictlyPrecedes K1 K2)
      have  Γ, n  ((K1 strictly precedes K2) # Ψ)  Φ config =
         ((# K2 n, #< K1 n  (λ(x, y). x  y)) # Γ), n
           Ψ  ((K1 strictly precedes K2) # Φ) config
        using HeronConf_interp_stepwise_strictly_precedes_cases by simp
      moreover have ρ   ((# K2 n, #< K1 n  (λ(x, y). x  y)) # Γ), n
                             Ψ  ((K1 strictly precedes K2) # Φ) config
             (Γk Ψk Φk k. ((Γ, n  ((K1 strictly precedes K2) # Ψ)  Φ)k (Γk, Suc n  Ψk  Φk))
                 (ρ   Γk, Suc n  Ψk  Φk config))
      proof -
        assume ρ   ((# K2 n, #< K1 n  (λ(x, y). x  y)) # Γ), n
                         Ψ  ((K1 strictly precedes K2) # Φ) config
        hence Γk Ψk Φk k. ((((# K2 n, #< K1 n  (λ(x, y). x  y)) # Γ