
Dynamic Epistemic Logic


Dynamic Epistemic Logic (DEL) is a logic for reasoning about information change and exchange of information. These changes can be due to events that change factual properties of the actual world: for example a coin is publicly (or privately) flipped over. But what is mostly studied in dynamic epistemic logic are events that do not change factual properties of the world (they are called epistemic events) but that nevertheless bring about changes of (higher-order) beliefs: for example a card is revealed publicly (or privately) to be red.

Dynamic epistemic logic is a young field of research. It really started with Plaza's logic of public announcement.[1] Independently, Gerbrandy and Groeneveld[2] proposed a system that also deals with private announcements and that was inspired by the work of Veltman.[3] Another system was proposed by van Ditmarsch, whose main inspiration was the Cluedo game.[4] But the most influential and original system was the one proposed by Baltag, Moss and Solecki.[5][6] This system can deal with all the types of situations studied in the works above and provides a general approach to the topic.

DEL can be used in the context of multi-agent systems. It is built on top of epistemic logic to which it adds dynamics and events. Epistemic logic will first be recalled. Then, actions and events will enter into the picture and we will introduce the logical framework of DEL.[7]

Epistemic Logic


Epistemic logic is a modal logic concerned with the logical study of the notions of knowledge and belief. It is thereby concerned with understanding the process of reasoning about knowledge and belief: which principles relating the notions of knowledge and belief are intuitively plausible? Like epistemology, it stems from the Greek word ἐπιστήμη, or 'episteme', meaning knowledge. But epistemology is more concerned with analyzing the very nature of knowledge (addressing questions such as "What is the definition of knowledge?" or "How is knowledge acquired?"). In fact, epistemic logic grew out of epistemology in the Middle Ages thanks to the efforts of Burley and Ockham.[8] But the formal work, based on modal logic, that inaugurated contemporary research into epistemic logic dates back only to 1962 and is due to Hintikka.[9] It then sparked in the 1960s discussions about the principles of knowledge and belief, and many axioms for these notions were proposed and discussed.[10] For example, the interaction axioms K_j φ → B_j φ and B_j φ → K_j B_j φ are often considered to be intuitive principles: if agent j knows φ, then (s)he also believes φ; and if agent j believes φ, then (s)he knows that (s)he believes φ. More recently, these kinds of philosophical theories were taken up by researchers in economics,[11] artificial intelligence and theoretical computer science,[12] where reasoning about knowledge is a central topic. Due to the new setting in which epistemic logic was used, new perspectives and new features such as computability issues were added to its research agenda.


Syntax


AGT is a finite set whose elements are called agents and ATM is a set of propositional letters.

The epistemic language is an extension of the basic multi-modal language of modal logic with a common knowledge operator C_G and a distributed knowledge operator D_G. The epistemic language L_EL is defined inductively by the following grammar in BNF:

φ ::= p | ¬φ | (φ ∧ φ) | K_j φ | C_G φ | D_G φ

where p ∈ ATM, j ∈ AGT and G ⊆ AGT. The formula φ ∨ ψ is an abbreviation for ¬(¬φ ∧ ¬ψ), φ → ψ is an abbreviation for ¬φ ∨ ψ and φ ↔ ψ an abbreviation for (φ → ψ) ∧ (ψ → φ).

Group notions: general, common and distributed knowledge.

In a multi-agent setting there are three important epistemic concepts: general belief (or knowledge), distributed belief (or knowledge) and common belief (or knowledge). The notion of common belief (or knowledge) was first studied by Lewis in the context of conventions.[13] It was then applied to distributed systems and to game theory,[14] where it allows one to express that the rationality of the players, the rules of the game and the set of players are commonly known.

General knowledge.

General knowledge of φ means that everybody in the group of agents G knows φ. Formally, this corresponds to the following formula:

E_G φ := ⋀_{j ∈ G} K_j φ

Common knowledge.

Common knowledge of φ means that everybody knows φ, but also that everybody knows that everybody knows φ, that everybody knows that everybody knows that everybody knows φ, and so on ad infinitum. Formally, this corresponds to the following formula:

C_G φ := E_G φ ∧ E_G E_G φ ∧ E_G E_G E_G φ ∧ …

As we do not allow infinite conjunctions, the notion of common knowledge has to be introduced as a primitive in our language.

Before defining the language with this new operator, we give an example introduced by Lewis that illustrates the difference between the notions of general knowledge and common knowledge. Lewis wanted to know what kind of knowledge is needed so that the statement φ: "every driver must drive on the right" be a convention among a group of agents. In other words, he wanted to know what kind of knowledge is needed so that everybody feels safe to drive on the right. Suppose there are only two agents i and j. Then everybody knowing φ (formally K_i φ ∧ K_j φ) is not enough. Indeed, it might still be possible that agent i considers it possible that agent j does not know φ (formally ¬K_i K_j φ). In that case agent i will not feel safe to drive on the right, because he might consider that agent j, not knowing φ, could drive on the left. To avoid this problem, we could then assume that everybody knows that everybody knows φ (formally E_G E_G φ). This is again not enough to ensure that everybody feels safe to drive on the right. Indeed, it might still be possible that agent i considers it possible that agent j considers it possible that agent i does not know φ (formally ¬K_i K_j K_i φ). In that case, and from i's point of view, j considers it possible that i, not knowing φ, will drive on the left. So from i's point of view, j might drive on the left as well (by the same argument as above). So i will not feel safe to drive on the right. Reasoning by induction, Lewis showed that for any n ∈ ℕ, the iteration E_G … E_G φ (n times) is not enough for the drivers to feel safe to drive on the right. In fact, what we need is an infinite conjunction. In other words, we need common knowledge of φ: C_G φ.

Distributed knowledge.

Distributed knowledge of φ means that if the agents pooled their knowledge together, they would know that φ holds. In other words, the knowledge of φ is distributed among the agents. The formula D_G φ reads as 'it is distributed knowledge among the set of agents G that φ holds'.

Semantics


Epistemic logic is a modal logic, so what we call an epistemic model M = (W, {R_j : j ∈ AGT}, V) is just a Kripke model as used in modal logic. The possible worlds W are the relevant worlds needed to define such a representation and the valuation V specifies which propositional facts (such as 'Ann has the red card') are true in these worlds. Finally, the accessibility relations R_j for each j ∈ AGT can model either the notion of knowledge or the notion of belief. We set w R_j v in case the world v is compatible with agent j's belief (respectively knowledge) in world w. Intuitively, a pointed epistemic model (M, w), where w ∈ W, represents from an external point of view how the actual world w is perceived by the agents AGT.

For every epistemic model M, w ∈ W and φ ∈ L_EL, we define the satisfaction relation M, w ⊨ φ by the following truth conditions:

M, w ⊨ p iff w ∈ V(p)
M, w ⊨ ¬φ iff it is not the case that M, w ⊨ φ
M, w ⊨ φ ∧ ψ iff M, w ⊨ φ and M, w ⊨ ψ
M, w ⊨ K_j φ iff for all v ∈ W such that w R_j v, M, v ⊨ φ
M, w ⊨ C_G φ iff for all v ∈ W such that w R_G⁺ v, M, v ⊨ φ
M, w ⊨ D_G φ iff for all v ∈ W such that w (⋂_{j ∈ G} R_j) v, M, v ⊨ φ

where R_G⁺ is the transitive closure of ⋃_{j ∈ G} R_j: we have that w R_G⁺ v if, and only if, there are w_0, …, w_n ∈ W and j_1, …, j_n ∈ G such that w_0 = w, w_n = v and, for all k ∈ {1, …, n}, w_{k−1} R_{j_k} w_k.

Despite the fact that the notion of common belief has to be introduced as a primitive in the language, we can notice that the definition of epistemic models does not have to be modified in order to give a truth value to the common knowledge and distributed knowledge operators.
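To make these truth conditions concrete, here is a small model checker for the epistemic language. This is an illustrative sketch, not a standard implementation: the encoding of models as dictionaries and of formulas as strings and nested tuples is an assumption of this example.

```python
def holds(M, w, phi):
    """Evaluate whether M, w |= phi.

    M is a dict with keys:
      "W": set of worlds,
      "R": dict mapping each agent to a set of (w, v) pairs,
      "V": dict mapping each proposition letter to the set of worlds where it holds.
    Formulas are strings (proposition letters) or nested tuples,
    e.g. ("K", "a", "p") for "agent a knows p".
    """
    if isinstance(phi, str):                      # proposition letter
        return w in M["V"][phi]
    op = phi[0]
    if op == "not":
        return not holds(M, w, phi[1])
    if op == "and":
        return holds(M, w, phi[1]) and holds(M, w, phi[2])
    if op == "K":                                 # true in all worlds j considers possible
        _, j, psi = phi
        return all(holds(M, v, psi) for (u, v) in M["R"][j] if u == w)
    if op == "E":                                 # general knowledge: everybody in G knows psi
        _, G, psi = phi
        return all(holds(M, w, ("K", j, psi)) for j in G)
    if op == "C":                                 # common knowledge: transitive closure of the union
        _, G, psi = phi
        union = set().union(*(M["R"][j] for j in G))
        reachable, frontier = set(), {v for (u, v) in union if u == w}
        while frontier:
            reachable |= frontier
            frontier = {v for (u, v) in union if u in frontier} - reachable
        return all(holds(M, v, psi) for v in reachable)
    if op == "D":                                 # distributed knowledge: intersection of relations
        _, G, psi = phi
        inter = set.intersection(*(M["R"][j] for j in G))
        return all(holds(M, v, psi) for (u, v) in inter if u == w)
    raise ValueError(f"unknown operator {op!r}")
```

Note how the clause for C_G computes exactly the transitive closure R_G⁺ of the union of the relations, by a breadth-first traversal.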

Card Example:

Players A, B and C (standing for Ann, Bob and Claire) play a card game with three cards: a red one, a green one and a blue one. Each of them has a single card but they do not know the cards of the other players. This example is depicted in the pointed epistemic model (M, w) represented below. There is an arrow indexed by agent j from a possible world u to a possible world v when u R_j v. Moreover, reflexive arrows are omitted, which means that for all j ∈ AGT and all w ∈ W, we have that w R_j w.

 
Card Example: pointed epistemic model  

r_A stands for: 'A has the red card'

b_C stands for: 'C has the blue card'

g_B stands for: 'B has the green card'

and so on.

Then, the following statements hold:

M, w ⊨ K_A r_A ∧ K_B g_B ∧ K_C b_C

'All the agents know the color of their card'.

M, w ⊨ K_A (g_B ∨ b_B) ∧ K_A (g_C ∨ b_C)

'A knows that B has either the blue or the green card and that C has either the blue or the green card'.

M, w ⊨ E_AGT (r_A ∨ g_A ∨ b_A) ∧ C_AGT (r_A ∨ g_A ∨ b_A)

'Everybody knows that A has the red, green or blue card and this is even common knowledge among all agents'.

The notion of knowledge might comply with some constraints (or axioms) such as K_j φ → K_j K_j φ: if agent j knows something, she knows that she knows it. These constraints might affect the nature of the accessibility relations R_j, which may then comply with some extra properties. So, we are now going to define some particular classes of epistemic models that all add some extra constraints on the accessibility relations R_j. These constraints are matched by particular axioms for the knowledge operator K_j.

Below is a list of properties of the accessibility relations. We also give, below each property, the axiom which defines the class of epistemic frames that fulfill this property.[15]

Properties of accessibility relations and corresponding axioms

serial: for all w ∈ W there is a v ∈ W such that w R_j v
D: K_j φ → ¬K_j ¬φ

transitive: if w R_j v and v R_j u then w R_j u
4: K_j φ → K_j K_j φ

Euclidean: if w R_j v and w R_j u then v R_j u
5: ¬K_j φ → K_j ¬K_j φ

reflexive: w R_j w for all w ∈ W
T: K_j φ → φ

symmetric: if w R_j v then v R_j w
B: φ → K_j ¬K_j ¬φ

confluent: if w R_j v and w R_j u then there is a z ∈ W such that v R_j z and u R_j z
.2: ¬K_j ¬K_j φ → K_j ¬K_j ¬φ

weakly connected: if w R_j v and w R_j u then v R_j u, v = u or u R_j v
.3: K_j (K_j φ → ψ) ∨ K_j (K_j ψ → φ)

semi-Euclidean:
.3.2: (¬K_j ¬φ ∧ ¬K_j ¬K_j ψ) → K_j (¬K_j ¬φ ∨ ψ)

R1:
.4: (φ ∧ ¬K_j ¬K_j φ) → K_j φ
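For finite models, the relational properties in this table can be checked directly. The following helper functions are an illustrative sketch; the encoding of a relation as a set of (w, v) pairs is an assumption of this example:

```python
# Each property takes a set of worlds W and a relation R given as a set of (w, v) pairs.
def serial(W, R):     return all(any((w, v) in R for v in W) for w in W)
def reflexive(W, R):  return all((w, w) in R for w in W)
def transitive(W, R): return all((w, u) in R for (w, v) in R for (x, u) in R if x == v)
def symmetric(W, R):  return all((v, w) in R for (w, v) in R)
def euclidean(W, R):  return all((v, u) in R for (w, v) in R for (x, u) in R if x == w)
```

An equivalence relation (reflexive, transitive and symmetric, hence also serial and Euclidean) satisfies all the properties matched by T, 4, B, 5 and D at once, which is why S5 models are often presented with equivalence relations.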

Knowledge versus Belief


We use the same notation K_j for both knowledge and belief. Hence, depending on the context, K_j φ will either read 'the agent knows that φ holds' or 'the agent believes that φ holds'. A crucial difference is that, unlike knowledge, beliefs can be wrong: the Truth axiom K_j φ → φ holds only for knowledge, but not necessarily for belief. We are going to examine other axioms, some of which pertain more to the notion of knowledge whereas others pertain more to the notion of belief.

Axiomatization


The Hilbert proof system for the basic modal logic K is defined by the following axioms and inference rules: for all j ∈ AGT,

Proof system K for L_EL
Prop: all axioms and inference rules of propositional logic
K: K_j (φ → ψ) → (K_j φ → K_j ψ)
Nec: if φ then K_j φ

The axioms of an epistemic logic display the way the agents reason. For example, the axiom K together with the rule of inference Nec entails that if I know φ (K_j φ) and I know that φ implies ψ (K_j (φ → ψ)), then I know ψ (K_j ψ). Stronger constraints can be added. The following proof systems for L_EL are often used in the literature.


KD45 = K + D + 4 + 5
S4 = K + T + 4
S4.2 = S4 + .2
S4.3 = S4 + .3
S4.3.2 = S4 + .3.2
S4.4 = S4 + .4
S5 = S4 + 5

We denote by 𝒮 the set of proof systems {KD45, S4, S4.2, S4.3, S4.3.2, S4.4, S5}.

Moreover, for all L ∈ 𝒮, we define the proof system L_{C,D} by adding the following axiom schemes and rules of inference to those of L. For all j ∈ AGT and G ⊆ AGT,

Dis: D_{{j}} φ ↔ K_j φ
E: E_G φ ↔ ⋀_{j ∈ G} K_j φ
Mix: C_G φ → E_G (φ ∧ C_G φ)
Ind: C_G (φ → E_G φ) → (φ → C_G φ)

The relative strength of the proof systems for knowledge is as follows:

S4 ⊂ S4.2 ⊂ S4.3 ⊂ S4.3.2 ⊂ S4.4 ⊂ S5

So, all the theorems of S4 are also theorems of S4.2, S4.3, S4.3.2, S4.4 and S5. Many philosophers claim that, in the most general cases, the logic of knowledge is S4.2 or S4.3.[16] Typically, in computer science, the logic of belief (doxastic logic) is taken to be KD45 and the logic of knowledge (epistemic logic) is taken to be S5, even if S5 is only suitable for situations where the agents do not have mistaken beliefs.

We discuss the most important axioms introduced above. Axioms T and 4 state that if the agent knows a proposition, then this proposition is true (axiom T, the axiom of truth), and that if the agent knows a proposition, then she knows that she knows it (axiom 4, also known as the "KK-principle" or "KK-thesis"). Axiom T is often considered to be the hallmark of knowledge and has not been subjected to any serious attack. In epistemology, axiom 4 tends to be accepted by internalists, but not by externalists.[17] Axiom 4 is nevertheless widely accepted by computer scientists (but also by many philosophers, including Plato, Aristotle, Saint Augustine, Spinoza and Schopenhauer, as Hintikka recalls).

A more controversial axiom for the logic of knowledge is axiom 5: ¬K_j φ → K_j ¬K_j φ. This axiom states that if the agent does not know a proposition, then she knows that she does not know it. The addition of 5 to S4 yields the logic S5. Most philosophers (including Hintikka) have attacked this axiom, since numerous examples from everyday life seem to invalidate it.[18] In general, axiom 5 is invalidated when the agent has mistaken beliefs, which can be due, for example, to misperceptions, lies or other forms of deception.

Axiom D states that the agent's beliefs are consistent. In combination with axiom K (where the knowledge operator is replaced by a belief operator), axiom D is in fact equivalent to a simpler axiom D′ which conveys, maybe more explicitly, the fact that the agent's beliefs cannot be inconsistent: ¬B_j ⊥.

In all the theories of rational agency developed in artificial intelligence, the logic of belief is KD45. Note that all these agent theories follow the perfect external approach. This is at odds with their intention to implement their theories in machines. In that respect, an internal approach seems to be more appropriate since, in this context, the agent needs to reason from its own internal point of view. For the internal approach, the logic of belief is S5.[19][20]

For each proof system L among those above, the class of L-models (or L_{C,D}-models) is the class of epistemic models whose accessibility relations satisfy the properties, listed above, defined by the axioms of L (or L_{C,D}). Then, L is sound and strongly complete for the epistemic language without common and distributed knowledge w.r.t. the class of L-models, and L_{C,D} is sound and strongly complete for L_EL w.r.t. the class of L_{C,D}-models.

Decidability


All the logics introduced are decidable. We list below the complexity of the satisfiability problem for each of them. If we restrict the language to formulas of bounded modal depth, the satisfiability problem becomes NP-complete for all the modal logics considered. If we further restrict the language to finitely many propositional letters, the complexity goes down to linear time in all cases.[21][22]

Logic | one agent | multiple agents | with common knowledge
K, S4 | PSPACE | PSPACE | EXPTIME
KD45 | NP | PSPACE | EXPTIME
S5 | NP | PSPACE | EXPTIME

The computational complexity of the model checking problem is in P in all cases.

Adding Dynamics


Dynamic Epistemic Logic (DEL) is a formalism that models epistemic situations involving several agents, and the changes that can occur to these situations as a result of incoming information or, more generally, incoming action. The methodology of DEL splits the task of representing the agents' beliefs and knowledge into three parts:

  1. One represents their beliefs about an initial situation thanks to an epistemic model;
  2. One represents their beliefs about an event taking place in this situation thanks to an event model;
  3. One represents the way the agents update their beliefs about the situation after (or during) the occurrence of the event thanks to a product update.

Typically, an informative event can be a public announcement to all the agents of a formula  : this public announcement and correlative update constitute the dynamic part. Note that epistemic events can be much more complex than simple public announcement, including hiding information for some of the agents, cheating, lying, etc. This complexity is dealt with when we introduce the notion of event model. We will first focus on public announcements to get an intuition of the main underlying ideas of DEL.

Public Announcement Logic


We start by giving a concrete example where DEL can be used, to better understand what is going on. This example is called the muddy children puzzle. Then, we will present a sketch of a formalization of the phenomenon, called Public Announcement Logic (PAL).

Muddy Children Example:

We have two children, A and B, both dirty. A can see B but not himself, and B can see A but not herself. Let p_A be the proposition stating that A is dirty, and p_B be the proposition stating that B is dirty.

  1. We represent the initial situation by the pointed epistemic model represented below, where the relations are equivalence relations. States intuitively represent possible worlds; a proposition (for example p_A) satisfiable at one of these states intuitively means that in the possible world corresponding to this state, the intuitive interpretation of p_A (A is dirty) is true. The links between states labelled by agents (A or B) intuitively express a notion of indistinguishability for the agent at stake between two possible worlds. For example, a link labelled by A between two states intuitively means that A cannot distinguish the one possible world from the other. Indeed, A cannot see himself, so he cannot distinguish between a world where he is dirty and one where he is not. However, he can distinguish between worlds where B is dirty or not, because he can see B. With this intuitive interpretation we are led to assume that our relations between states are equivalence relations.
    Initial Situation
  2. Now, suppose that their father comes and announces that at least one of them is dirty (formally, p_A ∨ p_B). Then we update the model, and this yields the pointed epistemic model represented below. What we actually do is suppress the worlds where the content of the announcement is not fulfilled: in our case, this is the world where ¬p_A and ¬p_B hold. This suppression is what we call the update. As a result of the announcement, both A and B know that at least one of them is dirty. We can read this from the model.
    Updated epistemic model after the first announcement
  3. Now suppose there is a second (and final) announcement that says that neither knows whether they are dirty (an announcement can express facts about the situation as well as epistemic facts about the knowledge held by the agents). We then update the model similarly, by suppressing the worlds which do not satisfy the content of the announcement, or equivalently by keeping the worlds which do satisfy it. This update process yields the pointed epistemic model represented below. Interpreting this model, we get that A and B both know that they are dirty, which seems to contradict the content of the announcement. We see here at work one of the main features of the update process: a proposition is not necessarily true after being announced. Announcements whose content remains true after they are made are technically called "self-persistent", and failures of self-persistence arise only for epistemic formulas (unlike propositional formulas). One must not confuse the announcement with the update induced by this announcement, which might cancel some of the information encoded in the announcement (as in our example).[23]
Updated epistemic model after the second announcement
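The two updates above can be replayed directly by deleting worlds. The following sketch hard-codes this particular example; the encoding of worlds as pairs of booleans is an assumption of this illustration, not part of the puzzle's standard presentation:

```python
from itertools import product

# Worlds are pairs (A_dirty, B_dirty); initially all four combinations are possible.
worlds = set(product([True, False], repeat=2))

def knows_own_status(agent, w, worlds):
    """Does `agent` (0 for A, 1 for B) know in world w whether they are dirty?

    Each child sees the other but not themselves, so the worlds an agent
    cannot distinguish from w are those agreeing on the *other* child's status.
    """
    indistinguishable = {v for v in worlds if v[1 - agent] == w[1 - agent]}
    return len({v[agent] for v in indistinguishable}) == 1

# First announcement: "at least one of you is dirty" deletes the world (False, False).
worlds = {w for w in worlds if w[0] or w[1]}

# Second announcement: "neither of you knows whether they are dirty"
# deletes every world in which some child does know their own status.
worlds = {w for w in worlds
          if not knows_own_status(0, w, worlds)
          and not knows_own_status(1, w, worlds)}

print(worlds)  # only the world where both children are dirty survives
```

After the first update the worlds (True, False) and (False, True) are ones where some child already knows their own status, so the second announcement eliminates them, leaving only (True, True): both children can now deduce that they are dirty.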

Now we roughly present the main ingredients of a logic called Public Announcement Logic (PAL), which formalizes these ideas and combines epistemic and dynamic logic.[24] We have seen that a public announcement of a proposition ψ changes the current epistemic model as in the figure below.

 
Eliminate all worlds which currently do not satisfy ψ

We define the language L_PAL inductively as follows:

φ ::= p | ¬φ | (φ ∧ φ) | K_j φ | [ψ!] φ

where p ∈ ATM, j ∈ AGT and ψ ∈ L_PAL.

The epistemic language is interpreted as in epistemic logic. The truth condition for the new dynamic action modality [ψ!] is defined as follows:

M, w ⊨ [ψ!] φ iff M, w ⊨ ψ implies M^ψ, w ⊨ φ

where M^ψ = (W^ψ, {R_j^ψ : j ∈ AGT}, V^ψ) with

W^ψ = {v ∈ W : M, v ⊨ ψ},

R_j^ψ = R_j ∩ (W^ψ × W^ψ) for all j ∈ AGT, and

V^ψ(p) = V(p) ∩ W^ψ.

The formula [ψ!] φ intuitively means that after a truthful announcement of ψ, φ holds.
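The relativized model M^ψ can be computed by a small function. This is an illustrative sketch: the dictionary encoding of models and the `sat` parameter (any satisfaction test for the epistemic language) are assumptions of this example.

```python
def announce(M, sat, psi):
    """Public announcement update: restrict M to the worlds where psi holds.

    M is a dict {"W": worlds, "R": {agent: set of (w, v) pairs},
                 "V": {letter: set of worlds}},
    and sat(M, w, psi) tests whether M, w |= psi.
    """
    W = {w for w in M["W"] if sat(M, w, psi)}
    return {
        "W": W,
        # Restrict each accessibility relation to the surviving worlds.
        "R": {j: {(w, v) for (w, v) in Rj if w in W and v in W}
              for j, Rj in M["R"].items()},
        # Restrict the valuation to the surviving worlds.
        "V": {p: Vp & W for p, Vp in M["V"].items()},
    }
```

For a purely propositional announcement p, `sat` can be as simple as `lambda M, w, p: w in M["V"][p]`; for arbitrary epistemic formulas, any model checker for the epistemic language can be passed in.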

The proof system PAL defined below is sound and strongly complete for L_PAL w.r.t. the class of all epistemic models.

ML: the axioms and the rules of inference of modal logic
Red1: [ψ!] p ↔ (ψ → p)
Red2: [ψ!] ¬φ ↔ (ψ → ¬[ψ!] φ)
Red3: [ψ!] (φ ∧ χ) ↔ ([ψ!] φ ∧ [ψ!] χ)
Red4: [ψ!] K_j φ ↔ (ψ → K_j [ψ!] φ)

Here is a theorem of PAL: [p!] K_j p, where p is a propositional letter. It states that after a public announcement of p, the agent knows that p holds.

PAL is decidable, its model checking problem is solvable in polynomial time and its satisfiability problem is PSPACE-complete.[25]

The General Case


In this section, we focus on items 2 and 3 above, namely on how to represent events and on how to update an epistemic model with such a representation of events by means of a product update.

Event Model


Epistemic logic is used to model how the agents perceive the actual world in terms of beliefs about the world and about the other agents' beliefs. The insight of the DEL approach is that one can describe how an event is perceived by the agents in a very similar way. Indeed, the agents' perception of an event can also be described in terms of beliefs and knowledge. For example, the private announcement by A to B that her card is red can also be described in terms of beliefs and knowledge: while A tells B that her card is red (event a), C believes that nothing happens (event b). This leads to the definition of the notion of event model, which is very similar to that of an epistemic model.

A pointed event model (E, e) represents how the actual event represented by e is perceived by the agents. Intuitively, e R_j f means that while the possible event represented by e is occurring, agent j considers it possible that the possible event represented by f is actually occurring.

An event model is a tuple E = (E, {R_j : j ∈ AGT}, pre) where:

  • E is a non-empty set of possible events,
  • R_j ⊆ E × E is an accessibility relation on E, for each j ∈ AGT,
  • pre : E → L_EL is a function assigning to each possible event a formula of L_EL. The function pre is called the precondition function.

We write e R_j f for (e, f) ∈ R_j, and (E, e) is called a pointed event model (e often represents the actual event). R_j(e) denotes the set {f ∈ E : e R_j f}.

Card Example:

Let us resume the card example and assume that players A and B show their cards to each other. As it turns out, C noticed that A showed her card to B but did not notice that B did so to A. Players A and B know this. This event is represented below in the event model E.

The boxed possible event e corresponds to the actual event 'players A and B show their red and green cards respectively to each other' (with precondition r_A ∧ g_B), f stands for the event 'player A shows her red card' (with precondition r_A) and g stands for the event 'player A shows her green card' (with precondition g_A). Players A and B show their cards to each other, players A and B 'know' this and consider it possible, while player C considers it possible that player A shows her red card and also considers it possible that player A shows her green card, since he does not know her card. In fact, these are the only events that player C considers possible.

 
Pointed event model  : Players A and B show their cards to each other in front of player C

Another example of an event model is given below. This second example corresponds to the event whereby player A shows her red card publicly to everybody. Player A shows her red card and players A, B and C 'know' it, players A, B and C 'know' that each of them 'knows' it, etc. In other words, there is common knowledge among players A, B and C that player A shows her red card.

 
Public event model  

Product Update


The DEL product update is defined below.[5] This update yields a new epistemic model M ⊗ E representing how the new situation, which was previously represented by M, is perceived by the agents after the occurrence of the event represented by E.

Let M = (W, {R_j : j ∈ AGT}, V) be an epistemic model and let E = (E, {R_j : j ∈ AGT}, pre) be an event model. The product update of M and E is the epistemic model M ⊗ E = (W⊗, {R_j⊗ : j ∈ AGT}, V⊗) defined as follows: for all j ∈ AGT and all p ∈ ATM,

  • W⊗ = {(w, e) ∈ W × E : M, w ⊨ pre(e)},
  • (w, e) R_j⊗ (v, f) iff (w, e), (v, f) ∈ W⊗, w R_j v and e R_j f,
  • V⊗(p) = {(w, e) ∈ W⊗ : w ∈ V(p)}.

If w ∈ W and e ∈ E are such that M, w ⊨ pre(e), then (M ⊗ E, (w, e)) denotes the resulting pointed epistemic model.
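The definition translates directly into code. As before, the dictionary encoding of epistemic models and event models, and the `sat` parameter for testing preconditions, are assumptions of this illustrative sketch:

```python
def product_update(M, Ev, sat):
    """Compute the product update of epistemic model M and event model Ev.

    M is {"W": worlds, "R": {agent: set of (w, v) pairs}, "V": {letter: set of worlds}};
    Ev is {"E": events, "R": {agent: set of (e, f) pairs}, "pre": {event: formula}};
    sat(M, w, phi) tests whether M, w |= phi.
    """
    # Worlds of the new model: pairs (w, e) whose precondition holds at w.
    W = {(w, e) for w in M["W"] for e in Ev["E"] if sat(M, w, Ev["pre"][e])}
    # Agent j relates (w, e) to (v, f) iff j relates w to v and e to f.
    R = {j: {((w, e), (v, f)) for (w, e) in W for (v, f) in W
             if (w, v) in M["R"][j] and (e, f) in Ev["R"][j]}
         for j in M["R"]}
    # Atomic facts keep their truth value from the static model.
    V = {p: {(w, e) for (w, e) in W if w in Vp} for p, Vp in M["V"].items()}
    return {"W": W, "R": R, "V": V}
```

A public announcement of ψ is the special case of an event model with a single event whose precondition is ψ and which every agent relates to itself; the product update then coincides with the world-elimination update of PAL.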

Card Example:

As a result of the first event described above (players A and B show their cards to each other in front of player C), the agents update their beliefs. We get the situation represented in the pointed epistemic model below. In this model we have, for example, that player A 'knows' that player B has the green card but player C believes that it is not the case.

 
Updated pointed epistemic situation  

The result of the second event described above (player A shows her red card publicly) is described below. In this pointed epistemic model, the following statement holds: there is common knowledge among players A and B that they know the true state of the world (namely A has the red card, B has the green card and C has the blue card), but player C does not know it.

 
Updated pointed epistemic model  


Notes

  1. ^ Plaza, Jan (2007-07-26). "Logics of public communications". Synthese. 158 (2): 165–179. doi:10.1007/s11229-007-9168-7. ISSN 0039-7857.
  2. ^ Gerbrandy, Jelle; Groeneveld, Willem (1997-04-01). "Reasoning about Information Change". Journal of Logic, Language and Information. 6 (2): 147–169. doi:10.1023/A:1008222603071. ISSN 0925-8531.
  3. ^ Veltman, Frank (1996-06-01). "Defaults in update semantics". Journal of Philosophical Logic. 25 (3): 221–261. doi:10.1007/BF00248150. ISSN 0022-3611.
  4. ^ Ditmarsch, Hans P. van (2002-06-01). "Descriptions of Game Actions". Journal of Logic, Language and Information. 11 (3): 349–365. doi:10.1023/A:1015590229647. ISSN 0925-8531.
  5. ^ a b Alexandru Baltag, Lawrence S. Moss and Slawomir Solecki (1998). "The Logic of Public Announcements and Common Knowledge and Private Suspicions". Theoretical Aspects of Rationality and Knowledge (TARK).
  6. ^ Baltag, Alexandru; Moss, Lawrence S. (2004-03-01). "Logics for Epistemic Programs". Synthese. 139 (2): 165–224. doi:10.1023/B:SYNT.0000024912.56773.5e. ISSN 0039-7857.
  7. ^ A distinction is sometimes made between events and actions, an action being a specific type of event performed by an agent.
  8. ^ Boh, Ivan (1993). Epistemic Logic in the later Middle Ages. Routledge. ISBN 0415057264.
  9. ^ Hintikka, Jaakko (1962). Knowledge and Belief: An Introduction to the Logic of the Two Notions. Ithaca and London: Cornell University Press. ISBN 1904987087.
  10. ^ Lenzen, Wolfgang (1978). "Recent Work in Epistemic Logic". Acta Philosophica Fennica.
  11. ^ Battigalli, Pierpaolo; Bonanno, Giacomo (1999-06-01). "Recent results on belief, knowledge and the epistemic foundations of game theory". Research in Economics. 53 (2): 149–225. doi:10.1006/reec.1999.0187.
  12. ^ Ronald Fagin, Joseph Halpern, Yoram Moses and Moshe Vardi (1995). Reasoning about Knowledge. MIT Press. ISBN 9780262562003.
  13. ^ Lewis, David (1969). Convention, a Philosophical Study. Harvard University Press. ISBN 0674170253.
  14. ^ Aumann, Robert J. (1976-11-01). "Agreeing to Disagree". The Annals of Statistics. 4 (6): 1236–1239.
  15. ^ Patrick Blackburn and Maarten de Rijke and Yde Venema (2001). Modal Logic. Cambridge University Press. ISBN 978-0521527149.
  16. ^ Aucher, Guillaume (2015-03-18). "Intricate Axioms as Interaction Axioms". Studia Logica. 103 (5): 1035–1062. doi:10.1007/s11225-015-9609-0. ISSN 0039-3215.
  17. ^ "KK Principle (Knowing that One Knows)". Internet Encyclopedia of Philosophy. www.iep.utm.edu. Retrieved 2015-12-11.
  18. ^ For example, assume that a university professor believes (is certain) that one of her colleague's seminars is on Thursday. She is actually wrong, because it is on Tuesday. Therefore, she does not know that her colleague's seminar is on Tuesday. If we assume that axiom 5 is valid, then we should conclude that she knows that she does not know that her colleague's seminar is on Tuesday (and therefore she also believes that she does not know it). This is obviously counterintuitive.
  19. ^ Aucher, Guillaume (2010-02-13). "An Internal Version of Epistemic Logic". Studia Logica. 94 (1): 1–22. doi:10.1007/s11225-010-9227-9. ISSN 0039-3215.
  20. ^ In both philosophy and computer science, there is formalization of the internal point of view. Perhaps one of the dominant formalisms for this is Moore’s auto-epistemic logic. In philosophy, there are models of full belief like the one offered by Levi, which is also related to ideas in auto-epistemic logic.
  21. ^ Halpern, Joseph Y.; Moses, Yoram (1992). "A guide to completeness and complexity for modal logics of knowledge and belief". Artificial Intelligence. 54 (3): 319–379. doi:10.1016/0004-3702(92)90049-4.
  22. ^ Halpern, Joseph Y. (1995-06-01). "The effect of bounding the number of primitive propositions and the depth of nesting on the complexity of modal logic". Artificial Intelligence. 75 (2): 361–372. doi:10.1016/0004-3702(95)00018-A.
  23. ^ Ditmarsch, Hans Van; Kooi, Barteld (2006-07-01). "The Secret of My Success". Synthese. 151 (2): 201–232. doi:10.1007/s11229-005-3384-9. ISSN 0039-7857.
  24. ^ David Harel, Dexter Kozen and Jerzy Tiuryn (2000). Dynamic Logic. MIT Press. ISBN 978-0262082891.
  25. ^ Lutz, Carsten (2006-01-01). "Complexity and Succinctness of Public Announcement Logic". Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems. AAMAS '06. New York, NY, USA: ACM: 137–143. doi:10.1145/1160633.1160657. ISBN 1-59593-303-4.

References

  • van Benthem, Johan (2011). Logical Dynamics of Information and Interaction. Cambridge University Press. ISBN 978-0521873970.
  • van Ditmarsch, Hans; van der Hoek, Wiebe; Kooi, Barteld (2007). Dynamic Epistemic Logic. Synthese Library, volume 337. Springer. ISBN 978-1-4020-5839-4.
  • van Ditmarsch, Hans; Halpern, Joseph; van der Hoek, Wiebe; Kooi, Barteld (2015). Handbook of Epistemic Logic. London: College Publications. ISBN 978-1848901582.
  • Fagin, Ronald; Halpern, Joseph; Moses, Yoram; Vardi, Moshe (2003). Reasoning about Knowledge. Cambridge: MIT Press. ISBN 978-0-262-56200-3. A classic reference.
  • Hintikka, Jaakko (1962). Knowledge and Belief: An Introduction to the Logic of the Two Notions. Ithaca: Cornell University Press. ISBN 978-1-904987-08-6.