Defeasible reasoning [1] is a field of interest in both philosophy and computer science, particularly in the subdiscipline of Artificial Intelligence (AI). While the philosophical history of the field goes back to Aristotle, AI has only shown interest in it over the last 40 years. What is called non-monotonic reasoning in AI is roughly the same as defeasible reasoning in philosophy [2].

Reasoning is the process of deriving conclusions from existing knowledge using a problem-solving strategy. Non-monotonic reasoning deals with incomplete or uncertain knowledge, where conclusions can be invalidated by the addition of new information (facts).

Here is an example:
Something that looks red to me may justify me in believing that it is red, but if I subsequently learn that the object is illuminated by red light and I know that that can make things look red when they are not, then I cease to be justified in believing that the object is red.

A system to deal with non-monotonic knowledge is the Truth Maintenance System (TMS) [3], a problem-solver subsystem for reasoning programs that is concerned with revising sets of beliefs and maintaining the truth in the system when new information contradicts existing information. More information about the TMS and how it is implemented in the Drools engine is provided below. Other formalisms for defeasible reasoning include logic-based approaches such as defeasible logic and argumentation.

Defeasible logic [4], created by Donald Nute, is a simple and efficient rule-based non-monotonic formalism. The main intuition of the logic is to be able to derive “plausible” conclusions from partial and sometimes conflicting information. A conclusion can be withdrawn when new information is added; hence conclusions are considered tentative. Defeasible logic is useful when we want to express that some statements are “usually” or “most of the time” true, but not strictly always.

Example (in DeLP [5]):
flies(X) -< bird(X)   // a bird typically flies (a defeasible rule)
bird(X) <- penguin(X)  // a penguin is a bird (a strict rule)
~flies(X) <- penguin(X)  // penguins don’t fly (a strict rule)
penguin(tweety) //tweety is a penguin (a fact)

Query: flies(tweety)
Answer: NO  //because tweety is a penguin and penguins don’t fly

In the above example, the first statement (that a bird flies) is not always true, and for that reason it is defined as a defeasible rule. So, in the case of penguin(tweety), the query flies(tweety) returns NO because the strict rule ~flies(X) <- penguin(X) is stronger than the defeasible rule flies(X) -< bird(X).
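The mechanism can also be sketched outside DeLP. Below is a minimal, hypothetical Java sketch (illustrative only, not DeLP itself — the class and method names are assumptions): strict rules are forward-chained to a fixpoint first, and a defeasible conclusion is only added when its negation is not strictly derivable.

```java
import java.util.*;

// Minimal propositional sketch of defeasible inference (illustrative, not DeLP):
// strict conclusions always hold; a defeasible conclusion is blocked when a
// strict rule derives its negation.
class DefeasibleEngine {
    static class Rule {
        final String head; final List<String> body; final boolean strict;
        Rule(String head, List<String> body, boolean strict) {
            this.head = head; this.body = body; this.strict = strict;
        }
    }

    private final List<Rule> rules = new ArrayList<>();
    private final Set<String> facts = new HashSet<>();

    void fact(String f) { facts.add(f); }
    void strictRule(String head, String... body) { rules.add(new Rule(head, Arrays.asList(body), true)); }
    void defeasibleRule(String head, String... body) { rules.add(new Rule(head, Arrays.asList(body), false)); }

    private static String neg(String lit) { return lit.startsWith("~") ? lit.substring(1) : "~" + lit; }

    Set<String> conclusions() {
        // 1. Forward-chain the strict rules to a fixpoint.
        Set<String> strictClosure = new HashSet<>(facts);
        for (boolean changed = true; changed; ) {
            changed = false;
            for (Rule r : rules)
                if (r.strict && strictClosure.containsAll(r.body) && strictClosure.add(r.head))
                    changed = true;
        }
        // 2. Add defeasible conclusions only when their negation is not strictly derivable.
        Set<String> all = new HashSet<>(strictClosure);
        for (boolean changed = true; changed; ) {
            changed = false;
            for (Rule r : rules)
                if (!r.strict && all.containsAll(r.body)
                        && !strictClosure.contains(neg(r.head)) && all.add(r.head))
                    changed = true;
        }
        return all;
    }
}
```

Encoding the tweety example as ground rules (fact penguin(tweety), strict rules for bird(tweety) and ~flies(tweety), defeasible rule for flies(tweety)), conclusions() contains ~flies(tweety) and not flies(tweety), matching the DeLP answer above.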

Argumentation: a principled form of reasoning with conflicting information. It consists of defining arguments, and attacks or preferences between them, together with a process of evaluating these arguments to identify plausible conclusions. The previous example in DeLP can be implemented as an argumentation theory, with preferences between arguments denoting priorities, instead of using strict and defeasible rules. More information about this approach is provided in the Conclusions and Further Reading paragraph.

rule(r1(X), fly(X), [bird(X)]).   // birds fly
rule(r2(X), neg(fly(X)), [penguin(X)]).   // penguins don’t fly

rule(f1, bird(tweety), []).  // tweety is a bird
rule(f2, penguin(tweety), []).   // tweety is a penguin

rule(pr1(X), prefer(r2(X), r1(X)), []).   // r2 is stronger than r1

Query: prove([neg(fly(tweety))],Delta).
Answer: Delta = [f2, r2(tweety)]   // Delta = the admissible argument for the query

Query: prove([fly(tweety)],Delta).
Answer:  // has no solution
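To make the evaluation step concrete, here is a small, hypothetical Java sketch (the class and method names are illustrative assumptions, not Gorgias code): each argument supports one conclusion, arguments with opposite conclusions attack each other, and a conclusion is accepted only when some supporting argument is preferred over every counter-argument.

```java
import java.util.*;

// Hypothetical sketch of preference-based argument evaluation (not Gorgias itself):
// arguments with opposite conclusions attack each other, and a preference
// relation decides which attack succeeds.
class ArgumentationTheory {
    static class Argument {
        final String name; final String conclusion;
        Argument(String name, String conclusion) { this.name = name; this.conclusion = conclusion; }
    }

    private final List<Argument> args = new ArrayList<>();
    private final Set<String> prefers = new HashSet<>();   // "a>b": a is preferred over b

    void argument(String name, String conclusion) { args.add(new Argument(name, conclusion)); }
    void prefer(String stronger, String weaker) { prefers.add(stronger + ">" + weaker); }

    private static String neg(String c) { return c.startsWith("~") ? c.substring(1) : "~" + c; }

    // A conclusion is accepted when some argument supports it and that argument
    // is preferred over every counter-argument for the opposite conclusion.
    boolean accepts(String conclusion) {
        for (Argument a : args) {
            if (!a.conclusion.equals(conclusion)) continue;
            boolean defeated = false;
            for (Argument b : args)
                if (b.conclusion.equals(neg(conclusion)) && !prefers.contains(a.name + ">" + b.name))
                    defeated = true;
            if (!defeated) return true;
        }
        return false;
    }
}
```

Encoding the example above — argument r1 concludes fly(tweety), argument r2 concludes ~fly(tweety), and r2 is preferred over r1 — ~fly(tweety) is accepted while fly(tweety) is not, mirroring the two queries.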

Drools and Truth Maintenance Systems [3],[7],[8]


The basic function of the Drools engine is to match data to business rules and determine whether and how to execute rules. To ensure that relevant data is applied to the appropriate rules, the Drools engine makes inferences based on existing knowledge and performs the actions based on the inferred information. The Drools engine uses truth maintenance to justify the assertion of data and enforce truthfulness when applying inferred information to rules, to identify inconsistencies and to handle contradictions.

In the Drools engine, data is inserted as facts, using either stated (defined with insert()) or logical insertions (defined with insertLogical()). After stated insertions, facts are generally retracted explicitly. After logical insertions, the facts are automatically retracted when no condition supports the logical insertion. A fact that is logically inserted is considered to be justified by the Drools engine.


rule "Allow sweets on Saturday"
when
   $d : DietAssistant( day == "Saturday" )
then
   insertLogical( new AllowSweets( $d ) );
end

The logically inserted fact (AllowSweets($d)) depends on the truth of the "when" clause. When the condition (DietAssistant(day == "Saturday")) becomes false, the fact is automatically retracted.

In the Drools engine there is a “simple” implementation of a TMS available, as well as an experimental implementation of a justification-based TMS (JTMS). The JTMS implementation allows a logical insertion to have a positive or a negative label, which allows for contradiction handling. A logical insertion will only exist in the main working memory as long as there is no conflict in the labeling – i.e. there must be one or more positive labels and no negative labels.


rule "Allow sweets on Saturday"
when
   $d : DietAssistant( day == "Saturday" )
then
   insertLogical( new AllowSweets( $d ) );
end

rule "Do not allow sweets"
when
   $d : DietAssistant()
then
   insertLogical( new AllowSweets( $d ), "neg" );
end

The above rules are executed in the order given: first the AllowSweets object is inserted into working memory, and then, as a result of the “Do not allow sweets” rule execution, the object is retracted because of the labeling conflict.
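The labeling rule can be sketched in plain Java (a hypothetical illustration, not the Drools implementation): a logically inserted fact stays in working memory only while it has at least one positive justification and no negative one.

```java
import java.util.*;

// Hypothetical sketch of JTMS-style labeling (not the Drools implementation):
// a fact holds only while its labeling is conflict-free, i.e. one or more
// positive labels and no negative labels.
class LabelledTms {
    private final Map<String, Integer> positive = new HashMap<>();
    private final Map<String, Integer> negative = new HashMap<>();

    void insertLogical(String fact) { positive.merge(fact, 1, Integer::sum); }

    void insertLogical(String fact, String label) {
        if ("neg".equals(label)) negative.merge(fact, 1, Integer::sum);
        else insertLogical(fact);
    }

    // In working memory iff >= 1 positive label and no negative label.
    boolean holds(String fact) {
        return positive.getOrDefault(fact, 0) > 0 && negative.getOrDefault(fact, 0) == 0;
    }
}
```

Here insertLogical("AllowSweets") makes holds("AllowSweets") true, and a subsequent insertLogical("AllowSweets", "neg") creates a labeling conflict, after which holds("AllowSweets") is false – the behaviour of the two rules above.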

Limitations of the current JTMS: contradiction handling is done at the level of the logical insertion, meaning that the entire object is retracted from working memory when a conflict occurs.

A new approach to TMS: contradiction handling at the level of a specific property change, so that the object remains in memory and indicates which property changes are in conflict. One idea towards this approach is to use a wrapper class that is responsible for updating the properties of the object to be inserted into working memory, and provides methods for restoring the object’s state in the case of a conflict. So instead of insertLogical( new AllowSweets($d) ) we can use a command-wrapper class and do insertLogical( new Command($d, {property-changes}) ), with property changes given in the form of (property, value) pairs, e.g. {(allowDenySweets, True), (freeDay, Wednesday)}.
Then, the contradiction handling process will consider the changes inserted into working memory at the property level. If positive and negative changes occur for a single property, this is considered a conflict and the state of that particular property will be restored accordingly.


public class Person {
   private String name;
   private String onDiet = "no";
   private String allowDenySweets = "tbd";
   private String freeDay = null;
   // getters and setters omitted
}

rule "Rule1: allow sweets, set freeDay Monday"
when
   $p : Person()
then
   insertLogical( new Command($p, {(allowDenySweets, "allow"), (freeDay, "Monday")}) );
end

rule "Rule2: set freeDay Wednesday"
when
   $p : Person()
then
   insertLogical( new Command($p, {(freeDay, "Wednesday")}) );
end

rule "Rule3: do not allow sweets"
when
   $p : Person()
then
   insertLogical( new Command($p, {(allowDenySweets, "allow")}), "neg" );
end


Rules 1, 2 and 3 are activated in the order given. The activation of Rule1 changes two property values of the object $p: allowDenySweets is set to “allow” and freeDay is set to “Monday”. The activation of Rule2 changes freeDay to “Wednesday”. The activation of Rule3 conflicts with Rule1 over the changes to allowDenySweets. As a result, that property is restored to its value before the activation of Rule1, that is “tbd”. The value of freeDay is not affected.
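The wrapper idea can be sketched as follows (a hypothetical illustration of the proposed approach; the class and method names are assumptions, and the object's properties are modelled as a simple map): each command records the previous value of every property it changes, so a conflict can be rolled back per property rather than by retracting the whole object.

```java
import java.util.*;

// Hypothetical sketch of property-level contradiction handling: a command
// remembers the old value of each property it changes, so a conflicting
// change can be restored per property instead of retracting the whole object.
class PropertyCommand {
    private final Map<String, Object> target;                 // the wrapped object's properties
    private final Map<String, Object> previous = new HashMap<>();

    PropertyCommand(Map<String, Object> target) { this.target = target; }

    // Apply a change, remembering the old value for a possible restore.
    void apply(String property, Object value) {
        if (!previous.containsKey(property)) previous.put(property, target.get(property));
        target.put(property, value);
    }

    // Conflict on one property: restore only that property's old value.
    void restore(String property) {
        if (previous.containsKey(property)) target.put(property, previous.get(property));
    }
}
```

Replaying Rules 1-3 on a person with allowDenySweets = "tbd": the Rule1 command applies (allowDenySweets, "allow") and (freeDay, "Monday"), the Rule2 command applies (freeDay, "Wednesday"), and the conflict raised by Rule3 restores allowDenySweets to "tbd" while freeDay stays "Wednesday".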

Conclusions and Further Reading:

Classical methods of knowledge representation and reasoning are based on the assumption that the information available is complete and consistent. However, in many problems and domains we find incomplete statements or rules, with unknown conditions and contradictory conclusions. Defeasible reasoning addresses the problem of reasoning under uncertainty by allowing conclusions to be retracted in the presence of new information.
Truth maintenance systems, defeasible logic and argumentation are some approaches towards defeasibility. They are all presented in the previous paragraphs with a short introduction and examples.
Gorgias is a general argumentation framework that combines the ideas of preference reasoning and abduction. It was developed as a Prolog meta-interpreter to support a dialectical argumentation process for the development of applications of argumentation. More information can be found in the paper “Gorgias: Applying argumentation” [6].


[1] Defeasible Reasoning, Stanford Encyclopedia of Philosophy.
[2] Pollock, J. L. Defeasible Reasoning.
[3] Problem Solving and Truth Maintenance Systems.
[4] Defeasible Logic.
[5] Defeasible Logic Programming: An Argumentative Approach.
[6] Gorgias: applying argumentation.
[7] Drools Documentation.
[8] Jon Doyle, A Truth Maintenance System.
