In Doyle's TMS, an initial set of assumptions is provided to the inference engine, and the TMS keeps track of the reasons for beliefs as the inference engine runs. Associated with each belief is a set of justifications: the reasons for, or derivations of, that belief. A belief is in the current belief set if and only if at least one justification in its justification set is valid; otherwise, the belief is out. The beliefs that are part of a derivation of a belief are that belief's ancestors; the beliefs derived from a belief are its descendants.
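The in/out rule above can be sketched as a small data structure. This is an illustrative simplification, not Doyle's implementation: justifications are modeled as lists of supporting beliefs, and an assumption is a belief with one empty (always-valid) justification.

```python
# Minimal sketch: a belief is "in" iff at least one of its justifications
# is valid, i.e. every supporting belief in that justification is itself "in".

class Belief:
    def __init__(self, name):
        self.name = name
        self.justifications = []  # each justification: a list of supporting Beliefs

    def is_in(self):
        return any(all(b.is_in() for b in just)
                   for just in self.justifications)

# Example: an assumption A supports a derived belief P.
a = Belief("A")
a.justifications.append([])    # empty justification: A is assumed
p = Belief("P")
p.justifications.append([a])   # P is derived from A

print(p.is_in())               # True: P has a valid justification via A
a.justifications.clear()       # retract the assumption
print(p.is_in())               # False: P's only justification is no longer valid
```

Note that this naive recursive check assumes an acyclic justification graph; a full TMS must also handle circular support.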
Doyle points out that most systems represent a belief P in one of three states: true, when only P is believed; false, when only ~P is believed; and unknown, when neither P nor ~P is believed. Doyle argues that a fourth state is necessary to express the case in which both P and ~P are believed at the same time. Doyle does not name this fourth state, so it will be referred to here as contradicted.
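The four states are determined entirely by which of P and ~P are currently in the belief set, as this small sketch (the name "contradicted" is this paper's label, not Doyle's) makes explicit:

```python
from enum import Enum

# The four belief states for a proposition P, derived from whether
# P and/or ~P are currently in the belief set.
class BeliefState(Enum):
    TRUE = "only P believed"
    FALSE = "only ~P believed"
    UNKNOWN = "neither P nor ~P believed"
    CONTRADICTED = "both P and ~P believed"

def state(p_in, not_p_in):
    if p_in and not_p_in:
        return BeliefState.CONTRADICTED
    if p_in:
        return BeliefState.TRUE
    if not_p_in:
        return BeliefState.FALSE
    return BeliefState.UNKNOWN

print(state(True, True))   # BeliefState.CONTRADICTED
```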
In Doyle's TMS, the inference engine runs as it would without the TMS until a contradicted belief is found within the belief set. When a contradicted belief is found, the truth maintenance process is invoked on the belief set. Doyle's truth maintenance process marks both of the contradicting beliefs as out of the belief set. Then each of the beliefs is removed from the justification sets of its consequences (the beliefs that were derived directly from it). The truth maintenance process is called recursively on any of the consequences that do not have any other valid justifications.
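The retraction step can be sketched as follows. This is a simplified reading of the process just described, with an assumed dictionary-based graph representation rather than Doyle's actual data structures:

```python
# Sketch: mark a belief out, strip it from its consequences' justification
# sets, and recurse on any consequence left with no valid justification.
# `graph` maps belief names to {'justifications': list of sets of supporting
# belief names, 'consequences': set of derived belief names};
# `in_set` holds the names of beliefs currently in.

def retract(belief, graph, in_set):
    in_set.discard(belief)
    for cons in graph[belief]['consequences']:
        # Remove every justification of the consequence that relied on `belief`.
        graph[cons]['justifications'] = [
            j for j in graph[cons]['justifications'] if belief not in j
        ]
        # With no valid justification left, the consequence goes out too.
        if cons in in_set and not graph[cons]['justifications']:
            retract(cons, graph, in_set)

def resolve_contradiction(p, not_p, graph, in_set):
    retract(p, graph, in_set)
    retract(not_p, graph, in_set)

# Example: P and ~P contradict; Q was derived solely from P.
graph = {
    'P':  {'justifications': [set()], 'consequences': {'Q'}},
    '~P': {'justifications': [set()], 'consequences': set()},
    'Q':  {'justifications': [{'P'}], 'consequences': set()},
}
in_set = {'P', '~P', 'Q'}
resolve_contradiction('P', '~P', graph, in_set)
print(sorted(in_set))  # []: Q lost its only justification and went out with P
```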
Assuming that the inference engine is sound, the only way a contradiction can occur is if at least one of the initial assumptions is incorrect. The problem is to identify which assumption is the culprit (to use Doyle's terminology). Intuitively, one would expect that each assumption is more likely to be correct than incorrect. This means that the more assumptions that can keep their original truth values without leading to a contradiction, the more likely it is that that particular assignment of truth values is correct. With an assumption set of size n, there are 2^n possible assignments of truth values. The brute-force method would be to start with the initial assignment, traverse the belief network to assign each belief a value of in or out, and check for a contradiction. If a contradiction is found, then change just one of the assumption assignments and check for a contradiction again. Repeat the process until a consistent assignment is found or all possible assignments have been exhausted. If a consistent assignment is found, then keep that assignment and start the inference engine again.
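The brute-force search can be sketched as below. The `consistent` predicate is a stand-in, assumed for illustration, for traversing the belief network under a candidate assignment and checking for a contradicted belief; candidates are ordered so that assignments flipping the fewest initial truth values are tried first, reflecting the intuition that each assumption is more likely correct than not.

```python
from itertools import product

# Brute-force search over the 2^n truth-value assignments to n assumptions,
# preferring assignments that differ least from the initial one.
def find_consistent(initial, consistent):
    n = len(initial)
    candidates = sorted(
        product([True, False], repeat=n),
        key=lambda a: sum(x != y for x, y in zip(a, initial)),  # fewest flips first
    )
    for assignment in candidates:   # up to 2^n checks in the worst case
        if consistent(assignment):
            return assignment
    return None                     # every assignment leads to a contradiction

# Toy example: three assumptions, contradiction iff all three are True.
initial = (True, True, True)
result = find_consistent(initial, lambda a: not all(a))
print(result)  # (True, True, False): consistent after flipping one assumption
```

The sort makes the cost even worse than a plain enumeration, which underlines the point made next: the method is far too expensive to run at every contradiction.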
The brute-force method is very expensive, and it would have to be run every time a contradiction is found. Even if it could be made inexpensive, the resulting belief set could be radically different from what it was previously, leading to erratic behavior of the system. Doyle offers four heuristics for altering the assignment of truth values to assumptions after a contradiction is found. They are described here only in summary. See Doyle for complete descriptions.
None of Doyle's alternatives is particularly attractive. Random rejection does not use reasoning to select an assumption to discard. Default assumptions and sequences of alternatives do provide greater control, but require extra input from some outside source, most likely a human user. Equivalence class representatives only help when a problem solver can produce multiple solutions to the same problem.
A new method for selecting an assumption to reject is needed. This method should not require predefined alternatives and should resolve any contradiction, not just those that are caused by the arbitrary choice of a solution from an equivalence class. The new method cannot rely on the inference engine, because inference has already failed by producing a contradiction. The method must rely on the structure of the TMS graph to decide which assumption should be rejected when a contradiction is found. The method should be domain-independent and, if possible, mimic human cognition.
The rest of this paper describes such a method: the Truth Maintenance System with Cognitive Dissonance (TMS-CD). As the name implies, the TMS-CD is an attempt to incorporate the ideas of Cognitive Dissonance Theory into the structures of a TMS. Section 3 provides a summary background of Cognitive Dissonance Theory for the interested reader. Some readers may wish to skip directly to Section 4, which describes the TMS-CD itself.