An Introduction to Deductive Reasoning

David W.C. Telson · 2019/07/14 · 12 minute read

One of my absolute favorite books is Probability Theory: The Logic of Science by E.T. Jaynes. The book was published posthumously in 2003, after Jaynes’s death in 1998. While the book presents a compelling philosophical argument for probability theory, it is known to be incomplete and to contain errors. Despite this, I think it is a very important work in the history of probability theory, and a great foundation for how one can reason with incomplete information.

In the initial chapters of the book, Jaynes begins with an introduction to deductive logic, which is considered by many to be the “gold standard” of reasoning. After explicating the general concept of deductive logic, he extends it, by way of George Polya’s plausible reasoning, to cases where there is insufficient information to perform deduction. Only after this point does he invoke Cox’s theorem to quantify plausible reasoning into the familiar form of probability theory.

With all that said, I thought it would be useful to explore deductive reasoning on our own, to serve as a foundation for plausible, and then probabilistic, reasoning in the same vein as Jaynes. I will admit that this exercise is as much for my own understanding as anything else. Aside from Jaynes’s book, I will also reference ideas from the book How to Prove It by Daniel Velleman.

We begin with the notion of a proposition. We define a proposition as a statement about the universe that is either true or false. There is a lot more to the nature of propositions (e.g., the implications of Gödel’s famous incompleteness theorems), but we will hold these ideas back for now. Here are some propositions we could consider:

\[ P_1 \equiv ``\text{My favorite color is purple.''} \]

\[ P_2 \equiv ``\text{It will rain today.''} \]

Author’s note: I apologize for the use of in-quote and out-quote punctuation. I am hoping to make it clear that a proposition is a complete statement, while also closing out my sentences with appropriate punctuation. I recognize there might be a better way to do so.

Notice that we use the \(\equiv\) sign rather than \(=\), as we mean “equivalent by definition”; that is, \(P_1\) actually stands for \(``\text{My favorite color is purple.''}\), or more precisely, \(P_1\) and \(``\text{My favorite color is purple.''}\) have the same “truth value”. Also note that \(P_1\) is an opinion (held by me), rather than some universal truth. Both propositions are vague in some sense, e.g. when is “today”, and what geography is this limited to? Surely it is bound to rain some amount somewhere on the planet we call “Earth” today. We do not let this vagueness bother us for now.

Both of these propositions have a “truth value”, that is, they are necessarily either true or false (at least in the form of logic we are using). We sometimes write something that is always true (also called a tautology) as \(\top\), and something that is always false (also called a contradiction) as \(\bot\). We use deductive logic to reason about the truth or falsity of a proposition, given other propositions and logical connectives (also called logical operators) between them.

The first logical operator we will discuss is called negation, written as \(\neg\). We use negation in tandem with a proposition, e.g. \(\neg P\). What does negation do? Intuitively it gives us the opposite (or negative) of the proposition e.g., if \(P_2 \equiv ``\text{It will rain today.''}\), then \(\neg P_2 \equiv ``\text{It will not rain today.''}\). If we say that \(P_3 \equiv \neg P_2\), then \(\neg P_3 \equiv \neg\neg P_2 \equiv P_2\). This operator will be very useful in practice.
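If a concrete example helps, here is a minimal Python sketch (the variable names are my own) that models a proposition’s truth value as a boolean and negation as the built-in `not`:

```python
# Model a proposition's truth value as a bool; negation is `not`.
p2 = True          # P2 ≡ "It will rain today." (assumed true for illustration)
p3 = not p2        # P3 ≡ ¬P2 ≡ "It will not rain today."

print(p3)          # False

# Double negation recovers the original proposition: ¬¬P2 ≡ P2.
for p2 in (True, False):
    assert (not (not p2)) == p2
```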

The next operator we wish to introduce is “disjunction”, or more intuitively, the “or” operator, symbolized with \(\vee\). Whereas \(\neg\) was a “unary” operator (with the prefix “un” as in one, e.g., unity), \(\vee\) is a “binary” operator, i.e. an operator that works on two propositions. For instance, we could say \(P_2 \vee P_3\), that is \(``\text{Either it will or will not rain today.''}\). We could assign this disjunction to a proposition \(P_4 \equiv P_2 \vee P_3\). Disjunction is not exclusive: one or both of the propositions can be true for the disjunction to return true. An “exclusive disjunction” or “XOR” operator does exist, and is often symbolized with \(\oplus\).

Notice that since \(P_3 \equiv \neg P_2\), we have \(P_4 \equiv P_2 \vee \neg P_2\). Since \(P_2\) is either true or false, and its negation has the opposite truth value, we must have \(P_4 \equiv P_2 \vee \neg P_2 \equiv \top\), that is \(P_4\) must always be true i.e. \(P_4\) is a tautology. This is not necessarily true of all disjunctions, just in the case where we have a disjunction of a proposition and its negation.
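A quick sketch in the same boolean encoding confirms this: the disjunction of a proposition with its negation comes out true under every assignment, while an ordinary disjunction does not.

```python
# P ∨ ¬P is true for every possible truth value of P: a tautology.
for p2 in (True, False):
    p4 = p2 or (not p2)
    assert p4 is True

# An arbitrary disjunction is not a tautology: it is false
# when both disjuncts are false.
print(False or False)   # False

# For contrast, exclusive disjunction (XOR, `^` on bools) also rejects
# the case where both inputs are true.
print(True ^ True)      # False
```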

The opposite of “disjunction” could be considered to be “conjunction”, that is, the “and” operator, symbolized with \(\wedge\). Like \(\vee\), \(\wedge\) works on two propositions, e.g. \(P_6 \equiv P_5 \wedge P_2 \equiv ``\text{It will be cloudy today and it will rain today.''}\), where \(P_5 \equiv ``\text{It will be cloudy today.''}\). Notice the effect when we use conjunction on a proposition and its negation: \(P_7 \equiv P_2 \wedge \neg P_2 \equiv \bot\), i.e. the conjunction of a proposition and its negation is necessarily false, i.e. it is a contradiction.

Interestingly, we don’t actually need to define conjunction to use it: it can be derived from disjunction and negation. Even more interesting is that all three logical operators can be derived from either of two single operators, referred to as NAND (not both) with the symbol \(\uparrow\), and NOR (not either) with the symbol \(\downarrow\). We’re not going to get into the concept of “functional completeness”, but it is really cool and I highly suggest you check it out.
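To give a taste of functional completeness, here is a small sketch (the function names are mine) deriving negation, conjunction, and disjunction from NAND alone, and checking the results against Python’s built-in operators:

```python
def nand(p, q):
    """NAND: false only when both inputs are true."""
    return not (p and q)

def not_(p):
    """Negation: ¬P ≡ P ↑ P."""
    return nand(p, p)

def and_(p, q):
    """Conjunction: P ∧ Q ≡ (P ↑ Q) ↑ (P ↑ Q)."""
    return nand(nand(p, q), nand(p, q))

def or_(p, q):
    """Disjunction: P ∨ Q ≡ (P ↑ P) ↑ (Q ↑ Q)."""
    return nand(nand(p, p), nand(q, q))

# Check the derived operators against Python's built-ins on every input.
for p in (True, False):
    assert not_(p) == (not p)
    for q in (True, False):
        assert and_(p, q) == (p and q)
        assert or_(p, q) == (p or q)
```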

Before we go any further, we should really talk about “Truth Tables”. A truth table is an incredibly useful tool in deductive logic that lets us understand the relationships between propositions given all possible truth values the propositions could assume. What follows is a truth table for two arbitrary propositions \(P\) and \(Q\) and their conjunction \(P \wedge Q\). Here \(T\) represents a truth value of “True” and \(F\) stands for a truth value of “False”:

\[ \begin{array}{|c|c|c|} P & Q & P \wedge Q\\ \hline T & T & T\\ T & F & F\\ F & T & F\\ F & F & F\\ \end{array} \]
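Truth tables are also easy to generate programmatically. Here is one way to reproduce the table above in Python, enumerating every assignment with itertools.product:

```python
from itertools import product

# Enumerate every combination of truth values for P and Q and
# print the corresponding value of P ∧ Q, one row per assignment.
print("P     Q     P AND Q")
for p, q in product((True, False), repeat=2):
    print(f"{p!s:<5} {q!s:<5} {(p and q)!s}")
```

The rows print in the same order as the table above.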

We can extend this table to show all of our elementary logical operators:

\[ \begin{array}{|c|c|c|c|c|} P & Q & \neg P & P \wedge Q & P \vee Q\\ \hline T & T & F & T & T\\ T & F & F & F & T\\ F & T & T & F & T\\ F & F & T & F & F\\ \end{array} \]

We can even include a third proposition, and look at the relationships implied, for instance:

\[ \begin{array}{|c|c|c|c|} P & Q & R & P \vee Q \vee R\\ \hline T & T & T & T \\ T & T & F & T \\ T & F & T & T \\ F & T & T & T \\ F & F & T & T \\ F & T & F & T \\ T & F & F & T \\ F & F & F & F \\ \end{array} \]

As we add propositions that are independent (i.e. whose truth values are not determined by the truth values of the other propositions) and non-trivial (i.e. not contradictions or tautologies), we have to add additional rows to our truth table. Can we determine how many rows will be required given a number of independent propositions with non-trivial truth values? Yes we can! A single proposition requires two rows, one for each of its possible truth values. Adding a second independent proposition doubles this to \(2 \times 2 = 4\) rows, since each truth value of the first can be paired with either truth value of the second. Every additional independent proposition doubles the count again, giving \(2 \times 2 \times \cdots \times 2 = 2^n\) rows for \(n\) propositions.

So we need a truth table with \(2^n\) rows to represent \(n\) independent and non-trivial propositions. The number of columns depends on what relations among the propositions we care to examine.
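We can confirm the \(2^n\) count by direct enumeration; a quick sketch:

```python
from itertools import product

# n independent, non-trivial propositions admit 2 * 2 * ... * 2 = 2**n
# distinct truth assignments, i.e. 2**n rows in the truth table.
for n in range(1, 6):
    rows = list(product((True, False), repeat=n))
    assert len(rows) == 2 ** n
    print(f"{n} proposition(s): {len(rows)} rows")
```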

We should note that the logical operators discussed thus far have an order of operations, just like in arithmetic or algebra. Technically this order is a convention, but we adopt the standard precedence: \(\neg\) binds most tightly, followed by \(\wedge\), then \(\vee\), then \(\implies\) and \(\iff\). Additionally, logical relations among propositions satisfy a number of equivalences that help to simplify the analysis of their truth values, such as double negation and De Morgan’s laws. We don’t dive into these equivalences in this post, but they are well worth knowing.
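As an example of such an equivalence, here is a brute-force check of De Morgan’s laws in the same boolean encoding used above:

```python
from itertools import product

# De Morgan's laws: ¬(P ∧ Q) ≡ ¬P ∨ ¬Q  and  ¬(P ∨ Q) ≡ ¬P ∧ ¬Q.
for p, q in product((True, False), repeat=2):
    assert (not (p and q)) == ((not p) or (not q))
    assert (not (p or q)) == ((not p) and (not q))
print("De Morgan's laws hold under every assignment.")
```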

With the concepts of propositions, logical connectives, and truth tables under our belt, we can now speak to the crown jewel of deductive reasoning: the deductive argument. First we wish to define another logical operator, known as “implication”, which is symbolized as \(\implies\). We say that \(P \implies Q\) if whenever \(P\) is true, \(Q\) is also true. Note that “implies” doesn’t have the same meaning it does in common speech: implication indicates no causal relationship between \(P\) and \(Q\). It doesn’t even mean that if \(P\) is false, then \(Q\) is necessarily false, only that when \(P\) is true, \(Q\) is also true. Let’s examine the truth table of implication.

\[ \begin{array}{|c|c|c|} P & Q & P \implies Q\\ \hline T & T & T\\ T & F & F\\ F & T & T\\ F & F & T\\ \end{array} \]

A keen observer might note that implication is identical to \(\neg P \vee Q\):

\[ \begin{array}{|c|c|c|c|c|} P & Q & \neg P & \neg P \vee Q & P \implies Q\\ \hline T & T & F & T & T\\ T & F & F & F & F\\ F & T & T & T & T\\ F & F & T & T & T\\ \end{array} \]
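We can verify this equivalence exhaustively. In the sketch below, the helper `implies` is my own name for material implication, defined directly from its truth table:

```python
from itertools import product

def implies(p, q):
    """P ⟹ Q: false only in the single row where P is true and Q is false."""
    return not (p and not q)

# P ⟹ Q has the same truth value as ¬P ∨ Q under every assignment.
for p, q in product((True, False), repeat=2):
    assert implies(p, q) == ((not p) or q)
```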

An extension of implication that we note here, but will not dive any further into is the “bi-conditional”, or “if and only if”, symbolized \(\iff\). It is an implication that goes both ways, e.g., \(P \iff Q \equiv (P\implies Q)\wedge(Q\implies P)\).
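Reusing the `implies` helper from the previous sketch, the bi-conditional reduces to boolean equality:

```python
# (P ⟺ Q) ≡ (P ⟹ Q) ∧ (Q ⟹ P); for boolean truth values this is equality.
for p, q in product((True, False), repeat=2):
    assert (implies(p, q) and implies(q, p)) == (p == q)
```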

Using \(\implies\), we can make deductive arguments, that is we can reason deductively. The form of a deductive argument is simple: we begin with some premises and we determine the truth of a conclusion.

Suppose we are given that the propositions \(``\text{If it is rainy today, it is cloudy today.''}\) and \(``\text{It is rainy today.''}\) are both true. We can therefore conclude that the proposition \(``\text{It is cloudy today.''}\) is true. This conclusion is necessarily true, i.e. it is required by the premises. Don’t let the content of the statements get in the way of your thinking: it’s the form that matters, not the content.

We can restate our deductive argument symbolically, with \(P \equiv ``\text{It is rainy today.''}\), \(Q \equiv ``\text{It is cloudy today.''}\), and \(\therefore\) meaning “therefore”:

\[ \begin{aligned} & \ P \implies Q\\ & \ P \\ \hline & \therefore \ Q \end{aligned} \]

If we examine the truth table for implication, we see that in the row where \(P \implies Q\) and \(P\) are both true, \(Q\) is also true. There are no other cases where both premises are true, so the truth value of \(Q\) is forced upon us! This is the power of deductive logic: the premises guarantee the truth value of the conclusion. This specific form of reasoning is referred to as “Modus Ponens”, which is Latin for “affirming by affirming”.
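This “check every row where the premises are all true” procedure can be automated. Here is a minimal brute-force validity checker (the names `is_valid` and `implies` are my own, not standard library functions); an argument form is valid exactly when every assignment that makes all the premises true also makes the conclusion true:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def is_valid(premises, conclusion, n_vars=2):
    """An argument form is valid if every truth assignment that makes
    all of the premises true also makes the conclusion true."""
    for values in product((True, False), repeat=n_vars):
        if all(premise(*values) for premise in premises):
            if not conclusion(*values):
                return False
    return True

# Modus ponens: from P ⟹ Q and P, conclude Q.
print(is_valid([lambda p, q: implies(p, q), lambda p, q: p],
               lambda p, q: q))    # True -- the truth of Q is forced
```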

Note that the converse is not true, i.e. if we know \(Q\), we cannot guarantee that \(P\) is also true. This is because \(P\) can be true or false given that \(Q\) is true, which can be seen via the truth table for implication. This is a formal fallacy (i.e. an error of form) referred to as “Affirming the Consequent”. This fallacy shares its form with “abductive reasoning”, i.e. reasoning to the best explanation. We will have a lot more to say about this in the future.
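Using the same `is_valid` checker from the sketch above, the fallacious form indeed fails:

```python
# Affirming the consequent: from P ⟹ Q and Q, conclude P.
# Some assignment makes both premises true but the conclusion false.
print(is_valid([lambda p, q: implies(p, q), lambda p, q: q],
               lambda p, q: p))    # False -- the form is invalid
```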

A deductive argument that suffers from an error of form is said to be “invalid”. An argument whose conclusion is guaranteed by its premises is said to be “valid”. A valid argument whose premises are all true is referred to as “sound”, whereas an argument with one or more false premises is “unsound”. The soundness of an argument cannot be guaranteed by the form of the argument alone! This is an important issue that has major implications for how we reason in the real world.

Implication also gives us another form of deductive argument through “Modus Tollens”, which is Latin for “denying by denying”. It takes the following symbolic form:

\[ \begin{aligned} & \ P \implies Q\\ & \ \neg Q \\ \hline & \therefore \ \neg P \end{aligned} \]
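And with the same `is_valid` checker sketched earlier, modus tollens comes out valid:

```python
# Modus tollens: from P ⟹ Q and ¬Q, conclude ¬P.
print(is_valid([lambda p, q: implies(p, q), lambda p, q: not q],
               lambda p, q: not p))    # True -- the form is valid
```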

This line of reasoning is guaranteed as demonstrated via the truth table for implication. Further, it is the mode of reasoning advocated by the philosopher of science Karl Popper to resolve David Hume’s “Problem of Induction”. Popper’s program is referred to as Falsificationism, and will be of interest to us later, as will “the Problem of Induction.”

Deductive reasoning is the workhorse of logic and mathematical proof. In principle we would like to apply deductive reasoning to all of our problems to ensure that our conclusions are guaranteed to be true; however, in order to perform deductive logic we must adopt premises, and for our argument to be sound we must know that our premises are true. This is often a luxury that we do not have in addressing the problems we wish to solve.

How then do we reason when we have insufficient information to perform deduction? Humans (and arguably other forms of life) reason with incomplete and uncertain information all of the time. How is it that we are successful in our endeavors? This is one of the great questions of epistemology (the philosophy of knowledge) and the philosophy of science (arguably a sub-discipline of epistemology focusing on how we conduct science). The answer would have major implications for the design and application of statistics and machine learning.

I look forward to exploring this question more over the course of this blog!