
What Does "Intelligent Agency by Proxy" Do for the Design Inference?

By Wesley R. Elsberry

Posted May 6, 2002

William A. Dembski wrote "The Design Inference" (TDI) as his technical explication of the logic and methods of inferring that an event must be explained as being due to design. In other essays aimed at less technically inclined audiences (and the book, "Intelligent Design", which collects some of those essays), Dembski has also written about making design inferences (DIs). There are certain aspects of Dembski's popular writings which appear to be at odds with, or at least unsupported by, the technical explication of "The Design Inference".

[Quote]
Thus, to claim that laws, even radically new ones, can produce specified complexity is in my view to commit a category mistake. It is to attribute to laws something they are intrinsically incapable of delivering - indeed, all our evidence points to intelligence as the sole source for specified complexity. Even so, in arguing that evolutionary algorithms cannot generate specified complexity and in noting that specified complexity is reliably correlated with intelligence, I have not refuted Darwinism or denied the capacity of evolutionary algorithms to solve interesting problems. In the case of Darwinism, what I have established is that the Darwinian mechanism cannot generate actual specified complexity. What I have not established is that living things exhibit actual specified complexity. That is a separate question.
Does Davies's original problem of finding radically new laws to generate specified complexity thus turn into the slightly modified problem of finding radically new laws that generate apparent - but not actual - specified complexity in nature? If so, then the scientific community faces a logically prior question, namely, whether nature exhibits actual specified complexity. Only after we have confirmed that nature does not exhibit actual specified complexity can it be safe to dispense with design and focus all our attentions on natural laws and how they might explain the appearance of specified complexity in nature.
[End Quote - WA Dembski, Meta 139: Dembski on "Explaining Specified Complexity"]

In "The Design Inference", Dembski claims that we can examine the properties of an event and classify it as being due to "regularity", "chance", or "design". We need only the event itself and some side information by which a specification may be formed. Under Dembski's Design Inference, information about the cause of the event is not needed. This is important to Dembski's argument because Dembski wants us to conclude "design" for an event and then infer "intelligent agency" in cases where we have no information about the "intelligent agent" which may have caused the event in question.

In Dembski's examples in TDI involving known agent causation, it is clear that the known causal stories are ignored. They are not submitted to his Explanatory Filter as possible "regularity" or "chance" hypotheses. That Caputo cheated is not treated as either "regularity" or "chance". Plagiary is not treated as either "regularity" or "chance". DNA identification is not treated as either "regularity" or "chance". Mendel falsifying data is not treated as either "regularity" or "chance". These causal stories instead are treated as the basis for "specifications" and utilized in classifying an event as "due to design". [In an upcoming paper to appear in the journal "Biology and Philosophy", John Wilkins and I develop the concept of "ordinary design", under which agents we know something about are treated as causal regularities, not as instances of mysterious non-natural action.]

But in "Explaining Specified Complexity", Dembski does treat a known causal story as either "regularity" or "chance". The causal story in question is that of an evolutionary algorithm which yields a specified result in a small number of tries out of a large problem space. Here, Dembski tells us that the complexity of the result (found by reference of its likelihood of occurrence due to a "chance" hypothesis") is apparently large but actually zero, because the probability of the result given its known cause is 1.

As pointed out above, Dembski's TDI does not condone plugging in known causes as "regularity" or "chance" hypotheses. At best, one might plug in a hypothesized cause that is identical to an actual cause. After all, some things are due to regularity and chance. But let's consider what follows from this change in operation between TDI and "Explaining Specified Complexity".

We have two events, each yielding a solution to a 100-city instance of the Travelling Salesman Problem (TSP). (I select this problem as an example because it has well-known characteristics and I have been using it since 1997.) In one event, we know that a human agent has toiled long and hard to produce the solution. In the other case, a genetic algorithm was fed the city distance data and spit out the same solution (or an equivalent approximate solution) some time later. We will now apply the Design Inference from TDI (TDI_TDI) and the Design Inference as modified in "Explaining Specified Complexity" (TDI_ESC).
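For concreteness, here is a rough sketch of the second causal story: a simple genetic algorithm evolving approximate solutions to a 100-city TSP instance. This is my own illustration; the random city coordinates, operators, and parameter choices are assumptions of the example, not anything drawn from Dembski.

    # A minimal genetic algorithm for an illustrative 100-city TSP instance.
    # Population size, mutation rate, and generation count are arbitrary
    # choices made for the sketch.
    import math
    import random

    def tour_length(tour, cities):
        return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def order_crossover(p1, p2):
        # Keep a slice of parent 1; fill the remaining cities in parent 2's order.
        a, b = sorted(random.sample(range(len(p1)), 2))
        segment = p1[a:b]
        rest = [c for c in p2 if c not in segment]
        return rest[:a] + segment + rest[a:]

    def mutate(tour, rate=0.02):
        tour = tour[:]
        for i in range(len(tour)):
            if random.random() < rate:
                j = random.randrange(len(tour))
                tour[i], tour[j] = tour[j], tour[i]
        return tour

    def ga_tsp(cities, pop_size=100, generations=300):
        n = len(cities)
        pop = [random.sample(range(n), n) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda t: tour_length(t, cities))
            parents = pop[:pop_size // 2]
            children = [mutate(order_crossover(*random.sample(parents, 2)))
                        for _ in range(pop_size - len(parents))]
            pop = parents + children
        return min(pop, key=lambda t: tour_length(t, cities))

    cities = [(random.random(), random.random()) for _ in range(100)]
    best = ga_tsp(cities)
    print(tour_length(best, cities))   # an approximate shortest closed tour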

For TDI_TDI, the known causal stories are irrelevant. Thus, both events are treated identically, which is to say that our speculations concerning how these events occurred may be the basis for specifications, but otherwise do not impinge upon our analysis. We eliminate "regularity", since these are not high probability events. We eliminate chance, because these are not simply intermediate probability events. We conclude that the events are due to "design" because they are both "small probability" (and in fact meet Dembski's universal small probability bound) and are "specified" as the shortest closed loop path that visits each city once. Both events are classed as having "specified complexity".
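The "small probability" step above can be checked with a back-of-the-envelope calculation (my figures, offered only to show the scale): a 100-city instance has (100-1)!/2 distinct closed tours, so the uniform-chance probability of hitting the one specified shortest tour is on the order of 10^-156, below Dembski's universal probability bound of 10^-150.

    # Rough scale check: the number of distinct closed tours on 100 labeled
    # cities is 99!/2, and the uniform-chance probability of one specified
    # tour is its reciprocal.
    import math

    log10_tours = (math.lgamma(100) - math.log(2)) / math.log(10)
    print(round(log10_tours, 1))   # about 155.7, i.e. roughly 5e155 tours
    # So the uniform-chance probability is about 2e-156, below 1e-150.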

This is not the case for TDI_ESC. Now, there is an asymmetry in how we treat the two events based upon our knowledge of the causal stories. For the solution given by the human, we again decline to utilize our knowledge of causation; things proceed as for TDI_TDI, and we find the solution is due to "design". Not so for the solution produced by the GA. There are, in fact, two alternative ways in which this event may be processed, each of which avoids placing it in the "due to design" bin.

The one explicated by Dembski in "Explaining Specified Complexity" goes like this. First, regularity is eliminated; the event is not of high probability. Second, we consider chance hypotheses and find our complexity estimate thereby. We submit as a chance hypothesis the known causal story: the result was obtained by operation of a genetic algorithm. Unsurprisingly, when we know that an event is due to a particular cause and we use that cause as a "chance" hypothesis, we find that the event is "due to chance". And because under TDI_ESC we base our complexity measure upon the likelihood of occurrence due to the relevant chance hypothesis, we find that the probability of the event given our "chance" hypothesis is high, and thus the complexity is very low indeed. But even this is inconsistent with Dembski's discussion of complexity measures in TDI, where Dembski asserts that complexity measures are measures of difficulty, and that information measures precisely encapsulate this notion. The difficulty of the problem does not change depending upon the process solving it, which is what Dembski implies must be the case with this argument.
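Put in terms of the information measure TDI uses (complexity as the negative base-2 logarithm of the event's probability under the chance hypothesis in play), the switch of hypotheses in TDI_ESC makes the measured complexity of one and the same tour collapse from hundreds of bits to zero. The rough figures below are my own illustration.

    # Complexity of the same specified 100-city tour under two different
    # "chance" hypotheses, measured as -log2 of the probability.
    import math

    bits_uniform = (math.lgamma(100) - math.log(2)) / math.log(2)
    print(round(bits_uniform))        # about 517 bits under uniform chance

    p_given_ga = 1.0                  # TDI_ESC plugs in the known GA cause
    print(-math.log2(p_given_ga))     # 0.0 bits: the complexity vanishes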

The second possible way to eliminate the event yielded by the genetic algorithm is to treat the operation of the genetic algorithm as a regularity. In this case, we again use our knowledge that the event was caused by a genetic algorithm. We note that genetic algorithms are capable of solving problems of this apparent complexity, and class the solution as being due to the regularity of solution by genetic algorithm. Again, our classification is unsurprising: since we applied our known causal story to a decision node in the Explanatory Filter, we find that our known causal story explains the event.

In either of the above ways of avoiding making a successful design inference for the solution produced by genetic algorithm, we apply knowledge of the cause of the event differently from when we know that the cause is an intelligent agent. In the case where an intelligent agent is known to act, we are told that the event represents "actual specified complexity". In the case where an algorithm is known to have produced the event, we are told that the event represents "apparent specified complexity". Note that "apparent specified complexity" is established only because we have knowledge of the causal process and use it differently from the analytic method given in TDI.

To clarify why these cases indicate problems for making Design Inferences, consider an event where we are shown a solution or approximate solution to a 100-city TSP, but we are not given any information as to the causal story. We do not know whether an intelligent agent or some algorithm worked out this solution; we merely have the solution and our knowledge of the TSP problem in general. According to the procedures and logic given in TDI, we can make a reliable inference of "design" given just that information. And as indicated before, this event when analyzed according to TDI_TDI is classified as "due to design". We now have a problem: The event is "due to design", but it may not mark the work of an intelligent agent in producing it. This is a challenge to the claim that TDI gives us a reliable method of inferring the action of intelligent agents. Because the same event could have either "apparent specified complexity" or "actual specified complexity", we find ourselves exactly where we were before having used TDI. The mere fact that an event has "specified complexity" does not enable us to reliably infer the action of an intelligent agent in producing that event.

One way of approaching this challenge is to repudiate the claim that there is any such split between "apparent specified complexity" and "actual specified complexity". This would preserve the concept of "specified complexity" as possibly having some bearing upon marking the action of intelligent agency, rather than simply being a complicated piece of rhetoric whose content is solely a long-winded way of begging the question. Since the only effects of "apparent" vs. "actual" specified complexity categories are to cast doubt upon the logical framework and methods of the Design Inference, repudiating it seems the clear way to proceed. But then there is still the problem that human and algorithm may produce identical events that are tagged as having "specified complexity".

When "apparent" vs. "actual" specified complexity is repudiated, the residual problem may then be approached by claiming that whenever an algorithm is the cause of an event having the property of "specified complexity", that we may infer that an intelligent agency designed and implemented the algorithm, and that the production of events by such algorithms is in each case to be considered "intelligent agency by proxy" (IABP). [I'll note that various correspondents produced the concept of "intelligent agency by proxy" as a means of defending some of Dembski's arguments, and that the general thrust of the argument corresponds to Dembski's replies to questions after his presentation at the 1997 "Naturalism, Theism, and the Scientific Enterprise" conference.] Thus, whenever "design" is found, we are assured by the Design Inference that an intelligent agent operated, either to produce the event proximally, or to produce the process by which the event occurred ultimately.

There are further problems that ensue from use of IABP, but these are mostly simple inconsistencies between some of Dembski's claims made outside of "The Design Inference" and those covered within that book. In other words, retaining the "apparent" vs. "actual" specified complexity distinction introduced by Dembski logically invalidates the Design Inference (it is somewhat ironic for an author to vitiate his own work), while dumping it and adopting IABP yields a revised form of TDI which is still arguable.

Now I will consider what adoption of IABP implies for the Design Inference.

First, IABP invalidates Dembski's claim in "Intelligent Design" that "functions, algorithms, and natural law" cannot produce specified complexity aka "complex specified information". Instead, functions, algorithms, and natural laws which are produced by intelligent agents and which act as proxies for those agents are stipulated to have the ability to produce events with specified complexity. This leaves the interesting question of whether functions, algorithms, or natural laws exist which do not require an intelligent agent for their instantiation which nevertheless are capable of producing solutions with the "specified complexity" attribute.

Second, IABP means that the method of the Design Inference cannot distinguish between direct proximal action of an intelligent agent in producing an event and indirect action via a proxy one or an infinite number of steps removed. Once a process has been made by an intelligent agent as a proxy, whatever events it produces henceforth would be capable of exhibiting specified complexity. There is no basis in the Design Inference for distinguishing between two events, one produced directly by an intelligent agent, and an identical one produced by that agent's proxy. Consider the TSP example given above. A human can produce a genetic algorithm that solves TSP problems. The same human can work TSP problems even as his algorithm is employed doing the same thing. As long as each is working properly, they may both produce solutions (or equivalently close approximate solutions) to TSP problems. The Design Inference can only detect "specified complexity", and thus cannot tell us whether any particular TSP solution was produced by the human or by his algorithmic proxy.
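The point can be put in a few lines of code (hypothetical names and values throughout): because the inference consults only the event and a specification, two identical solutions must receive identical verdicts, whatever their provenance.

    # The verdict function takes only the event and the specification;
    # nothing about who or what produced the event is consulted.
    def design_verdict(event, is_specified):
        return "design" if is_specified(event) else "chance or regularity"

    # Hypothetical stand-ins for one and the same tour, labeled by us
    # (outside the inference) as human-produced and GA-produced.
    tour_from_human = (0, 17, 42, 5)
    tour_from_ga = (0, 17, 42, 5)
    is_shortest = lambda t: t == (0, 17, 42, 5)   # stand-in specification

    print(design_verdict(tour_from_human, is_shortest))   # "design"
    print(design_verdict(tour_from_ga, is_shortest))      # "design", identically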

Third, IABP undermines Dembski's position taken in "Intelligent Design" that attributing processes rather than contrivances to the intelligent agency of God is an error. One can examine a contrivance as an event via TDI, but the results are ambiguous as to whether the contrivance's specified complexity is due to God's direct intervention in producing the contrivance or to God's indirect causation, one or an infinite number of steps removed, through a function, algorithm, or natural law set up as a proxy process. Thus one cannot distinguish via TDI whether God acts directly or not for any particular contrivance examined.

Fourth, IABP implies that the strongest theological claim that can be predicated upon the Design Inference is a version of Deism wherein the Deist God undertakes creating a complete set of proxy functions, algorithms, and natural laws which result in the universe and life as we know it. Specifically, the Design Inference is incapable of asserting a direct intervention of God in forming irreducibly complex biological systems. Displacing a hypothesized instance of the action of natural selection in adaptation is conceptually beyond the reach of the Design Inference or "specified complexity". At best, on the basis of the Design Inference alone under IABP it could be claimed that the concept and implementation of natural selection is due to God, not that it was not operative as a proxy for God.

In conclusion, the principle of "intelligent agency by proxy" helps save the Design Inference from the logical collapse necessitated by adoption of the distinction between "apparent specified complexity" and "actual specified complexity", but imposes certain costs of its own. In particular, several of the auxiliary statements about the Design Inference made by William Dembski in his popular writings would have to be set aside. These include the claim that "functions, algorithms, and natural law" cannot produce events with specified complexity, and the claim that identification of specified complexity for biological systems implies that natural selection was not operative. IABP and the Design Inference can be used theologically as an argument for the existence of a God with Deist properties. Stronger arguments than that will have to be justified independently. To paraphrase Dembski quoting Eigen, the task of Intelligent Design proponents is to find arguments that rule out natural mechanisms as being capable, even in principle, of causing events with the property of specified complexity. To paraphrase Dembski critiquing Eigen, Dembski's insight is to recognize that algorithms and natural law pose a threat to such in-principle exclusionary arguments, and Dembski's mistake is in thinking that his Design Inference on its own is capable of doing the work of such an exclusionary argument.