Imagine a child asking her father how a car works. “What makes it go?” she asks him. He tells her that inside the car, there is a smaller car driving on a treadmill, which is connected to the wheels of the bigger car. As the little car drives on the treadmill, the big car drives on the road.
Of course, he is just teasing her with an absurd explanation. Cars don’t have little cars inside them, but even if they did, this “explanation” isn’t really an explanation. To explain what makes the big car go, he would have to explain what makes the little car go. The explanation presupposes what it is supposed to explain: cars that go.
But it has the form of an explanation, and it might trick the little girl into believing that her question has been answered.
A more typical example of the homunculus fallacy is the use of a “little man in the head” to explain aspects of consciousness. For example, someone might “explain” vision as “the mind’s eye is looking at a screen in the brain”. This is not an explanation of vision, because it presupposes vision. Even if this described some brain process metaphorically, it would not explain vision. We would now have to explain how the “mind’s eye” can “see”.
An explanation of vision should be in terms of lower-level mental processes that are mechanistic and do not presuppose vision. An explanation of how a car moves should involve objects and processes (pistons, gears, combustion, etc.) that are not cars.
Explanations typically describe hidden mechanisms and invoke general principles that define how those mechanisms work. For example, we can explain the movements of the planets by a hidden mechanism (gravity) that operates by general principles (the law of gravitational attraction and Newton’s laws of motion). This explanation not only allows us to predict the movements of the planets, it also relates them to many other things, such as a stone falling on the Earth. It shows how something specific (planetary motion) is an instance of something very general (gravity and motion). Newton’s theory of gravity and motion has explanatory power, because it can be used to generate such explanations for many things. It reduces complexity by describing and predicting many things with a small number of abstract concepts.
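In their familiar textbook form, those general principles are remarkably compact. As a rough sketch (standard notation: G is the gravitational constant, m1 and m2 are two masses, r is the distance between them, and a is the acceleration of a body of mass m under a force F):

```latex
% Newton's law of universal gravitation: the attractive force between two masses
F = \frac{G \, m_1 m_2}{r^2}

% Newton's second law of motion: how any force changes a body's motion
F = m \, a
```

The same pair of equations covers both the orbiting planet and the falling stone, which is where the reduction in complexity comes from.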
Of course, one can always ask “Why does the world work that way?”. We could seek an explanation for Newton’s theory. We could try to reduce it, and other theories, to an even more general theory. But we would always be left with some theory, and we could always ask for a deeper explanation.
The homunculus fallacy is a pseudo-explanation: an explanation that does not explain, because it does not reduce what it tries to explain to a more abstract level of description. Instead, it posits a hidden mechanism at the same level of description, and that hidden mechanism is typically a version of the very thing it is supposed to explain. (The little car inside the bigger car, for example.) Normally this is done through a metaphor, and the person committing the fallacy is not aware of their error. They may find it hard to recognize the error, because the metaphor has become entrenched in their way of viewing things.
It is very common to think of human motivation and action as pursuing pleasure and avoiding pain. In this view, we want pleasure and we don’t want pain, so we act to pursue pleasure and avoid pain. This view is very intuitive. However, it is based on a homunculus metaphor, and it contains a homunculus fallacy.
Pursuit and avoidance are behaviors. They presuppose motivation and action conceptually. Behavior is what a theory of human motivation and action is supposed to explain. So, we cannot explain human motivation and action with the concepts of pursuit and avoidance. The use of “pursuit” and “avoidance” to describe mental processes is a metaphor. This metaphor is misleading, because it presupposes motivation and action.
We can think of the motivation system as a little man in the head who chases pleasure on a treadmill, and runs away from pain, but this gets us nowhere. Even if this accurately described something about the brain, we would still have to explain why the little man wants pleasure, and why he doesn’t want pain. We would have to explain his motivations.
To explain human motivation and action, we must describe a mechanism that does not have the properties (motivation and action) that we want to explain, and that mechanism should operate by more general principles.
Motivation cannot be explained in terms of motivation. Consciousness cannot be explained in terms of consciousness.
The homunculus fallacy “explains” a system by positing a part of the system that has the properties of the whole system. It conceptually presupposes what it claims to explain.
There is an inverse of the homunculus fallacy. The inverse homunculus fallacy “explains” a system by positing that it is a part of a bigger system that has the properties of the smaller system. It also presupposes what it claims to explain, but by projecting a part onto the whole, rather than the whole onto a part.
The classic, absurd example of the inverse homunculus fallacy is the “world-turtle”: the theory that the world sits on the back of a giant turtle.
See World Turtle on Wikipedia.
We can imagine how this sort of pseudo-explanation arises. A little girl asks her father “What holds the world up?”.
This is a perfectly reasonable question for a little child or an Indian peasant in 500 AD. In ordinary life, objects fall unless they are held up by something, or actively kept up by some process, such as wings flapping. So, our ordinary intuition is that things fall. Today, most people know that the Earth is a ball in space, and that up and down are not absolute directions, but are relative to the center of the Earth. Without this modern knowledge, it is quite natural to think of the Earth as unmoving, and up and down as absolute directions. It is also quite natural to wonder what lies below.
The father of the little girl tells her “The world is sitting on a turtle’s back”.
Of course, this explains nothing. A turtle is an object that exists on the Earth. Turtles are normally held up by the ground or by water when they are swimming. They have something underneath them. They need air, water and food. The idea of a turtle presupposes the Earth as a whole. Our ordinary assumptions about how the world works are hidden inside the concept of a turtle. But those ordinary assumptions are what we are trying to explain.
The little girl might ask “What holds the turtle up?”. The philosophical joke-answer is that it is turtles all the way down. The father could say that the turtle is standing on another turtle. Or he might say that the turtle is standing on the surface of a bigger world. Either way, the “explanation” does not explain.
A turtle-believer might insist that the turtle theory does explain things. For example, it explains earthquakes: those are caused by the turtle moving. But then why aren’t earthquakes always happening? Why do they happen in specific places? What are volcanoes caused by? Why aren’t there periodic floods when the turtle goes for a swim? And so on. Just because a theory is consistent with some (cherry-picked) data, that doesn’t mean the theory has explanatory power. It must reduce the complexity of the data to have explanatory power.
God is another example of the inverse homunculus fallacy. Supposedly, the concept of God explains various things, such as the existence of the Earth, the existence of life, human nature, rationality, etc. But the concept of God presupposes those things. God is a metaphorical person. He has the properties of a human being. He exists. He is alive. He has desires. He thinks and acts.
The Christian notion of God is analogous to the world-turtle. It tries to explain the whole (the universe) in terms of something that exists within the universe: a person. Of course, the imaginary person of God is BIG in various ways, just as the turtle is big. God doesn’t just think and act. He is omniscient and omnipotent!!
One could use the same trick with the world-turtle, by claiming that the turtle is infinitely big. Then there would be nothing underneath the turtle. It would be “turtle all the way down” instead of “turtles all the way down”. Of course, it still wouldn’t explain anything.
Like turtles, human beings exist on the Earth, and our properties are tied to our way of existence. Mental operations, such as thought, desire and action, are ways of solving the problems of living beings. They are adaptations. It makes no sense to extend them to infinity, or attribute them to a universal being.
If God is omnipotent, then there is no distinction between his will and reality. But will is the desire to change reality. Omnipotence is an absurd notion, or it is just a confusing way to say “causality”. Likewise for omniscience. If God is omniscient, then there is no distinction between his knowledge and reality. But knowledge is limited information about reality, not reality itself. Just as “up” and “down” are only meaningful on the Earth (or some other large body), “knowledge” and “will” are only meaningful for limited beings, such as us.
See To EyesWideOpen for more about the conceptual incoherence of the God-concept.
If we eliminate the human properties of God, exchanging “will” for “causality” and “knowledge” for “existence”, then we have exchanged “God” for “reality”. Any apparent explanatory power of the God-concept comes from conceptual question-begging, through the metaphor of God as a human being.
The cosmological arguments for God involve the inverse homunculus fallacy. They insist that we must posit God to explain reality, but their notion of God presupposes the properties that are “explained”. Just as the world-turtle smuggles in ordinary assumptions about objects and about up and down, God smuggles in ordinary assumptions about reality.
“All is mind” is a more modern, sophisticated version of the world-turtle. Supposedly, the human mind is explained by claiming that everything is mind. This uses the human mind as a metaphor for the whole of reality. Of course, the metaphor smuggles in all the properties of the human mind. The great virtue of this “metaphysical theory” is that it “explains” the human mind. But of course it does no such thing. It simply shifts the problem of explanation to reality as a whole, just as the world-turtle shifts the problem of what holds the world up to what holds the turtle up.
“The universe is a computer (program)” is another example of the inverse homunculus fallacy. It applies the metaphor of a computer to the universe as a whole. But of course a computer is something that exists within the universe. It is a mechanism that operates according to physical laws. So, nothing is explained by positing that the universe is a computer or a computer program.
Subtler versions of these fallacies might be involved in certain scientific theories and problems. For example, the “selfish gene” is a homunculus metaphor, and (as I argued in Debunking the Selfish Gene) a misleading one.
We can’t avoid using metaphors, and metaphors will normally be at the ordinary level of description. We’re often projecting ordinary things “upward” or “downward” to explain things at larger or smaller scales. People can create circular explanations with metaphors, and such errors can be difficult to spot.
Many people (and not just little girls) have been tricked by these fallacies. To avoid them, you have to think carefully about the presuppositions of the concepts that you use.
> “The universe is a computer (program)” is another example of the inverse homunculus fallacy. It applies the metaphor of a computer to the universe as a whole. But of course a computer is something that exists within the universe. It is a mechanism that operates according to physical laws. So, nothing is explained by positing that the universe is a computer or a computer program.
Of course, "world is a simulation" doesn't explain much. That it is not explanatory doesn't mean it's not true (doesn't mean it is, either). It's somewhat unfair to compare it to God or turtles, since people who bring it up usually are not claiming it does.
Inside our very real computers, we do a ton of virtualization nowadays. Computers inside computers inside computers. Also, we run stuff on them, including virtual realities, because there are reasons to do so.
Of course, Occam's razor cuts the computer out of the equation, so it makes no sense to believe it's a simulation without evidence. It's not pointless to think about the possibility, though, for various reasons. One of them is that physics doesn't stop us from, at the very least, hooking a brain's I/O up to a computer. With some more tech progress, we could make "my world is a simulation" true for some person.
We can't really simulate our universe inside our universe... probably. But if we were in the shoes of that person from the previous paragraph, the outer universe would be a complete mystery. It wouldn't need to be similar to ours in any way.
--------
The selfish gene is just an anthropomorphizing analogy. The mechanics of evolution just map well onto the concept. Obviously, genes are just dumb molecules at the end of the day, with no conscious desires.
Anthropomorphizing things sometimes leads to errors (e.g. when thinking about AI, it's easy to anthropomorphize a little (or a lot) too far), but it's not inherently wrong.
I didn't read your book, though, so I'm not saying it's pointless to attack the framework. Maybe something else makes for a better model/framework than using the concept of selfishness.
I didn't really finish these arguments, but I need to go to sleep now.
I've been ruminating on an idea that is closely related to what you covered in this article. Although I believe the general argument of the article is true, I don't think it ultimately solves the problem of infinite regress. First, I'm going to explain my perspective on this question.
All human thinking and organization that has produced any results worth considering has been systematic in nature (science, logic, mathematics, language, law, arguably our very biology, etc.). I define a system as a (potentially arbitrary) section of reality. This section of reality is necessarily founded upon fundamental properties which cannot be accounted for or explained by the system itself; Gödel's incompleteness theorems are a great example of how this general principle can be proven. From this base, operations between the properties can produce forms which may continue to operate with each other until an incredibly complex state is eventually achieved. Since this system is a section of reality, the fundamental properties must be explained in relation to other systems in reality which have properties not entirely shared by the system they yield; otherwise there would be no distinction between these systems, and they would simply dissipate into an absolute reality that we could not even begin to quantify or understand. This observation is often referred to as "emergence," but I tend to avoid using that term since too many people assume it seeks to explain a mechanism instead of describing initial and final states. Trying to use the lens of chemistry to explain its own fundamental properties, such as protons, without making reference to other systems such as quantum physics (where the answer lies in quarks), would therefore be absurd.
The problem arises when we try to figure out whether this use of systems to model reality can ever arrive at a final and unassailable answer. Specifically, if all systems must ultimately be justified in terms of other systems, how can we ever have a total explanation of everything? If we treat all of reality as a system, which I believe is unavoidable, then we would need to justify it in terms of something outside of everything. The contradiction here is quite obvious, and it harks back to the classic "what came before God?" argument, since we will constantly need to explain a foundation of belief in terms of something else, which has a foundation that needs to be explained, and so on and so forth.
Human civilization has continued for thousands of years after Agrippa's death, but it still hasn't managed to solve his Trilemma. A potential argument, given this outline, is that the notions of systems and properties are themselves just arbitrary human classifications. However, this simply dismisses the Trilemma instead of solving it. Additionally, using logic, language, and even thought itself to dismiss the Trilemma is bound to fail, since these human tools are themselves subject to the same systematic nature that I explained before, which always arrives at this seemingly unsolvable conundrum: a conundrum that lies at the very base of the systems you try to use to deny it.
These concerns may be unnecessary for the current time. There is still so much work to be done in exploring the systems we have now, while also establishing a greater link between them. Not only do we need to gain a better understanding of disciplines such as genetics and quantum physics, but we must also establish a chain of causality so tight and well-modeled that the heritability estimates now produced by models using genes as axioms could instead be produced by models using quarks as the axioms. Obviously we are far from this, but improvement in the world of molecular biology is one step in the right direction. All living and thinking seems to be an act of faith to some extent, but it has worked to our advantage so far, so we may as well continue to work with this lack of complete certainty until we arrive at a superior manner of approaching truth, one which we may not even be able to conceive of yet.