We unconsciously construct mental models of the world, and these models aid our thinking.
This idea is not new. In fact, in 1943 Kenneth Craik proposed that thinking is the manipulation of internal representations of the world in his book The Nature of Explanation.
“This deceptively simple notion,” argues Philip Johnson-Laird in Mental Models, “has rarely been taken sufficiently seriously by psychologists, particularly by those studying language and thought.”
They certainly argue that there are mental representations — images, or strings of symbols — and that information in them is processed by the mind; but they ignore a crucial issue: what it is that makes a mental entity a representation of something. In consequence, psychological theories of meaning almost invariably fail to deal satisfactorily with referential phenomena. A similar neglect of the subtleties of mental representation has led to psychological theories of reasoning that almost invariably assume, either explicitly or implicitly, the existence of a mental logic.
Explanation depends on understanding. If you don't understand something, you cannot explain it. But what is explanation? “It is easier to give criteria for what counts as understanding than to capture its essence — perhaps because it has no essence,” writes Johnson-Laird.
This will no doubt strike many of you as fuzzy. Justice Stewart found it impossible to formulate a test for obscenity but nevertheless asserted, “I know it when I see it.” We can do the same, in an inexact yet useful way, when it comes to explanations.
Explanations certainly require knowledge and understanding. Johnson-Laird writes:
If you know what causes a phenomenon, what results from it, how to influence, control, initiate, or prevent it, how it relates to other states of affairs or how it resembles them, how to predict its onset and course, what its internal or underlying “structure” is, then to some extent you understand it.
The psychological core of understanding, I shall assume, consists in your having a “working model” of the phenomenon in your mind. If you understand inflation, a mathematical proof, the way a computer works, DNA or a divorce, then you have a mental representation that serves as a model of an entity in much the same way as, say, a clock functions as a model of the earth's rotation.
This is where Kenneth Craik comes into the picture. His 1943 book The Nature of Explanation was one of the first, if not the first, to propose that human beings think by manipulating internal representations of the world. This manipulation — or reasoning — involves three distinct processes:
1. A translation of some external process into an internal representation in terms of words, numbers, or other symbols.
2. The derivation of other symbols from them by some sort of inferential process.
3. A re-translation of these symbols into actions, or at least a recognition of the correspondence between these symbols and external events, as in realizing that a prediction is fulfilled.
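Craik's three stages can be made concrete with a toy sketch. The example below is purely illustrative and not from the source: it uses a made-up bridge-load rule as the internal model, with hypothetical function names and thresholds.

```python
# A toy sketch of Craik's three stages of reasoning, using an
# illustrative (hypothetical) bridge-load example.

def translate(beam_length_m, load_kg):
    """Stage 1: translate an external situation into internal symbols."""
    return {"length": beam_length_m, "load": load_kg}

def infer(model):
    """Stage 2: derive new symbols by an inferential process.
    Here a crude, made-up rule: stress grows with length * load."""
    stress = model["length"] * model["load"]
    return {"stress": stress, "holds": stress < 10_000}

def retranslate(prediction):
    """Stage 3: map the derived symbols back onto action."""
    return "cross the bridge" if prediction["holds"] else "do not cross"

model = translate(beam_length_m=4, load_kg=2_000)
prediction = infer(model)
print(retranslate(prediction))  # -> cross the bridge
```

The point, as Craik notes below, is that the conclusion is reached without building any bridge: the symbols are manipulated in place of the physical process.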
In The Nature of Explanation Craik writes this beautiful passage:
One other point is clear; this process of reasoning has produced a final result similar to that which might have been reached by causing the actual physical processes to occur (e.g. building the bridge haphazard and measuring its strength or compounding certain chemicals and seeing what happened); but it is also clear that this is not what has happened; the man's mind does not contain a material bridge or the required chemicals. Surely, however, this process of prediction is not unique to minds, though no doubt it is hard to imitate the flexibility and versatility of mental prediction. A calculating machine, an anti-aircraft ‘predictor', and Kelvin's tidal predictor all show the same ability. In all these latter cases, the physical process which it is desired to predict is imitated by some mechanical device or model which is cheaper, or quicker, or more convenient in operation. Here we have a very close parallel to our three stages of reasoning – the ‘translation' of the external processes into their representatives (positions of gears, etc.) in the model; the arrival at other positions of gears, etc., by mechanical processes in the instrument; and finally, the retranslation of these into physical processes of the original type.
By a model we thus mean any physical or chemical system, which has a similar relation-structure to that of the process it imitates. By relation-structure I do not mean some obscure non-physical entity which attends the model, but the fact that it is a physical working model which works in the same way as the process it parallels, in the aspects under consideration at any moment. Thus, the model need not resemble the real object pictorially; Kelvin's tide predictor, which consists of a number of pulleys on levers, does not resemble a tide in appearance, but it works in the same way in certain essential respects – it combines oscillations of various frequencies so as to produce an oscillation which closely resembles in amplitude at each moment the variation in tide level at any place. …
My hypothesis then is that thought models, or parallels, reality — that its essential feature is not ‘the mind', ‘the self', ‘sense-data’ nor propositions but symbolism, and that this symbolism is largely of the same kind as that which is familiar to us in mechanical devices which aid thought and calculation. …
If the organism carries a ‘small-scale model' of external reality and of its own possible actions within its head, it is able to try out various alternatives, conclude which is the best of them, react to future situations before they arise, utilize the knowledge of past events in dealing with the present and future, and in every way to react in a much fuller, safer, and more competent manner to the emergencies which face it. Most of the greatest advances of modern technology have been instruments, which extended the scope of our sense-organs, our brains or our limbs. Such are telescopes and microscopes, wireless, calculating machines, typewriters, motor cars, ships and aeroplanes. Is it not possible, therefore, that our brains themselves utilize comparable mechanisms to achieve the same ends and that these mechanisms can parallel phenomena in the external world as a calculating machine can parallel the development of strains in a bridge?
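Kelvin's tide predictor, which Craik invokes above, worked by mechanically summing oscillations of various frequencies. The same superposition is easy to sketch in code. The constituent amplitudes, periods, and phases below are made-up placeholders (real tidal constituents are site-specific), so treat this as an illustration of the principle, not a working tide table.

```python
import math

# Each constituent: (amplitude_m, angular_frequency_rad_per_hr, phase_rad).
# These numbers are illustrative only, not real tidal constants.
CONSTITUENTS = [
    (1.2, 2 * math.pi / 12.42, 0.0),  # roughly the M2 (principal lunar) period
    (0.5, 2 * math.pi / 12.00, 1.0),  # roughly the S2 (principal solar) period
    (0.2, 2 * math.pi / 25.82, 2.3),  # roughly the O1 (lunar diurnal) period
]

def tide_level(t_hours):
    """Superpose the oscillations, as Kelvin's pulleys did mechanically."""
    return sum(a * math.cos(w * t_hours + p) for a, w, p in CONSTITUENTS)

# Predict the water level each hour for a day.
levels = [tide_level(t) for t in range(24)]
```

Like the brass machine, the program never touches water: the model shares the relation-structure of the tide (summed oscillations), not its appearance.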
Small models of reality need neither be wholly accurate nor correspond completely with what they model to be useful. Your model of an iPhone may contain only the idea of a rectangle that serves multiple functions, such as sending and receiving data, running apps, and displaying moving pictures with accompanying sound. Alternatively, it may consist of an understanding of the programming necessary to make the device work, the protocols, the physical limitations, and how the display actually functions, in which case you've eclipsed me. Your model may be deeper still, into the hardware and how it works, etc. A person who repairs iPhones is likely to have a more comprehensive model of them than someone who only operates one. The engineers at Apple are likely to have a richer model than most of us.
What must be questioned now is whether adding information increases the usefulness of the model. If I explain how operating systems and APIs work, you will have a much richer model of an iPhone. For some of you that will mean a more useful model; for others it will not.
“Many of the models in people's minds are,” Johnson-Laird writes, “little more than high-grade simulations, but they are none the less useful provided that the picture is accurate; all representations of physical phenomena necessarily contain an element of simulation.”
So the nature of an explanation is to understand something – to have a working model of it. All explanations are incomplete because at some point they all must take something for granted. When you explain something to another person, “what is conveyed is a blueprint for the construction of a working model.”
“Obviously, a satisfactory blueprint for one individual may be grossly inadequate for another, since any set of instructions demands the knowledge and ability to understand them. … In most domains of expertise, there is a consensus about what counts as a satisfactory explanation — a consensus based on common knowledge and formulable in public criteria.”