❧ 1. The Tyranny of the Pointless Details. Or, Some of My Best Friends are Pseudo-Scholars
❧ 2. Boredom as Pseudo-Scholarship
❧ 3. Pseudo-Scholarship as Cowardice
❧ 4. Cowardice Masked as Pomposity and Hostility
❧ 5. Cowardice Masked as Dedication and Loyalty
❧ 6. Cowardice Masked as Humility
❧ 7. Cowardice Masked as Responsibility
❧ 8. A Favorite Sport: Kicking Colleagues While They Are Lying Down
❧ 9. The Natural History of Intellectual Courage
❧ 10. Intellectual Courage in Education and in Research
We have come down to business; so roll up your sleeves and settle down comfortably. We are in for some vaguely systematic study, so if you find here something open to criticism or otherwise thought-provoking, jot down a note about it in your note-book or on your writing-pad—it is always advisable to have pencil and paper ready when engaged in serious work (as all those who are mathematically trained know very well)—and push on: you should return to it later.
First, let us examine—as a possible cause of a serious intellectual malady—the widespread philosophical repudiation of sweeping generalizations as dangerous in their very appeal. It is the philosophy that advocates detailed, dry, scholarly, serious attention to boring detail. This idea renders the most delightful activity, which is study, into a depressing, repulsive, nerve-racking chore. This is not in the least to declare all social maladies, even all ills of Academe, to have the same cause, of course; but it is to contradict some very popular social doctrines that entail a non-specific nosology of one sort or another.
“Nosology” is Greek for the theory of disease. Its non-specific versions share one characteristic: each ascribes one cause to all ills. In almost all cultures, the dominant nosological view is non-specific; it is usually some doctrine of imbalance, of lack of harmony between different parts or functions of the body. In the West, the influence of Greek medicine made non-specific nosology the widely accepted fashion. Hippocrates and Galen and all that. The eighteenth century saw the last fireworks of non-specific nosology, with a number of detailed theories of imbalance following each other in quick succession. In a sense even Louis Pasteur, “the” father of modern medicine, advocated a non-specific nosology: he knew that different kinds of bugs cause different kinds of illness, and in this way he deviated from non-specific nosology somewhat; yet his germ-theory of disease, his view that the cause of all illnesses is external (usually parasites of one sort or another), is rather non-specific. (In his view, the body is usually in sufficient balance to take care of itself, unless an outside factor interferes with that balance. This is how Claude Bernard understood Pasteur. He said, this theory is insufficient, since it does not help differentiate between the case in which the body overcomes the invader and the case in which there is a struggle between them. Pasteur had to acknowledge this: in response to Bernard he recognized the contribution of the body’s poor constitution to illness.) Pasteur’s germ theory was the last non-specific nosology in the Western medical tradition. Even though Robert Koch and John Hughlings Jackson refuted his doctrine at once, it won and maintained popularity because it was non-specific and because it engendered a powerful, fruitful scientific research program. (One of the last followers of the Pasteur research tradition was Saul Adler, FRS, who saved my life when, as an infant, I was struck by a tropical disease.)
It is hard to imagine a new non-specific nosology arising in this day and age, when so much is known of great varieties of parasites, of congenital (inborn) diseases, of auto-immunization (immunization against one’s own organs, leading to a kind of civil war) and allergies (over-immunization, a kind of McCarthy-style hysteria in the body’s defense system), of other ills of over-defensiveness (like pneumonia, where the lungs are flooded with fluid in defense against infection, so that patients risk choking to death), and of the simple deterioration of organs from over-work (the liver, the kidney, and perhaps even the brain). Yet this may be merely the absence of a good theory of health, the lack of imagination and of powers of synthesizing diverse elements into one idea. The history of thought should make us wary of arguments from our own lack of imagination and restore our faith in human resourcefulness or our hope that it will keep growing.
This is of general importance: sheer dogmatism and intellectual cowardice stand behind the opposition to a bold and brilliant synthesis—not that dogmatism and cowardice are essentially different: they usually intertwine—and then, after the synthesis has gained currency, they may favor it for the very same reason. (Gertrude Stein went so far as to say, the establishment recognizes a new idea only in order to block recognition of a newer one. This is dazzling but not always true.) Two aspects of proper methodology defy popular attitudes to novelty. First, popular prejudice today opposes adherence to any doctrine that is vulnerable to criticism. Second, intellectual cowardice is seldom noted, and hardly ever as a defect. The exceptions are cases where, due to very special circumstances, intellectual cowardice leads to dogmatism and to other qualities that happen to be objectionable regardless of the question of intellectual courage altogether. Let me expand a little.
Popular prejudice tends to oppose any brilliant synthesis, especially those that commentators have already rejected and condemned off-hand. Even those who have originated an idea and advocated it before it was found empirically wanting tend to belittle it after it has been refuted. This is a tenaciously unhistorical, condescending philosophy. Oddly, there is something admirable and bold in it: a theory that is objectionable today is objectively objectionable; it is objectionable regardless of knowledge or ignorance of the objections to it: proper objections are as timeless as truth and falsity are. Take the theory that all swans are white; the truth that some swans are black refutes it, independently of our knowledge or ignorance of the facts. Whether anyone had ever observed any black swan or not, the objectivity of the existence of black swans and of its conflict with the theory that all swans are white makes that hypothesis as false now as it ever was. Nevertheless, to censure those who declared all swans white on this ground is to censure them for not having travelled to Australia in search of black swans: it is unfair, condescending, and unhistorical. Hence, objection or criticism is not the same as hostility or censure: we may criticize objects of our admiration and still admire them. This became clear when Einstein’s deviations from Newton became standard (Popper). Einstein explained his (correct, for all we know) view that Newton is the greatest scientist of all time.
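The logic of refutation at work in the swan example is simple enough to sketch in code: a universal claim of the form "all swans are white" is refuted by a single counterexample, whenever and wherever it happens to be observed. (This is a minimal illustrative sketch; the little `refutes` helper and the sample records are hypothetical, not drawn from any library.)

```python
# A universal claim "all x are P" is refuted by one observed x that is not P.
def refutes(universal_claim, observation):
    """True iff this single observation contradicts the universal claim."""
    return not universal_claim(observation)

# The claim under test: every swan is white.
all_swans_are_white = lambda swan: swan["color"] == "white"

european_swan = {"species": "swan", "color": "white"}
australian_swan = {"species": "swan", "color": "black"}  # the black swan of Australia

print(refutes(all_swans_are_white, european_swan))    # a white swan does not refute the claim
print(refutes(all_swans_are_white, australian_swan))  # one black swan suffices to refute it
```

Note that `refutes` consults only the claim and the observation, never the date or place of the observation: refutation is timeless in exactly the sense the text gives.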
All this deserves much study and elaboration; it offers cures to endless academic agonies of a great variety. The leading sociologist Max Weber said, a researcher who poses a hypothesis should feel the risk of being struck by lightning in case it is false. How much nicer was the declaration of Heinrich Heine that he would always retain his right to admit error! Popular prejudice considers censurable the advocacy of an objectionable theory. Consequently, serious people who consider a given theory true may find very disturbing any objection to it. They may think that it amounts to censuring them. The critics may hold the popular prejudice and indeed censure openly the advocates of the theory that they criticize. Balanced people may then simply dislike the injustice of the censure; in that case, they may say that the critics grossly exaggerate, because the censure is exaggerated and because the critics confusedly identify criticism with censure. Or they may claim that the censure is just but that it aims at the wrong target. Unbalanced people may open a venomous counter-offensive out of the pain of feeling both the justice and the injustice of the critics’ comment, and out of their inability to sort these out—perhaps in ambivalence.
Strangely enough, many who adhere to the popular prejudice in question are well aware of this kind of trouble. They even consider this kind of trouble inevitable and therefore condemn the origination of any brilliant synthesis as a cause of much trouble that is best avoided—even by suppression: they are ready to suppress controversial papers. Edward Jenner’s great work on inoculation was rejected by the Royal Society as too controversial (indeed it was argumentative in character)—for his own good, of course.
Consider the table of chemical elements of John Newlands (1864), his law of octaves, and the table of Dmitri Mendeleev (1869). Why do we ascribe the table to the latter rather than to the former? The staple answer is, the former is inaccurate. So is the latter, although it is more accurate than the former. There are earlier versions of the table of elements—of the seventeenth and eighteenth centuries; the latest table is much more recent. Indeed, before the discovery of the neutron (1932) the table had no proper theoretical basis. (Since then, an element is characterized by both its atomic number, that is, the number of protons in each of its atoms, and its atomic weight, that is, the number of its protons plus neutrons.) Under the influence of Einstein’s methodology, a time-series of such tables shows how the criticism of an earlier version led to the later one. There is no room in this methodological view for taking criticism personally—as censure or as anything personal other than gratitude for the contribution to the advancement of science.
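The parenthetical characterization can be put as one line of arithmetic: the atomic number counts protons, and the "atomic weight" of the text (strictly, the mass number) counts protons plus neutrons. A minimal sketch, using the standard proton and neutron counts for two isotopes of carbon (the helper function itself is, of course, hypothetical):

```python
# Sketch of the characterization in the text:
# atomic number = number of protons; mass number = protons + neutrons.
def mass_number(protons: int, neutrons: int) -> int:
    return protons + neutrons

# Two isotopes of carbon share atomic number 6 but differ in neutron count:
print(mass_number(6, 6))  # carbon-12
print(mass_number(6, 8))  # carbon-14
```

Isotopes are exactly what this distinction captures: same atomic number, hence the same place in the table, yet different mass numbers.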
Alas! The most eloquent fact about facts is that facts are silent, or, if you prefer, that they tell different people different stories. Herbert Butterfield, the famous Cambridge historian, said, “History is a whore and a harlot”: historical records offer support and comfort to all sorts of viewpoints by their various details. Obviously, a fact may often enough suggest a theory in a manner rather objective in the sense that it is intersubjective (Kant), namely, that various observers will interpret the fact in a rather uniform manner. Now this objectivity or quasi-objectivity is not unique, and at times, however rarely, two quasi-objective readings are satisfactory. Strange! Some take this as a great defect and others as a great opportunity. Because of this absence of uniqueness we cannot speak of induction in any precise sense, yet we may still speak loosely of induction of sorts; the logic of it, however, is not that it settles controversy but, on the contrary, that it stimulates controversy. Inductivist thinkers like the great physicist Max Born replace the old adage, science allows for no controversy, with the adage, scientific controversies are short-lived. This may be misleading, since scientific researchers may harbor metaphysical disagreements for centuries. (The metaphysical dispute between followers of Descartes and followers of Newton that began in the late seventeenth century was alive and very influential in research until the early twentieth century, when Einstein rendered both parties obsolete.)
The ability of a controversy to survive for centuries displeased inductivist thinkers like Born, since controversy blocks the ability of science to impose consensus. It is valuable for the progress of science, though: controversy is the best way to raise curiosity. What distinguishes the broadminded from the rest—the narrow-minded, the uninterested, the unimaginative, the cocksure, the dogmatist, the coward, the parochial and all the rest of them—is obvious: in different frameworks, the same fact hints at very different theories. Facts may occasionally speak unequivocally against some brilliant synthesis; the curious may then feel the need for new ideas. This, briefly, is Popper’s philosophy of science, and a brilliant synthesis it is, thought-provoking and useful, although in need of slight alterations. As facts cannot suggest a framework, they cannot suggest a theory: at a rare moment, a framework plus a fact may just do that. Therefore, when we invent a new brilliant synthesis we can start a new exploration of the facts. This is a paradigm-shift (Kuhn): facts alone will not do this for us.
Syntheses concerning social ills, then—you did not think I forgot the point from which I have digressed thus far only twice, did you? If you did, then you should learn to trust me just a little more; syntheses concerning social ills may rest on brilliant ideas, and some of these were progressive in their day. We have now in the offing a theory of socio-nosology that is less non-specific than any non-specific nosology proper. Yet popular prejudice supports, especially in the United States, some version or another of a very important and interesting social philosophy that entails a very important non-specific socio-nosology. This characteristic enables that doctrine to stand in the face of tons and tons of criticism published annually against it. I am speaking of Marxism.
Oddly, lip-service is paid to Marxism in the East and to anti-Marxism in the West; yet in the East, where experiments in Marxism failed systematically and miserably, the people who can still think now think better, whereas in the West Marxism clouds many a judgment.
The other day a fleeting event shocked for a moment many honest, thoughtful citizens all over the free world. Following rumors about peace-talks in Vietnam, stock-exchange prices plunged for a few hours. That was all. The event was transitory; in itself, it had no social, political, economic, or financial significance of any magnitude; nobody was worried about it. What honest and thoughtful citizens were seriously worried about at the time was the event not as a social factor but as a symptom of a very serious illness. Now diagnoses that most people accept are often extremely poor, but the symptom just reported underwent proper diagnosis, however microscopic that diagnosis was. It should indeed have set all alarm-bells ringing. The bare fact was that the price of stock on the New York Stock Exchange fell very slightly for a few hours, presumably as a result of rumors that the Vietnam War was approaching its end. The slightness of the symptom is easily explicable by the weakness of the rumors. Let us admit this explanation. Now the diagnosis was that the Vietnam War and its perpetuation contributed to an economic boom, so that the country was waging war chiefly out of economic interest, neither out of political interest nor out of ideological considerations.
Possibly, the rumor is true. Rumors of peace may cause a momentary panic in any stock exchange, independently of the manifest fact that peace offers better economic opportunity than war. Those who trade on the stock exchange ignore or oppose Marxism, yet they reflect their customers, who may very well be susceptible to Marxist ideology; they may thus have panicked as unreasonably as gamblers with stocks often do. It is hard to impute consistent Marxism to American brokers, or to their customers, or to American intellectuals, or to anybody else; the fact remains: a tinge of Marxism caused a momentary panic in the stock exchange.
Marxist ideology claims that all social causes are at root economic, so that all social ills are to be blamed on the malfunctioning of the economic system. It is an anti-intellectual philosophy, though it draws part of its strength from its being a brilliant synthesis, which is an intellectual quality par excellence. It also draws its strength from its being a kind of objectivist theory in its seemingly total disregard of morality, especially of popular morality, which is possibly (as Marx had stressed) mass-hypocrisy and the vulgar lip-service of conformists. The allegation that popular morality is often popular hypocrisy is well known; the adolescent and the disgruntled much exaggerate it; they are subsequently more susceptible to Marxism than others are. This, again, is paradoxical: Marxism is commended for its a-moralism, which is a kind of anti-hypocrisy and thus a morality proper, just as it enjoys the praise of possessing the intellectual quality of objectivism, in spite of its being anti-intellectual (Raymond Aron, The Opium of the Intellectuals, 1962).
Do not let paradox fool you: when it is merely a paradox because it clashes with popular or private prejudice or because of a clever verbal formulation, paradox may be an asset; otherwise it is to be ignored, as in the case of Marxism: endorse a proper paradox and you have rejected logic, rationality, the very ability to think for yourself.
Apart from paradox, the stress on cynicism and hypocrisy is very much of an adolescent exaggeration: cynicism and hypocrisy are qualities very hard to adhere to, unless one is either a genuine scoundrel or a very superficial person down to the deepest levels of one’s personality. Modern mythology often describes cynics who become ideologically fired with noble feelings when it comes down to the essence of their humanity and who then atone magnificently for all their past cynicism and hypocrisy (The Mouthpiece; Casablanca; Stalag 17; Ride The High Country). Such mythology is very romantic and seldom true (historical novels of André Malraux); the truth is there, all right, but it is seldom romantic.
In his largely autobiographical novel Of Human Bondage, William Somerset Maugham tells a moving and thought-provoking story. The hero is a young English art student lost in a middling art school in Paris. The only two great lights around are phony, though he does not even suspect this. The one is a drunkard who pretends to be an art-critic to make poor art-students pay for his wine while he drinks and chatters. The other is an allegedly high-class successful painter who, for a handsome salary, comes to the art school on rare occasions to drop a comment or a piece of technical advice to a budding painter, to give a touch of class to the school. The young English student has his difficulties; he has his self-doubts; but he hangs on stubbornly. Another student, in a similar though much worse predicament, loses faith and commits suicide. The English student is greatly shaken by this and decides that he must seriously examine his own situation and either make a serious commitment to art or quit immediately. For this, he needs an impartial assessment of his potential as a painter. He goes to his two great lights, and, to his surprise, he gets the very same reaction from both. First, the appointed adviser dishes out run-of-the-mill homilies. The English student makes it clear that he is in earnest. The adviser consequently tries to get rid of him as a pest. He insists. The adviser drops his phony mask, presents himself as a failure, and recommends that the English student get out while the going is good. The advice over, the adviser promptly returns to his normal poise as if the session that had just ended had never happened. The English student understands and leaves.
In this way Maugham offers a profound insight into the run-of-the-mill hypocrite and cynic. They are weak, but they sincerely and fearfully wish to remain harmless; they prefer to drop their mask for a very brief while and be useful rather than do harm, but only reluctantly: in their weakness, they prefer to remain inconsequential.
There is in this a disregard for weakness and for the bent to remain inconsequential; it is a tribute to the exuberance of youth. My own youthful infatuation with Marxism, truth to tell, rested less on my ability to examine it than on my inability to assess the weakness of my philosophy teachers, with whom I wished to discuss matters of consequence. They did not find my youthful flirtation with Marxism shocking; I suppose their appraisal of it as transitory was correct; otherwise, they would have made an effort to check my rash, adolescent manner.
Some of my best friends and colleagues are like this: nice, inconsequential scholars, but able to rise to an occasion and stand by the side of a colleague or a student when the system lashes them with more hardship than their fair share and more than they can take. (Hence, often enough the apparent indifference of your elders and betters to your agonies is a kind of compliment: they deem you able to survive them.) Moreover, their very weakness rests not so much on cynicism or hypocrisy as on a theory of scholarship and research that is dogmatic and cowardly. Even the cynics among them are often less cynical in significant matters than they would care to admit. Most of them, however, hold a poor rationale for their pseudo-scholarship or else lack interest in research. Whatever their rationale is, it plays an enormous role in their lives. Attack it, and they feel as deeply hurt as they could possibly be in impersonal matters like this: it is very much like violently attacking the doctrines of Marxism in front of an honest, dedicated Marxist of long standing.
Any significant study of scholarship and research should help distinguish between the genuine and the fake. Current studies fail at this, as they declare all punctilious research real. This is often false. This dismissal may upset some, namely, those who wish to suppress the doubt that they harbor deep down in their hearts: we all meet with occasions to question the theory that genuine science and genuine scholarship rest on an abundance of detail. It is our old acquaintance, inductivism. The possibility that science does not rest on facts sounds somewhat irrational and frightening.
Let me take the claim for the rationality of inductivism first, and discuss later the fear that it is false. The first item, rationality, is the standard topic of discussion in the philosophy of science; the second item, fear, philosophers prefer to overlook, perhaps for fear of recognizing fear.
People somewhat familiar with philosophy who hear the claim that science does not rest on detailed facts may hear it as the claim that science rests (not on facts but) on intuition or a priori reasoning or inspiration or whatever else one may find within oneself to rely on. The tradition of Western philosophy includes disturbing memories of bitter disappointments in every kind of reliance. Since intuition is disappointing but science is successful, the argument goes, of necessity science rests not on intuition. Hence, it rests on experience. The insistence that science does not rest on experience either obviously clashes with this argument. The insistence that both intuition and evidence are at times disappointing seems in conflict with the acknowledgement that science is a success. It looks impossible.
The fact that some serious thinkers took both intuition and information seriously raises the question, what did they do when the two clashed? Worse, the possibility of such a clash prevents using either as a foundation. People who face this option usually conclude from it that science is a failure. Yet no one denies the success of science-based technology. This is why this success of science-based technology is often considered dangerous for the spirit, irrelevant to anything spiritual, irrelevant to serious questions. This is obscurantism, yet the disappointment in science did bring to it some scientifically minded people. Even answers to simple questions such as, is matter atomic or continuous, they say, can no longer be expected to come from science, which has disappointed us by switching repeatedly between alternatives.
You cannot dismiss all this. Among competent philosophers, the situation is more complex, but it amounts to the same. First, basing science on intuition has no advocates these days. Second, although the majority opinion is that science rests on evidence, it is not unanimous. Most dissenters hold that scientific theories are only tools: they are mathematical formulas accepted by mere convention; they are sets of drawers in which empirical information is neatly classified. These are the conventionalists or the instrumentalists. Their theory is one that those hostile to science gladly endorse. This, then, is an odd, unintended alliance between the philosophy of science and irrationalism. It prevents any sense of proportion and invites utter freedom either to take a piece of deviant information sufficiently seriously as to replace the system with another or to add a new drawer for it ad hoc. Here is a famous ad hoc verbal innovation that took place in the second half of the twentieth century (Kuhn): call the system a paradigm and a change of system a paradigm-shift, and you have a new philosophy. This verbal innovation raises the question: since paradigm-shifts are ad hoc, who declares them? Answer: the scientific leadership. Question: who are these? Answer: those who declare a paradigm-shift. Question: how can they do so? Answer: they fire any professor who does not obey them. Farewell to academic freedom!
What is common to inductivism (the theory that science rests on experience) and conventionalism (the theory that science rests on mathematics) is that both reject the Kuhn-style tyranny of the administration (tacitly but firmly); instead, they sanctify the tyranny of the pointless detail. They justify science by justifying a specific form of pseudo-scholarship. This move appeals to scholars in the arts: they too can pile up details galore. No wonder, then, that some of my best friends are pseudo-scholars.
Science needs no justification; it has no foundation whatsoever, rational or empirical; and yet science is an important intellectual achievement with its own traditions and conventions. Usually, when I present such a view to my students they require that I justify it. They do not demand this of other traditions. This makes some sense nonetheless, since they consider only the scientific tradition rational. This is why such great rationalist lights as economist Frank Knight and physicist Erwin Schrödinger have declared that science has an irrational component.
People who consider the justification or the grounding of a scientific theory important—regardless of whether they deem the value of scientific activity intellectual or technological—are required, by their own lights, to provide a justification or a grounding for their very demand for it. That you cannot justify justificationism without begging the question was known in antiquity, and preserved in the works of Sextus Empiricus, one of the minor ancient skeptics. His well-known texts influenced David Hume, who shook Western philosophy. I do not know how many times this has to surprise us before it will stay firmly in public memory. For now, repeatedly, justificationism requires its own justification, and we rediscover that it cannot fulfill this requirement without circularity. By contrast, evidently, non-justificationism does not suffer from this obligation, yet justificationists require of its adherents that they justify their position: they repeatedly require that the non-justificationists do what they are not obliged to do, while they themselves have to do it and are unable to: they cannot deliver the same goods that they require from non-justificationists. The irony of the situation is just too unusual, and it may bear a further glance—preferably an ironical one (William Bartley). What one can and should require of non-justificationists is that they tell us how they distinguish the rational from the irrational (Popper). Moreover, the non-justificationists can call their view a new theory of justification (Russell). So let us take it seriously.
When I tell my students that science cannot be justified, by an appeal to experience or otherwise, they look at me most incredulously. They can scarcely believe that I am doing more than trying to stimulate them, and perhaps merely trying to pull their philosophically unsure legs. Do you think, they say, that when I slam the brakes of my car I have no right to expect it to come to a sudden halt? They are serious.
How revealing. It is in very poor psychological taste, and so very unimaginative. They might just as well ask, do you think that when I slam the brakes of my car I have just as much right to expect that I shall next find myself on the moon? Still better, they might ask, do you not expect to be here until the bell rings rather than to find yourself on the moon within one minute? They do not ask this. They are not bothered by my presence in the classroom (since they do expect the bell to ring on time and free them of my company); the possibility that they may disappear from mother-earth at any moment has not occurred to them, partly because most of my students are ignorant of even elementary physics (even Galileo and Newton knew enough to discuss this very possibility); but the fallibility of their brakes does bother them, and with ample justification: analyze their question and you will see at once that they believe that the laws of inertia and of friction and of impact and of pain due to impact are all unfailing, but that their cars’ brakes can fail. Inasmuch as experience justifies our expectations, and it surely does in more than one important sense, experience justifies just what their question indicates; yet they put their question in a manner that implies that experience reassures them that they may trust the brakes of their cars! Is that serious? Surely not.
Experience says, we have far too many avoidable road accidents; it says, some of these are due to failure of brakes. It is less the assurance that brakes are reliable than the wish to have them more reliable that stands behind the incredulous criticism, drawn from the use of brakes, of the view that science offers no guarantee. Advertisements of all sorts repeatedly assure us that science guarantees what they recommend. What is this guarantee? It is no good saying it is all sham. We know that in advanced countries there are laws requiring truth in advertising and that these laws are at times applied; not sufficiently, we grumble, thereby agreeing that there is (or should be) some substance to the claim for science-based guarantees. What then is this guarantee? This is the problem of induction as applied to advertising.
Except that it is misapplied. That science guarantees is not in question; what philosophers question is the guarantee for it, the guarantee for the guarantee. We know that insurance companies insure, and we know that at times we are insured and deserve compensation yet the insurer fails to pay; at times it may even go bankrupt. A feeling prevailed in the West early in the twentieth century that science had gone bankrupt.
The requirement in the United States to have good dual brakes in all cars on the road was legislated much later than in Europe. The American motorcar industry required it in order to compete with the European industry. This is insufficient. The manufacturing of motorcars should be subject to stricter safety regulations; they should make cars as safe as elevators, planes, and other potentially dangerous instruments. The industry will gladly put on all cars dual and even triple and quadruple brakes—their cost is negligible—if this would boost sales; but experience shows them that the less the public thinks about such matters, the better for their business. Think! It may hurt for a short while, but you will be surprised how small the pain is and how great the relief from the fears and ambivalences and confusions. Confusion hurts at least as much, only the confused are too confused to notice the fact and reconsider their preference for confusion over clear thinking.
Can one admit all this commonsense discussion and still deny that science is a guarantee? I do not know, but please note that the safety of our instruments involves competition, that it is the market mechanism, no less, that has increased the life expectancy of citizens of the modern world. Hence, for science to provide any guarantee, it needs a guarantee not only that what physical science has to tell us about brakes is real but also, and no less so, that the current scientific-technological system is stable, which includes the need for relatively stable free markets. Impressive.
It seems experience shows that at times a confused system is better than a thinking system—even for scientifically led societies (Asimov, Foundation Trilogy, 1942-53). Fortunately, on this point experience is misleading. A. P. Usher, the Harvard historian of technology, has shown (1929) that certain bottlenecks develop in technology through short-term preference. For instance, mechanical looms of simple kinds have the woven material placed vertically or horizontally. There are short-term preferences for a vertical loom that obscure the long-term advantage of the horizontal loom. That the same is obvious in economics, even in market research, hardly needs evidence. Except that the motor industry is doing too well to have exploratory drives; and all too often the incentives for competition are insufficient.
So we all drive cars that are still unnecessarily risky (Ralph Nader, Unsafe at Any Speed, 1965), and instead of asking technology and legislation and social and economic planning to improve matters we rely on science not in concrete terms but in general—we rely on science as a faith-surrogate, and the name for that faith-surrogate is induction.
Proof: laws regulate not only truth-in-advertising but also tests. Laws regarding tests ought to be specific. The specifications of tests often have serious loopholes. These allow for disasters, and disasters are incentives for the improvement of laws. The choice we have is between justification and efforts at improvement. The rational preference is for improvement. Except that there is no guarantee for improvement. We can only try. For this, legislation may create great incentives (the Daubert standard).
A famous philosophical adage says, “ought implies can”. It means, do not demand the impossible. It seems so obvious that in discussions of it philosophers ignore the fact that traditional western education conflicts with it: educators demand the impossible in the hope of obtaining the maximum. This hope has an obvious empirical refutation: demanding the maximum puts on pressure that causes much agony and creates antagonism. The adage in question is liberal, and liberalism obtains more by mobilizing the good will of the citizen. In particular, educators kill the love of learning systematically by demanding the impossible.
The adage does not go far enough. To comply with it one has to alter all commands by replacing in them “do” with “try to do”. This change is sensible. It is still better to replace “do” with “try to do but do not overdo your effort; always remember that your assets are limited” (Moshé Feldenkrais). This does not ensure improvement, but we do not know better. The obstacle to it is the preference of so many people for justification over improvement: it is their faith in science. Although better than voodoo, faith in science is still unscientific, and in the same way.
If science does anything regarding faith, it is in the negative direction. For thousands of years people had an unquestioning faith that the sun will rise tomorrow; whatever else may happen, this old faithful is sure to rise tomorrow on the clock. Indeed, it was, and still is, one of the most relied-on clocks we have. And then came science that murdered Apollo or Sol or whatever else you can call the old faithful, and fears arose; and then came the very highest of all high priests of science, Pierre Simon Marquis de Laplace, and proved that we may rely on the sun to rise tomorrow, not so much because the sun is faithful as because experience is. The chance that the sun will rise tomorrow is not absolute certainty, but as close to it as matters. Experience informs us about 3000 years of sunrise (Laplace knew very well that it did so long before, but he was speaking of quasi-reports); for 365 x 3000 days or so it did, and so the chance that it will do so tomorrow is 3000 x 365 divided by (3000 x 365 + 1), which number is closer to 1 than a thousand dollars is to a thousand dollars and one penny. Dear Laplace; he tried hard but failed. His inductive reasoning was questionable and his conclusion meager. The ideas on which he based his conclusions were developed largely by one of his immediate predecessors, an English mathematician by the name of Thomas Bayes, who preferred to leave them unpublished as long as he could, perhaps because he hesitated about his ideas. Laplace misapplied Bayes’ rule, as do all the philosophers who follow it to this day: the rule is mathematically valid, but only under conditions that do not hold for the forecast about tomorrow’s sunrise. Moreover, it is little comfort to know that the sun will rise tomorrow: will it rise next week? Next year? Laplace tackled this question too, more successfully he thought, but we do not share his appraisal.
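Laplace’s arithmetic is easy to reproduce. Here is a minimal sketch, assuming his rule of succession, which properly yields (n + 1)/(n + 2) after n sunrises in n days—slightly more careful than the rough n/(n + 1); the 3000-year figure is his:

```python
from fractions import Fraction

# Laplace's rule of succession: after observing n successes in n trials,
# the probability of success on the next trial is (n + 1) / (n + 2).
def rule_of_succession(successes: int, trials: int) -> Fraction:
    return Fraction(successes + 1, trials + 2)

n = 3000 * 365  # roughly 3000 years of reported daily sunrises
p = rule_of_succession(n, n)
print(float(p))  # close to 1, yet short of certainty: about 0.9999991
```

The exactness of the fraction is beside the point; what matters is that the number, however close to 1, is not 1.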
He used Newtonian mechanics—which rests on experience, of course—to prove the stability of the solar system. He assumed quite arbitrarily that nowhere in the universe is there an object sufficiently large and directed sufficiently towards the solar system so that when it arrives here it will throw everything apart. Laplace made other simplifying assumptions that are by now also passé. Moreover, Laplace was sure that Newton’s mechanics is exactly in accord with the facts; we know that it is not. Also, meanwhile the devilish thought has occurred to some scientists that the source of the sun’s enormous energy is nuclear in origin; from this it follows that there is a definite chance, no matter how small, that the sun will explode tomorrow, and this chance has hardly changed in the last three millennia, nor will it change in the next three, although it will one day diminish, since if the sun does not explode it will one day implode or cool off and darken.
Of all these possibilities, each looks remote, except the one of outside interference, which we have no means of properly measuring or estimating. The famous astronomer Fred Hoyle has adumbrated this fear in his science fiction novel The Black Cloud. The Black Cloud is a foreign intruder. The cloud turns out, however, to be intelligent and to possess other human qualities. At the end of the novel, the cloud saves humanity from the catastrophe that its presence as a foreign intruder might easily cause. This novel is most revealing of the current scientific atmosphere, of the snobbery of the scientific society and its claim to be near ideal but for the unscientific politicians, in its faith in induction and its powers. Strangely, the Black Cloud’s own psychology is quite interesting, and for those still familiar with the Old Testament it will resemble an old acquaintance; perhaps he is quite intentionally Jehovah of old, the way He appeared before philosophy and scholasticism mellowed His character and magnified His strength into omnipotence. Scratch the inductivist and you will find the True Believer.
To return to the point, we do have expectations, concerning cars, voyages to the moon, and more. The existence of such expectations is a fact; we repeatedly act on them. Are they rational? The answer to this question depends on our theory of rationality. Can we prove them (rational = provable)? Certainly not: we all have disappointed expectations. Can we prove them likely (rational = probable)? Not even that; even our very survival does not prove that we are wiser than our childhood and adolescence friends who did not live to match their wits with us today. Can we prove them more reasonable than their precursors (rationality = improvement)? Perhaps; at least we have eschewed many errors of our predecessors. Are we sure that rational = provably improved? I think not.
What about justification of beliefs? Is one’s rationality a justification of one’s beliefs? That depends on our theory of justification. Why do we have to justify our convictions? Because, you may say, we act on them. Are we socially and legally justified in acting always on our convictions? Certainly not; for instance, we must not condemn people except in courts, whatever our personal conviction about them may be. Are we morally, at least, justified in acting on our convictions? Sometimes yes, of course; sometimes we are even bound to do so. But often not: we are morally bound to give people the benefit of the doubt, even to give the opposite opinion a chance; we are morally bound to undergo experiences we know will not solve problems just so as to prove our sincerity and let others join us; this is the theme of many a popular movie—but philosophers do not go to the cinema, except in order to escape from philosophy. Questions and problems abound, and the theory of induction is of no avail: we must take each on its own merit and examine it as best we can. This is far from enough. Yet, sadly, it is all we have.
I see that you disagree. You suggest that faith in scientific theory is better justified than faith in superstition. You are right: superstition is pointless and often silly. Philosophers do not ask why we prefer Einstein to superstition; they ask, why do I choose to believe Einstein?
This is hilarious: most of those who ask this question hardly know what Einstein has said: one cannot know what he said without more command of mathematics than most philosophers possess. Obviously, if you cannot know what he said, you cannot believe that it is true. Admittedly, it is less a matter of belief and more of a disposition to believe: you are rightly more disposed to believe a certified physician than a witch doctor. Why? Because certified physicians are less prone to err, and their errors are less likely to be due to gross ignorance and negligence. If you are going to be an academic in the medical branch of Academe, you should know this: the better the education of physicians, the better they function. No assurance of avoidance of all error is reasonable. Most of the physicians who treat us have acquired the defects of the medical schools that they have frequented: they suffer from the tyranny of details.
The theory of induction, the theory that demands submission to the tyranny of minor, pointless details, is the opposite of the theory of the critical attitude. The details of scholarship and the minute experiments reported in the vast and ever-growing literature may indeed be of great intellectual significance. Sometimes they are, though ever so much less than inductivists claim. Moreover, the details to which individuals may devote a great part of their lives may be insignificant yet highly satisfying, be these studies of birds, butterflies, mushrooms or postage stamps. In old university libraries, you can find copies of old books with most enchanting and highly scholarly detailed marginal notes; some of these got published later, some not. The authors of these notes may have had interest only in the details they were commenting on, or they wrote their notes for mere mnemonic reasons; this does not matter. In these days of publication pressure, a number of publications of such notes appear. Yet they are obviously compilations that are outcomes of sheer labor of love, and very moving ones at that. The love of the detail, then, is as pure and touching as any other preference. It is the tyranny of the detail that I am here warning you against. Details may signify, and you may want them even if they do not. Otherwise, try to ignore them; forget them pronto.
Details compiled voluntarily show some consistency of taste—defying any generalization, perhaps, but noticeable nonetheless—as in any painting by Uccello or Breughel or Bosch or even (if you are imaginative enough) by Malevich or Braque or Jackson Pollock. Details compiled under the tyranny of a pretentious academic—scientific or scholarly—discipline are unpalatable, especially to students cramming for exams; they comprise the paradigm of pseudo-scholarship. (The cramming of details for university exams sickened young Albert Einstein to the extent that he left science for a year or two.)
The tyranny of the detail derives from the theory that science gains its authority from submission to the detail. The technique of the tyranny of the detail is the suppression of bold ideas by the requirement that their authors document them profusely, and by the requirement that one should not critically discuss bold ideas, whether with reference to facts, new or old, or by other means; rather, facts should have priority. Admittedly, most of the details present in most introductory texts are very significant; their significance is not obvious, however, because the details come too early: prior to the debates that have led to them. What a shame! This omission makes exciting study terribly boring and oppressive.
Philosophers of science are familiar with the argument known as the paradox of the ravens or the Hempel paradox. Hempel overlooked the scientific convention that considers only generalizations and ignores singular facts unless generalized; he could thus take for granted that “all ravens are black” gains its authority from the observation of instances of it—from the specific blackness of specific ravens—as if it contains no conventional element. He then argued that all observations other than ones of non-black ravens strengthen “all ravens are black”, since it conveys the same information as “all non-blacks are non-ravens”, which has many more instances—such as a white shoe. Rather than conclude that instances of a generalization do not strengthen it—whatever strengthening is—he admitted the odd conclusion of his argument. Everything, he concluded, is relevant to everything. This is indigestible; it invites discussion of what strengthens a statement.
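The equivalence Hempel relied on can be checked mechanically. Here is a toy sketch with made-up objects (the little “world” and its predicates are illustrative, not part of Hempel’s own exposition): the generalization and its contrapositive hold or fail together in any domain, so a white shoe is as much an “instance” of the one as of the other.

```python
# A toy "world" of objects, each either a raven or not, black or not.
world = [
    {"raven": True,  "black": True},   # a black raven
    {"raven": False, "black": True},   # a black cat
    {"raven": False, "black": False},  # a white shoe
]

def all_ravens_black(domain):
    """All ravens are black."""
    return all(x["black"] for x in domain if x["raven"])

def all_nonblack_nonravens(domain):
    """All non-black things are non-ravens (the contrapositive)."""
    return all(not x["raven"] for x in domain if not x["black"])

# The two formulations always agree, whatever the domain contains:
assert all_ravens_black(world) == all_nonblack_nonravens(world)

# Adding another white shoe yields a fresh "instance" of the contrapositive,
# yet intuitively it tells us nothing about ravens: Hempel's paradox.
world.append({"raven": False, "black": False})
assert all_ravens_black(world) == all_nonblack_nonravens(world)
```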
Inductivism was once a great unifying synthesis—a few centuries ago, that is. It was the dream of a brave new scientific world, the dream of Sir Francis Bacon as he expressed it in his (unfinished?) posthumous New Atlantis (1627). It was partly a myth, partly a theory of science proper, and thus appealed to both the medieval and the modern in the middle period of great transition. The way to knowledge is simple and easy; it is not worth it to stick to dogma in the face of facts. Worship Mother Nature by paying attention to Her smallest features rather than put Her in the chains of your preconceived opinion, said Bacon, and She will voluntarily show you Her charms; you will achieve thus more than you could ever dream of achieving by enslaving Her.
The New Atlantis is the home of a research institute—Solomon’s House—which is a purely secular research institute (though its members pray to God to help them in their research); the daily routine there is collecting facts, deducing from them theories, and applying these in all sorts of manners. It is not surprising, then, that in the New Atlantis the investigators have a high social status: while Bacon stays there on his visit, the president of Solomon’s House comes to town, and in a procession. The story indicates the nature of the procession: Bacon’s host has to book seats in advance for them to be able to watch the procession. Bacon even meets the president, who tells him all about the college. Among other things, he tells him that the inventions of the members of the college fall into three categories: some are in the public domain; some are dangerous, and the college divulges them only to the government, which keeps them as state secrets; and some are too dangerous even for that.
The case of J. Robert Oppenheimer shows how daring Bacon’s vision of inductive science and its place in society is even for our own day and age of technocracy. (Oppenheimer suffered penalties because he refused to help the state in its quest for the hydrogen bomb.) Amazingly, Bacon developed his daring vision soon after the Inquisition burnt Giordano Bruno at the stake for his idea of the infinity of the universe, and shortly before it tried Galileo for his display of intellectual independence! These events symbolize the power of the Baconian synthesis. Here is, on the one hand, the last attempt to keep natural science under control, and on the other hand, the demand to start afresh and appeal only to solid facts of nature, however minute yet utterly reliable. We should not fail to appreciate inductivism, as it helped implement this change. At least, most western thinkers had faith in it for over two centuries.
The great modern synthesis was the great idea that researchers must bow to the smallest factual detail and prefer it to any bold synthesis for fear of dogmatism. It was bold and progressive at first but became cowardly and reactionary after science won its freedom and after inductivism underwent total destruction due to ample valid criticism. Another great synthesis appeared later: Marx’s determinist theory, which denies that an idea may lie at the root of any social problem. Also false, it too was bold and progressive at first, only to become highly reactionary. It is nowadays a major obstacle to progress, since it diagnoses all the causes of academic maladies as material, whereas most of them are (not material but) intellectual, including the long persistence of the Baconian synthesis, justified by its past success but unjustified upon further critical scrutiny. Some of my best colleagues believe in both these syntheses, the Baconian and the Marxian. They are thus doubly rather cowardly and somewhat backward; that they can still be serious academics and even make positive contributions to the stock of human knowledge is the miracle of the irrepressible spirit of inquiry: scientists are opportunists, said Einstein.
I hope you liked the previous paragraph. I wish you would read it twice. I even recommend that you now go over the whole of this section again and see it in its relation to that paragraph. Then, allow me to further recommend, look around and try to see how much it helps make sense of what is going on in the commonwealth of learning around you. I hope it proves somewhat helpful. You may also find the present paragraph rather distasteful in that it does not suffer from excess humility. Perhaps; but never mind: the question is not how commendable or condemnable you may judge my character on the evidence provided in these pages; rather it is, how much you can improve your diagnostic abilities—with or without the aid of these pages. They do not contain the whole of my etiology, but a major ingredient of it; if this is not very helpful, perhaps you should look for help in some other direction. Conceited as I may be, I shall not take it personally and I shall not resent it: I only hope your switch, if you do decide to switch, stems from the independence of your spirit. This, after all, is what matters most—for you but also for your environment.
Everybody speaks approvingly of academic freedom. So we should. The question is how well the support for academic freedom squares with social determinism and with the preaching of slavery to (experimental and scholarly) details in the name of science. There are other threats to academic freedom. Thus far, I have hardly mentioned the damage that nuclear armament has caused the cherished academic freedom.
I confess I am of two minds here. My dislike for hot air makes me wish to protest against all those who speak on academic freedom with confidence, pretending to know what it is. Yet this is not the worst about it. Suffice it that when we see a case we know it: cases of infringement of academic freedom are scarce, as they should be, yet department heads—who often generate or support such cases—are experts in hiding them: nothing is easier than to convince a victim of such infringement to settle for a quiet compromise, and it is always possible to grant such a victim leave with pay for a semester or two before the appointment is terminated. (I have myself settled for such a compromise. It is quite tolerable if you have an alternative job around the corner, but a bitter decision problem otherwise.)
Clearly, the problem is usually not very pressing. If anyone feels strongly about it, one can study it. Yet one case is significant and can hardly stand postponement: the freedom to act politically as an academic. I will not discuss it here.
The worst of this part is over. I intend to keep the remaining sections of this part as brief as I possibly can. I am itching to come to the prescriptive part of this book.
Intellectual excitement, the listening to the music of the spheres, to repeat, is a rare phenomenon. So are many other of the higher things in life, such as artistic pleasures, or friendship, not to mention true love. Cohabitation, come to think of it, as the attempt at genuine partnership between two individuals, is almost unknown before the twentieth century and outside western or westernized culture. All culture is delicate; it needs cultivation. How, then, is it possible?
The answer must lie in the differentiation of kinds of education and training; one kind of training may be beneficial, another harmful. Moreover, the beneficial training may rest on false views and vice versa; just to complicate things hopelessly. Still worse, one kind of view may ensure ill success in the strange manner of overshooting its target; this is a prevalent error.
One of the characteristics of medieval romance is its emphasis on love—to the point that the word “romance” has changed its meaning from a more specific kind of love, a short affair between two people not married to each other, to signify any love-story. And yet, as all students of medieval romance repeatedly emphasize, the achievement of one’s loftiest ideals was self-defeating; a woman who offers her charms to a man thereby demeans herself, and so the perfect love is unconsummated to the last—even impossible to consummate, deliciously hopeless from its very start (Sir Lancelot’s for Queen Guinevere); the perfect object of love is quite usually an unattainable princess in the impenetrable castle; still better, she should be (and often is) an image seen from far away (as Beatrice appeared to Dante Alighieri, as she served as the inspiration for his Vita Nuova, 1295), if not a fairy proper, a sheer mirage (King Arthur’s half-sister Morgan le Fay, the Lady of the Lake, who is but the morning mist on the lake, a mirage). An interesting expression of such ethereal love in modern literature is Vladimir Nabokov’s Pale Fire of 1962, where a homosexual explains under what condition he may feel attraction for a woman: it is possible only in cases that cannot possibly involve its consummation. It is enlightening that this idea, which Freud explained as an expression of ambivalence, appeared this way in fiction—implicitly and independently in Anthony Hope’s 1894 The Dolly Dialogues and his 1894 The Prisoner of Zenda, and explicitly in Edmond Rostand’s 1897 Cyrano de Bergerac. Freud was right on this: the escapist character of romance lies in its being an attempt to ignore ambivalence. The pressure of ambivalence produces effort, and the increase of effort leads to further increase of effort. The sum-total of the exploit is lack of satisfaction resolving itself in sheer frustrated tiredness. Indeed, the medieval romance soon becomes a real bore.
Even Heinrich Zimmer, who in his beautiful The King and the Corpse (1956) succeeds in bringing the Arthurian romance back to life—partly with the aid of drastic abbreviation and sensitive selection, partly with the aid of contemporary psychological (Jungian) interpretation of symbolism, partly by the use of his own considerable talent—even Zimmer admits that much: “To be driven everlastingly around the world on adventures that never end,” he says, “is finally a monotony as narrow and confining as the magic circle under the flowering thorn. Ulysses wearies at last of all the monsters he has conquered, the difficulties mastered, the Circes and Calypsos at whose side he has slept his soul away … and he longs for the less eventful … things of everyday … his house, his aging wife.” Not so: it is in escape from his house and his aging wife that the adventurer looks for romance in the first place. Perhaps Ulysses was forced into adventure; Peer Gynt was not; and he returns to his aging Solveig at the end of this pompous, symbolic, and typically nineteenth-century drama of Ibsen in a kind of resignation of the tired; he has learnt his moral, feeling forlorn nonetheless—like Gilgamesh.
Zimmer’s idea of what satisfies in legends of this kind seems convincing. Readers vacillate with the hero, wishing both for the adventurous pursuit of the charms of fairies and for escape from their boring, dull, and aging spouses. In a legend, says Zimmer, the hero somehow transforms an everyday character with everyday social functions, maturing an everyday person through some legendary adventure. As the reader identifies with the hero—they are both males, obviously—he gains symbolic gratification through a mock-transformation of his own self. (This may help him accept his present dull life, or perhaps even help him transform.)
Strange: legends do speak of character and of social role, but they stress love; yet Zimmer centers only on character and social function, not on any emotion. Sadly, the reason is rather obvious: the male hero and the male reader can do something to transform their own person, individual or social, in truth or in mere fantasy; but they do not know how to transform their aging wives, how to unite Penelope or Solveig with Calypso or with the Lady of the Lake. Why not? Why are they all denied any transformation, even when, like Queen Guinevere, they may partake in an adventure?
Here comes my startling point, which may have some significance. At the bottom of the idealized view that is false in its over-ambition there is a false view that is its very opposite. Thus, the adventure is not merely an escape from ambivalence; it is also a poetic-symbolic expression of it and thus even an enhancement of it—of the ambivalence—in the pretense of resolving it. This is a universal truth; Freud has already adumbrated it: escape from ambivalence is at the same time an expression of it, in its very being quite characteristically and inherently escapist. Colin Wilson’s The Outsider of 1956 discusses this escapism in excruciating detail. The great art historian E. H. Gombrich observes in his learned Meditations on a Hobby Horse of 1963: plans germinate in dreams. Not all dreams crystallize into plans, however, and not all plans succeed. Moreover, some dreams come with inbuilt safety devices against their ever coming anywhere close to becoming plans: these are the true escape dreams; and exaggeration is the very simplest of such safety devices (superheroes). The very simplest mode of exaggeration is polarization, the describing of things in black and in white terms, even of the very same thing—the object of ambivalence—in polarity. Sex, as the prime object of ambivalence, has for its ambivalent symbol the dull, aging wife, as well as the scintillating fairy: in our society, which discriminates against women against its own ethos, women are objects of unbounded admiration and thereby of suppressed contempt. The asexuality of the coveted fairy is identical with the viewing of the wife as contemptible, at the same time, as it is impossible ever to plan to marry the fairy.
The philosophical counterpart to all this came out historically in a pathetic story from the life of a great and very compassionate philosopher, John Stuart Mill. He had a severe attack of depression in his early manhood. The attack was precipitated as he faced the question, shall I be happy if I ever achieve my loftiest goals? For, his answer was, decidedly not. He emerged from the depression months later when he read a moving autobiography in which the hero narrates the death of his father and his own subsequent undertaking of the role of the head of the family at a tender age. When Mill read this, he wept; there and then, he discovered his ability for compassion, and knew that he was on his way to recovery. Mill’s story is open to a psychoanalytic interpretation: his aims were imparted to him by an exacting father; father-killing relates to both his illness and its cure. All this is true, yet to rest the matter there is to miss an opportunity for contemplation.
Mill showed admirable traits: boldness, ruthless honesty with himself, and monumental frankness with his reader. The hallmark of wisdom is to stop now and then and ask whether what is subject to pursuit is worth the effort of the pursuit. Yet, all the same, there is an error in Mill’s wording: if I achieve my loftiest goals, shall I be happy? This question all knights-errant should have asked themselves—with the result that they would have stopped their adventures and their quests and all. For most of us it is a very difficult question, since our loftiest goals, inasmuch as we are aware of them, are not pertinent to our happiness directly: most people enjoy the game at least as much as they would enjoy its happy ending, which they can envisage only dimly. This, indeed, is the conclusion that kept Mill’s depression at bay: it is not so much the goal as its pursuit.
I protest: this adage is true if and only if the end is worth the pursuit. Mill was obsessed with ends: unless he found assurance that they merit pursuit, he could not enjoy them. He could not enjoy anything ephemeral. He enjoyed music; in his depression, he had to destroy his pleasure in it. He reasoned thus. Since the number of available tones is finite, and since the number of combinations of them in any melody is also finite, the number of possible melodies is finite and knowable; sooner or later they would wear off. This is highly questionable, but that does not matter overmuch: we can replace music with chess, since Mill’s reasoning is valid for it. Why does it matter that in principle the pleasure of playing chess is limited? Meanwhile you can enjoy the new melodies and the new chess strategies and gambits that come your way. Not so to poor John Stuart Mill in his depression; ends and means were worlds apart for him, and happiness he first related only to ends and then only to means! He was in error both times.
Happiness, Mill concluded, is not the end of any activity; it is a by-product—like the pleasant hum of a properly functioning engine. Apply this for a moment and you can see the fundamental error of the quest of the knights-errant: the mere achievement of getting together was for them an end, whether at home or in the far castle. It was for them not a mode of living, not incorporated in happy and robust daily life; it was for them something too lofty to be real.
We are now prepared to study the love of learning, and to criticize the lofty standards imposed on students and scholars alike, which spoil the fun of the activity. If I am anywhere near to being right, I should be able to point out to you the ambivalence concerning learning, and the hatred of normal healthy learning implicit in the high standards that condemn it as qualitatively and quantitatively much too inferior, and in the vain promise to resolve the ambivalence towards it by replacing the pleasures of learning with quests, with endless, boring, excruciatingly dull ordeals and trials and exams that inherently lead nowhere near the goal—these are but neuroses: of bookworms and of minute experimenters.
Let me admit: quite possibly, at times dull boring studies with no redeeming value may be quite unavoidable: intellectual life too is no utopia. My advice to you still stands: avoid them when you can. Avoid any instance of them unless it comes as a clearly limited task, and as one that circumstances forcefully impose on you!
Bores appear in all sub-societies. Bores need not concern us, least of all academic bores, whose company is much less of a burden than that of the talkative bores one meets in the proverbial clubs and pubs and stadiums. Admittedly, from time to time one has to listen to lectures delivered by bores, especially as a student. Ordinary academic bores are usually shy and kindly individuals, who are always glad to relieve you of any burden related to their own company. No, I am not speaking of these ordinary academic bores, but of ideological academic bores. They are the crusaders of boredom; they allege repeatedly that boredom is an essential ingredient in learning. It is definitely not.
Consider boring entertainment. Boring literature and poetry and movies exist too, and in abundance. What is surprising is that the high romance itself resolves into sheer boredom. This boredom, you remember, unlike the boredom of a novel written by the untalented or by the unimaginative, avails itself of some more interesting explanation. Similarly, it is not the untalented or the unimaginative or the unproductive academic bore that intrigues us here, but the ideological academic bore, who is a pest and a puzzle combined.
The neurotic symptoms of the ideological academic bore are substantially different from the reaction patterns, neurotic or otherwise, of the ordinary academic bore. The differential diagnosis is extremely simple: put in the hands of bores books or articles that are obviously exciting and that pertain in some measure to their own fields of study, and see how they react. The ordinary academic bores react in some ordinary fashion. They may say that just at the moment they are too busy to read the material you recommend. In such cases they may or may not express neurotic anxiety at this limitation; they may or may not put the reference in question on their reading-lists for a propitious moment, or even purchase the texts in question and put them on their special reading shelves—we all have such shelves, of whose use and abuse I shall speak later. Ordinary academic bores may even read exciting books, or at least attempt to read them, and find them uninteresting or mildly interesting but not very exciting. If they do find some texts exciting, then obviously they are not irredeemable bores. Feed them regularly with such exciting literature and they may blossom.
Ideological academic bores react very differently. They will brush off a suggestion that a text is of value. They will do so impatiently and even unjustly. If they cannot do so, because the text comes with the recommendations of prominent persons or of many colleagues, then they will be rather obsessive about acquiring and reading it at the first opportunity. The first opportunity may come only years later, but for no other reason than that our ideological academic bores are genuinely busy with innumerable commitments—all of them a trifle dull, of course. It is known that ideological academic bores who work overtime will still increase the load of their dull work because they wish to relieve the pressure they undertook when they agreed to read the exciting text in question. One way or another, ideological academic bores will find the time to read. Sometimes they will simply have to do it out of ideological commitment: for example, they will undertake to review some dozen books a year—a full-time job for an ordinary academic with some sense of fun—only to discover the exciting text in question to be substantially relevant to one of the books they have undertaken to review. When ideological academic bores read exciting texts, they explode with indignation. That is the hallmark of ideological academic bores: they fear and hate (intellectual) excitement.
The specific arguments of ideological academic bores against an exciting work are uninteresting: one can usually grant the validity of the claims that the bores amass yet fail to see how these establish their alleged corollary, the condemnation of interesting texts; one may fail to see the reasons for the fury of ideological academic bores. Sometimes, bores offer obviously unfair criticism; sometimes, however rarely, an exciting work conforms to exceptionally high scholarly standards, and yet our angry bores will then employ even higher scholarly standards than they usually do, merely in order to be able to condemn the exciting text as unscholarly.
Ideological academic bores fear intellectual excitement; they often declare it unfair that intellectual excitement should be available to readers who have not put their necks to the yoke; it allures as a real prize, but they are ambivalent about it; they therefore set very high standards in the pursuit of the noblest and greatest intellectual excitement of them all. They develop the theory that true scholarship is very boring and yet worth undertaking since the results of genuine scholarship are most exciting and truly wonderful. An instance of such philosophy, and the most important one at least in the scientific fields, is one that I have referred to in the previous section, namely the philosophy of Sir Francis Bacon, in which both boredom of the means and excitement of the ends are separate and separately prominent. No methodologist has ever contrasted ends and means in the pursuit of knowledge so much as the leading, great pseudo-scholar, Sir Francis Bacon.
The theory of the inevitable boredom of learning is most obvious and most pernicious in those trends of modern education that develop techniques for alleviating boredom. The very idea behind these techniques is the salutary recognition of the existence of boredom in classrooms and of the psychological fact that boredom is painful; this, combined with the defeatist opinion that boredom is unavoidable: we can only sugarcoat it. Dr. Maria Montessori could never imagine that learning is a game, and a most enjoyable one at that. What she noticed was that pupils would not avoid games in the classroom, and would play them under the table when forbidden, unless we put enormous, damaging pressure on them. She preferred to permit games openly and to interlace study and games—never to unify them. Even Russell, in his educational experiment, failed to attempt to do this, though he realized, at least, that sugarcoating a bitter pill is not good enough, since one has to learn that throughout life we have to swallow bitter pills with no one to sugarcoat them. Does education, especially the intellectual part of it, have to be a bitter pill? Are scholarship and beauty and the other refinements of life not the very sugarcoating of life itself?
The teacher-training institutions render it easier for teachers and educators to bore than to excite. This is open to improvement. Nevertheless, it remains impossible to meet the challenge of turning each and every meeting of a class into an intellectual experience. Yet failure to achieve utter success is very distant from utter failure. Modern educationists view education as necessarily an almost utter failure (kids, they say, are so unable to concentrate!). This is so obviously false that I have to use the prevalence of Bacon’s methodology and similar ideas, as well as some psychology, to explain the facts: only few educationists recognize boredom as an obstacle while opposing the educational techniques of sugarcoating (as not good enough).
Without the ideology of boredom, without boredom as pseudo-scholarship, much can be studied in different lights; we may reduce the hours of instruction so as to make it more possible to make each instruction period successful; we may try to reduce all compulsory education and tuition (alas, not compulsory school-attendance) as much as possible and explain to our students why we cannot reduce them any further. (This alone may render the unexciting exciting—just as experiments in industrial psychology may succeed as workers may have their boredom temporarily alleviated and consequently their productivity increased.)
Boredom is most painful on the conveyor-belt (Metropolis; À Nous la Liberté; Modern Times). This led to the development of a new field of pure and applied research, towards the improvement of the quality of working life. Despite dark forecasts, it has done wonders in the hands of people who took it seriously. In Sweden it terminated the conveyor-belt altogether. It took time before some researchers in this new field noticed that what Thomas S. Kuhn has christened normal science comprises the boring standard textbook, accepted on the authority of the scientific community, and the boring research that it approves of. It becomes obvious then that, contrary to Kuhn’s counsel, the authoritative view of science invites upgrading too: boring research is intellectually pointless (although it may be valuable educationally or technologically). In view of all this, it is amazing how little thinking has gone into the study of the quality of working life in the education system in general and in Academe in particular. Dullness abounds there, and not so long ago the favorable view of it was almost universal. Things have changed now: the movement for the improvement of the quality of working life has shown empirically and repeatedly that, at least in the developed (capital-intensive) industries of most developed countries, dullness reduces productivity, and that the reduction of dullness by adding background music and entertainment is insufficient. The movement demanded improvements such as rendering jobs less specialized and more challenging. Under the influence of that movement, work became more interesting across the board in a few countries. The total result is still rather disappointing, to be sure, and far too many skilled workers—to say nothing of the unskilled—are still needlessly bored. Yet, as automata successfully took over much dull work, you would expect workers to have more leisure. Not so.
Bertrand Russell observed that this requires much shorter worktime and that this requires education for leisure. This may boost the do-it-yourself industry that workers may find interesting enough. By contrast, do-it-yourself intellectual work requires no support from industry. It is the very familiar, well-established amateur research—scholarly or scientific.
Non-mathematicians hate mathematics; non-historians hate history. This hatred cannot be natural: it must be the result of prolonged inculcation, obviously by the education system. High-school teachers often are disgruntled people who have failed to enter Academe or to master their trades well enough to enter some non-academic brain market. Some teachers enjoy teaching to the extent of doing so in spite of the low salary and social status incurred; they are the exciting teachers. Naturally, exciting mathematics teachers tend to teach budding mathematicians and to dislike other pupils as a distraction. Exciting history teachers tend to teach budding historians and to dislike other pupils as a distraction. Yet such teachers must also teach non-specialists. Why do non-mathematicians have to study mathematics? It is dull, it pains them, it may jeopardize their careers before they have even started; they will forget it all anyway; they will never have an occasion to use it; they only learn to hate it, to be blocked against it. Why do we torture them so?
My protest is not against the torture. I have already agreed that some torture is necessary in industry, and I must admit at least the possibility that now and then it is also necessary in education. My protest is at the inefficiency of it all, at the thoughtlessness of it all. Ask industrial managers and industrial psychologists about the boredom inflicted on employees and about ways and means to minimize it, and you will get tolerably reasonable replies, though seldom very good ones. Ask teachers and educationists of any kind the same question, and you will not fail to notice the abject poverty of their reply; they will even confess that they find the question hard to answer because the true answer seems to them obvious. It is admittedly hard to answer a question when received opinion takes its answer for granted, so that it seldom undergoes any critical scrutiny whatsoever. My protest is against taking for granted a silly answer to a question that invites teachers and educationists of any kind to deliberate on it.
One vexing task that boring editors impose on writers is excessive documentation. Much of the documentation that regularly occupies lengthy erudite footnotes is entirely redundant; scholarly readers have no need for it and others will not even notice it. Advanced students will naturally prefer selective bibliographies. Regrettably, however, most bibliographies are neither comprehensive nor selective. They are thus useless. Vague references to well-known works are often more helpful than exact references, since a well-known work is published in many editions and it is a real burden for those who wish to use a reference to find the edition that the writer has used: if the writer refers to a page-number, critical readers can use it only if they have before them the same edition as the writer, or else an electronic one, in which case there is need not for precise references but for key words.
The only reason for the excess of references is that their function is not clear. Their function is to serve critical readers, or readers who wish to pursue the matter further in any other way. They function, alternatively, as proof that the writer has done the duty of looking up enough dusty books and periodicals to justify a contribution to human knowledge. I have tried a few times to follow this idea, but editors blocked me. Perhaps no one will read the contribution to human knowledge that they edit but …
Yes. Of course, you are right. Readers who can use the internet need far fewer references, as there they can find them with ease and more usefully. When will the conventions of reference change to accommodate the internet?
This requires a clarification of the need for references. Junior and senior scholars need them for different reasons; they should follow different conventions. References to evidence for claims that readers may question are different again. So are references to texts that authors praise and to which they hope to draw the attention of readers. All this invites international authorities to examine it, cater for it, and render it more efficient. This demands the examination and reform of the publication pressure that imposes inefficiency on the commonwealth of learning.
Concerning intellectual excitement, the philosophy of science that Sir Karl Popper has developed, and that I employ repeatedly in the present volume, is the opposite of that of Sir Francis Bacon. For Bacon, ends and means are worlds apart, and the excitement lies in the ends alone. Criticism is utterly lowly work, like the cleaning of grounds. Scientific routine work is the patient collection of innumerable seemingly pointless minute data, but the end is the true theory of the nature of things, says Bacon.
Popper still presents the truth as the end of science, but as remote, vaguely envisaged, perhaps never attainable. Each step of scientific development, he added, is a step towards the end, a new exciting theory or a new exciting general fact (a refutation). If it is valuable, it is exciting.
Ideological academic bores suffer inner conflicts: they fear and desire intellectual excitement. To resolve this conflict they pretend that excitement is either chimerical or attainable only at the very end of a long, arduous trail of trials, deeds, and travails. This is self-defeating. Innumerable educational institutions, from elementary schools to the institutes governing the publication of research results, all inculcate and reinforce ambivalence towards the pleasures of the music of the spheres by imposing boredom in the name of high standards. To beat the system, try to enjoy studies: study only the enjoyable. This will benefit you and help you contribute to the advancement of learning. Yes, I too can offer an ideology and a slogan, not only the academic bore. Let those who wish to be bored get bored to their hearts’ content. You and I, let us try to have fun and enjoy our studies, every measly bit of it—like real gluttons.
I do not know where you could find a less prejudiced person in eighteenth- and nineteenth-century Europe than Immanuel Kant, and yet look at his view of women. Appalling. Intellectual women he found repellent; his ideal of femininity was the soft, emotional homemaker. Consequently, his ideal of masculinity knew almost no emotion: the sense of respect is what he found behind moral conviction, behind friendship, behind anything common to all; never compassion. Penelope and Solveig suffer enough, but their mates suffer as much: neither knew reasonable daily-life-without-boredom. Pity.
Strange as this may sound to you, in this section, as elsewhere, I intend to discuss neither the phony academic nor the coward. This is not my style. Try the following. Say in public, on any public occasion, that universities should raise the level of their vigilance against charlatans, impostors, frauds, humbugs, windbags, buffoons, phonies, pseudo-scholars, or pseudo-scientists. You will win applause. For my part, as a rule I find the rather obvious phony on campus innocuous, usually civil—even friendly—and often entertaining. (They cause harm only when they are administratively ambitious; high administrative aspirations push their owners to needless adventures, especially the demand for the reform of Academe by tightening rules against phonies. These may cause more harm than they prevent.) Similarly, it is all too easy to urge all educators to stress the importance of courage, civil and intellectual alike, to boost it and to promote it. When we praise the brave, we declare their courage beyond the call of duty. When educators violate this declaration, their more vociferous colleagues, administrators, and politicians commend their zeal. The terrible thing about hypocrisy is not its low level of morality, but its total lack of intelligence (Bernard Shaw, The Devil’s Disciple, 1897).
I have no intention to condemn cowardice. In days when public opinion criticizes as brutal even the official condemnation of exceptional cowardice on the battlefield, it is out of all proportion to condemn the cowardice behind the phony streak of some ordinary scholars. Nor does the cowardice of these scholars impede their scholarship—on the contrary, it may spur them to burn more midnight oil—usually to no avail, but not always so. Nor are all pseudo-scholars cowards; some of them are surprisingly brave by any standard (The Great Impostor).
I hope this suffices as a warning to you. Please do not get exasperated with me. As I have said in the opening of this volume, it is very hard to write a medical text without giving the impression that the whole world is sick, or a text on psychopathology without insinuating that the whole world is crazy. I do not wish to generalize except on the relatively widespread damage due to certain bugs in the system. What bugs students most is just that cowardice which not only makes esteemed professors try a bit harder but, alas, also makes them force their students to try ever harder.
My concern here is not with cowardice, much less with the yellow streak; it is with the cowardice that we inculcate through our educational system under the guise of something commendable. There is no prohibition against cowardice, and if one finds the price one pays for it not too high, there is nothing amiss in that cowardice. If one feels that the price is too high—and reading these pages may easily show that it is—one might try to become less cowardly, and even seek advice on this matter. Anyway, that is not your business. Your business, and it is your welfare that I have at heart, is to recognize the symptoms of cowardice and the means by which even some of your best professors may transmit it to you: I wish to immunize you against the training to be a coward: there is no need for it. In other words, I am mainly concerned with academic mental hygiene. I wish to help you master your choice, not to perform it for you. Please do not forget that I recommend the avoidance of all heroism, particularly intellectual. At least as a default option, heroism is objectionable.
Preaching heroism has done incalculable damage. Not only has it blinded us to the heroism of unromantic heroes such as the fighters against poverty and illness whose battlefields were old libraries, laboratories in basements or in attics, slum areas, political arenas, and their likes. This, in any case, is a small sacrifice, since real heroes are quite ready to forego the recognition that is due to them. The real damage of the preaching of heroism is that it is irrational. I remember very well how impressed I was when, in a discussion with a brave German on this topic, he mentioned a close relative of his who had been a dedicated anti-Nazi, yet sufficiently poisoned by Nazism’s philosophical ancestry (namely romanticism) to do what his enemies most dearly wished him to do: rather than care for his safety and do his best to sabotage the enemy who was too strong to attack openly, my friend’s relative organized a street-demonstration and this way enabled the beasts to murder their critics there and then in cold blood, as my friend had forewarned him. Once the leadership of the anti-Nazis sacrificed itself so heroically, the rest of the nation followed sheepishly. For, as Karl Popper has pointed out, nothing helped the Nazis so much as the division of humanity into natural leaders and natural followers, a division made by Plato and Aristotle and repeated by all the reactionaries from Hegel to Rudyard Kipling; nowhere was this division as popular as in pre-Nazi Germany. This popularity is what secured the position of the Nazis from 1934 onward. If one in ten German soldiers had dared to sabotage the operations at hand without undergoing obvious risk, millions of lives might have been saved; the repeated occurrence of such acts of bad faith might, in itself, have spread sufficient suspicion and distrust to cause the collapse of the regime itself (The Devil’s General).
The German philosopher Karl Jaspers came out with a group-confession of guilt—after the war—saying that more Germans should have demonstrated in the streets against the Nazis, since if more of them had been heroes, they might have prevented the Holocaust. Funny, he did not even notice that if more Germans had been willing to demonstrate, demonstrations would have required much less heroism. Even brutes like the Nazis could not have murdered millions of Germans in the early thirties.
The passage from Karl Jaspers that I have just referred to was brought to my attention by Sir Karl Popper in one of the many private discussions I had with him on the topic of civil courage. Though for much of my view on the topic I am indebted to him, both in general and in detail, my application of these views to academic standards runs in rebellion against his own views and practices. Of course, he had the full right to apply his incredibly high standard to his own work. This was his own choice, and no one had the right to comment on it except at his invitation. Some colleagues have expressed their disappointment at the fact that he had advertised many years ago a book that became available to the public only decades later; this, of course, may well be a consequence of his exceptionally high standards; but then this, too, was his own affair. It is when he pushed his own standards a bit too hard on his close associates, such as I used to be, that the result was the formation of a close circle of devotees, to which I used to belong, with all the ill effects of schools, which form closed societies.
It is because of the ill effects of such semi-public high standards that I wish to discuss them. Unfortunately, the system imposes standards on students and young colleagues—rather irrationally and high-handedly. This calls for criticism and for analysis, intellectual and psychological but mainly social. All too often, heroically maintained high standards are standards of caution or prudence. Here, in our context, “caution” or “prudence” are merely synonyms of “cowardice”: we say of an action that it was cautious when we endorse it and cowardly when we oppose it. As I do not oppose and sometimes even recommend cowardly actions, perhaps I should be using the word “caution.” Yet I do feel that methodical or systematic caution is to be tolerated, not encouraged.
We all wish to be more appreciative of the great thinkers and artists among our contemporaries than our predecessors were of theirs. This is often very hard, because great minds often bring into question our very standards of judgment, our very criteria of the significance of thoughts and of works of art. Many thinkers and artists and public figures offer ideas that challenge our standards; few of these ideas turn out to be significant. At times, some of the others are cranks and phonies. It is hard to expose a clever phony, as Bernard Shaw has wisely observed. Therefore, he said, we are justified in exercising great caution when approaching unusual works. If ever caution was justified, it is in such cases. The only objection I have to those who exercise it is their failure to recognize that their caution all too often prevents them from recognizing contemporary geniuses as they wish to.
Michael Faraday was a bold thinker whose bold ideas rendered quite obsolete many traditional debates—such as those in the search for forces acting at a distance in efforts to explain Ørsted’s and Ampère’s electrodynamic discoveries, or as the search for an aether. These debates continued for decades after his death, and quite uselessly.
Faraday as a Discoverer is the leading book on Faraday—by John Tyndall, his closest friend and only pupil. He wrote it shortly after Faraday’s death, in an attempt to establish its hero’s greatness as an experimenter and to excuse his quaint ideas about fields of force as sheer personal idiosyncrasy rooted in his ignorance of mathematics. In his preface to the German edition of that work, the great—if not the greatest—German electrician of the period, Hermann von Helmholtz, repeated Tyndall’s suggestion; but since meanwhile Faraday’s ideas had gained some currency (especially thanks to Maxwell; Tyndall refers to Maxwell’s papers and stresses that they satisfy a high standard of mathematical rigor), and since even Helmholtz himself had made some contribution in that direction, his attitude was more tempered than Tyndall’s: he did not entirely and finally reject Faraday’s speculations about fields of force, but was ready to wait and see which way the cat would jump. About a decade and a half later, he was invited to London to deliver a lecture in memory of Faraday, and there he claimed—in thinly veiled language, to be sure—that he had always been a follower of Faraday (as far as the conservation of energy was concerned). Helmholtz was a very great thinker, but I, for one, would have found it hard to become a friend of his. Still, I should not exaggerate: his conduct seems faultless when compared with that of the American historian of science L. Pearce Williams, whose Michael Faraday: A Biography, 1965, presents Faraday as the grand old man of physics of his day.
You may ask whether the caution of Faraday’s contemporaries and immediate heirs led to the formation of a Faraday school. It did not. Faraday’s ideas were suppressed, and early in his life he felt the suppression was possibly just, since he followed Bacon’s philosophy—in its traditional version that declares all speculations harmful. Later on, he modified it to suggest that only some measure of suppression of speculation is just. One of his boldest ideas was that electricity can interact with gravitation. This idea is so bold that Einstein came to it independently only in the twentieth century. Einstein first used it in his general relativity (the bending of light-rays) and more centrally in his very last theory. Despite great success and tremendous recognition, he too was isolated. Of the leaders of physical science of his time, only Schrödinger took up his ideas at the time—and Schrödinger was isolated too. Who says rank-and-file scientists are not cowards? You can explain much of their attitudes, actions, and inactions one way or another, and even their having put a quarantine on people like Einstein and Schrödinger is explicable—with much truth—as rooted in certain ideological convictions; cowardly this quarantine nonetheless was. (This is Kuhn’s view.)
Towards the end of his career, Faraday had his experiments on electro-gravity published despite their showing no result. From the experimental viewpoint, they were rather poor. He published them partly to adorn a highly speculative, most interesting idea, to render a speculation kosher. Until that time, it was the custom to clothe speculative publications with thin experimental garb. Faraday’s last paper on the topic, however, was rejected by Stokes—a famous young scientist then—on the ground that the experimental part of the paper did not meet received high standards. The editors of Stokes’ correspondence felt a bit uneasy about his daring to reject a paper by Faraday, but they, too, insisted that Stokes’ judgment was right: Faraday had failed to meet accepted standards. This is untrue: the expectation was that speculative papers should include experimental material, but that material did not have to meet high standards. Caution sometimes leads to the evasion of issues by means of the application of exceptionally high standards. Stokes was somewhat of a dogmatist rather than somewhat of a coward. Personally, I prefer to view this suggestion as false: after all, Stokes was quite an important man of science, and when issues were imposed on him and he could not dismiss them as rather unscholarly presented, he was quite able and willing to face them. He could show courage when it was expected of him. Hence, when Stokes displayed cowardice, it was ideological.
Before elaborating on all this, let me wind up the story of the non-existence of a Faraday school. To begin with, there is almost no reference to fields of force in the literature for about two or three decades after Faraday had introduced them (1840). For instance, the three-volume Treatise on Electricity of Faraday’s close friend August de la Rive, translated into English soon after it had appeared in French, contains many references to Faraday, all highly laudatory, but almost always to his experiments. Of the two references or so to fields, the first comes with the pretense that Faraday had introduced the idea of fields as a purely mnemonic device. (The so-called right-hand rule concerning the direction of flow of an electric current and the directions of the electric and magnetic forces it generates is a standard example: both Ampère and Lenz had devised mnemonic rules for it.) When Maxwell published his early, celebrated papers on Faraday’s lines of force, he was still convinced that Faraday’s fields might be reconciled with older ideas through a model of the aether. Before that, Faraday had for many years delivered a series of Christmas Lectures for children in London that were very successful, and so by the time of Maxwell a new generation of physicists existed in London who were used to the idea of fields as a matter of course—not knowing that it was so very revolutionary. Thus, the idea of fields became current in the English literature, and it slowly spread onto the Continent under the influence of Helmholtz, Hertz, Boltzmann, and Poincaré. Maxwell himself got used to the idea that fields of force are so very revolutionary only towards the end of his brief life, and the same is true of Hertz. It took the young Einstein to argue that fields of force impose a deviation from Newtonian mechanics (as Faraday had claimed). For years, Einstein was supposed to have developed his ideas with the crucial aid of the experiment of Michelson and Morley.
Yet, as Gerald Holton has shown, he had not known of that experiment. (He did not escape its indirect influence.)
This rather sketchy summary will do for now, I hope. Before Einstein’s success in replacing Newton’s mechanics, most physicists did not take fields of force seriously. This is an example of how easy it is to defend cowardice by reference to high standards: after Einstein, it took less courage to consider fields of force seriously than before, because Einstein had met the highest standards, because he had executed a task that his predecessors deemed impossible. We see, thus, that high standards may be a mark of inability to undertake challenging and risky tasks—risky, since prior to Einstein the importance of fields of force was much more in question. The imposition of high standards on oneself, however, is less of a cowardice; it is rather submission to the cowardly atmosphere around one than endorsement of it. It may be a bit unfair to refer to the case of Galileo here, because in his time the Inquisition made any display of intellectual courage demand civic courage as well. Yet, with due caution, we may consider his case too in the light of, but not following, Arthur Koestler’s bold though not very scholarly The Sleepwalkers. Galileo was a Copernican long before he would speak up publicly in defense of his conviction, for fear of ridicule and of persecution. He says as much in a letter to Kepler of 1597; his stupendous astronomical discoveries came in 1610, and his own publications in favor of Copernicus came soon after. After having made these discoveries, he felt he had met sufficiently high standards to warrant serious publication. The Jesuit astronomers of the Roman College supported him. Roberto Cardinal Bellarmine, his great admirer-opponent, the most reactionary, most powerful Cardinal, the ideologist of Rome then, and the individual responsible for the execution of Giordano Bruno, took him seriously enough to threaten him with the same fate. Later on, a new Pope who was a friend of Galileo invited him to express his opinions freely.
He later betrayed him, in order to rescue the Church from ridicule: he felt he had no choice. As Galileo was a faithful Catholic, he had no choice either; he was ready to admit defeat—personal, not intellectual—but only after having a proper debate with the inquisitors. This appears in a published version of the letter in which the inquisitors asked the Pope for permission to allow Galileo to argue with them. He then exhibited exemplary civic and intellectual courage. But that is a different story.
The situation concerning courage or its absence, and concerning high standards, is still more complex, and is very well worth examining in somewhat further detail—if you can bear with me a little longer. What I have said thus far admittedly covers, even amply, diverse cases of pseudo-scholarship. One example is the inability to take Popper’s critical philosophy of science seriously, preferring instead to go on writing papers on the justification of induction in the traditional inductivist and positivist modes. Another is Stokes’ application—or rather misapplication—of high standards in rejecting Faraday’s very last paper (yes, it is still unpublished; excerpts from it, however, appear in Williams’ life of Faraday and in his published diaries). I have not yet sufficiently discussed the act of succumbing to exceptionally high standards under pressure from a cowardly environment and from fear of the ridicule of one’s (cowardly) colleagues. So let me take up one psychological point a bit further. I have nothing significant to say about academics who impose high standards on themselves: it is the right of every individual to endorse any standard, even one that will consequently prevent the publication of one’s most brilliant ideas. Unfortunately, however, such individuals often happen to be poor educators. While not censuring them in any way whatsoever, and while expressly refraining from dissuading you from choosing to be their student, I feel I should give you some warning against the possible ill effects of such a choice, to help you prevent the damage they may cause you.
There is a relatively new theory, of a somewhat Freudian but not orthodox Freudian stock, that is rapidly gaining currency these days. I do not quite know its history; though I suppose it is very interesting, I do not wish to investigate it right now. It is a theory of emotional growth through differentiation. It assumes that many emotional distinctions, as well as their associated verbal distinctions, develop during adolescence and early adult life in a highly complex process of experiences plus the verbal instruction needed to describe them. According to that theory, a certain emotional retardation—often but not always viewed as neurotic—may lead to a lack of verbal distinctions, to the use of two words as synonyms; and vice versa: a lack of articulation inhibits emotional growth. Emotional growth, thus, is an involved educational process.
For individuals with no growth of, say, civil courage, the term ‘civil courage’ has little or no emotional content; you cannot say of them that they do or that they do not possess civil courage; the term, as philosophers say, does not apply to them. Nor can you say that they are a borderline case between the brave and the cowardly, which is psychologically quite a different animal. Possibly, then, people with a zero degree of courage respond to a challenge with neither courage nor cowardice, whereas people with no concept of courage at all will either totally fail to see the challenge or, while meeting it, develop an attitude towards it—thus becoming brave or cowardly or in-between. Most important for our discourse, this reaction will often depend much on both the significance and the magnitude of the challenge—unlike that of the settled middle-of-the-roader. If the challenge is both important and of manageable dimensions, it is more likely that people with no concept of courage will develop their own courage. Alternatively, they may become cowards. Developing cowardice, please note carefully, may be a most unfortunate mishap, since even a brave person may reasonably respond in a cowardly fashion if the challenge is demanding yet not significant.
Empirical observation suggests that publication pressure generates publication blocks. These torment surprisingly many academics. One explanation for this is that mental blocks pass easily from professor to student; they are contagious. Their transmission mechanism is simple: blocked professors prevent students from trying to publish or from submitting dissertations. Of course, they do so in an attempt to raise their students’ standards, with the result of depriving them of the opportunity to develop intellectual courage and of instilling in them obscure fears of rejection slips, of hostile reactions to manuscripts, and of other similarly harmless outcomes. The professors who so impede students’ progress take it for granted that they are in the right. They are often devoted to their students and show great interest in their work, presumably with no personal motive and merely for the greater glory of learning. This is questionable. All that you can say in favor of high-standard professors is that they are likely to have ill effects only on students who have no intellectual courage to begin with, or not enough inner resources to break away. This argument is faulty. Good students survive bad schooling, true enough, and poor students may be unable to make use of good schooling; so why reform our education at all? In truth, most students are neither so good as to be invulnerable to bad schooling nor so bad as to be impervious to good schooling. If you decide to join a school where publication blocks are inbuilt and some high-standard professors sustain them, I do not wish to dissuade you; but I hope this warning will help you to immunize yourself against the disease. Let me report this: many of my papers received negative responses when in manuscript and praise when printed in prestigious periodicals. When you seek the advice of peers, try to choose those who have enough imagination to read a manuscript as a printed paper, who have a sense of proportion.
Not all who suffer publication blocks catch them from their teachers; but since some teachers are contagious this way, I had to warn you. To show you how one can develop cowardice through high standards without catching it from one’s teachers, however, let me take a different case of cowardice; not publication blocks, but something much worse: professors’ inability to defend their students against injustice within Academe.
Even in the case of such failure, we should not rush to condemn. Even a very brave professor may notice a case of injustice towards a student and yet do nothing about it. For instance: if the student is not officially under your care; if the student is a poor scholar who would do better outside an academic framework and who will, in all likelihood, leave college one way or another; if the injustice done to the student is not so very clear-cut that one can easily demonstrate it, or if it is an injustice backed by a faulty standard that a considerable section of the university endorses; and if, on top of all this, fighting the injustice in question may lead to an enormous upheaval, say, because it has been committed by a very powerful yet sensitive colleague—under such circumstances, I say, it is most reasonable to overlook the injustice. After all, the world is full of injustice, and we cannot fight all the ills around us efficiently. It is almost inevitable, then, that professors who decide to forego some battles will soon come to question their own courage. If they are old fighters, self-doubt will not harm them, and it may even lead them to some deeper self-examination that cannot but be all to the good. However, if they are neither cowards nor fighters—say, new appointees, young and inexperienced and unable as yet to shake off vestiges of academic mystique—it might indeed develop the coward in them. More specifically, they may develop the cowardly technique of always examining students’ credentials and looking for faults in them prior to engaging in battles against what they consider unjust. Moreover, searching for faults in students’ credentials invariably leads to finding them: students with very good credentials are immune to injustice, since the unjust is usually a coward. Always finding faults in students’ credentials thus becomes a habit and slowly develops, through suppressed guilt-feelings, into an obsession.
Misfortune alone may make young and inexperienced academics cowards who will slowly develop into stern guardians of high academic standards, even into national figures in the field of science education. It is easy in Academe to become absent-mindedly a coward par excellence and a top-notch pest to boot.
Let us not condemn such people. Their sincerity, their concern and suffering, the correctness of their standards—we need not question any of these for one moment. It is as easy to dismiss the insincere as to dismiss those who have no scholarly standards whatsoever and who, being phonies, pretend to possess whatever standard you represent. Phonies—whether intellectual or emotional—are both rare and rather innocuous; and attacking a phony of any kind whatsoever is cheap and useless. Following the ancient Talmudic scholars, I argue that the most correct standards turn vicious when applied stiffly; that a very high degree of sincere concern, by hiding self-doubts, turns pernicious; that the very disinterested attitude of a defender of the standards, by becoming zealous and self-righteous, turns into the self-perpetuation of much of the ills of the system. I would be ashamed to write this paragraph, as, after all, it contains only the intended moral of most of Henry James’s stories and novels; but I am applying it to a case he (understandably) left out: Academe. My view is this. A happy-go-lucky academic who is somewhat of a phony is better than a tormented zealot whose standards are the purest. We must tolerate zealots; we may also note that they may produce bright ideas and display fine scholarship. What I am advocating here, as a kind of ideal, however, is the flexible, empathic intellect of the scholar. Kant, one of the most famous academics ever, was dogged in following his principles. He showed great courage this way and raised the standards of thinking. He was so strict as to declare all lying wrong, even lying to a killer. The best response to this is Bertrand Russell’s report: he once lied to foxhunters to save a fox and felt no compunction.
My suggestion in favor of flippancy frightens me a bit, as I have seen flippancy lead to disquieting results. I shall describe what I have seen, since it shows both that my idea is not so very new and that it offers no guarantee of success. In certain centers of higher learning reputed for exceptionally high standards, one might well expect to find quite a few geniuses who can be at the top of their profession without much effort and who, knowing this, do not work hard but rather have fun. Such creatures are rare, and they land in obscure places at least as often as in the most celebrated institutes. Rather, in such institutes, hard work is standard. People there may work hard from a neurotic, insecure need to be admired by all their colleagues everywhere as great scholars. They may work hard conscientiously. They may work hard because their administrators bully them to make regular sacrifices for the sake of their own reputations as scholars and teachers, their reputations as fellows of their institutes, and the reputations of their institutes (the so-called star-system). With so much hard work around, you may expect some of it to be visible. Not at all. It takes place in great secrecy. Fellows there are on exhibition in the Faculty Club, where they pretend to sit at leisure and lazily stir a conversation on topics that they apparently know only from the way they kill time; they pretend to be interested in frivolities, to have read the latest bestseller, because they read any old novel for fun at almost any hour of almost any day. They fill their huge armchairs to show you how relaxed they are and how much they are enjoying themselves, stiff in their pretended relaxed posture, investing supreme efforts in not glancing at their watches or at the grandfather-clock in the corner, in not looking around to see which V.I.P. is noticing how relaxed and carefree they are.
Breaking up a light conversation after coffee in the faculty club of a distinguished institute of higher learning is as hard as it is for penguins to start fishing. The penguins crowd around a hole in the ice but, afraid of the hostile water, refrain from diving in. They slowly close in, more and more crowded, until one of them falls in. If no enemy is present and the fall guy emerges from the water alive, then fishing starts furiously. At least penguins enjoy the catch. Academics with exceptionally high standards always think of a bigger fish they might have caught.
Nothing shows how pathetic all this is better than the general excitement and apprehension accompanying the rare visit of a distinguished person to the faculty club. Excitement, since this adds luster; apprehension, since the guest may talk shop. As if the topic two people in a faculty club choose to discuss at the coffee table signifies.
No; this is not specific to faculty clubs. Knock on the door of the office of a learned colleague and you will see that the book or manuscript that was on the table is covered; and, if the colleague considers you a person of consequence, you will meet a leisurely individual who will invite you to have a light-hearted chat.
Acting like the academics I have just described may incur suffering, but also remuneration. Whether my own attitude is preferable, I do not know. Colleagues with high standards have kept admonishing me—this, I have told you, is one of the facts that have led me to write the present volume—for being dangerous and inconsistent, for advocating high standards and low standards alternately. (The highly-strung Thomas Kuhn has said this of me in print.) I do not wish to pretend that I have a cure-all: you may run into trouble no matter what you do. Still, there is always hope, and sometimes progress. Since high-standard pseudo-scholarship rests on outmoded ideas that are increasingly rejected in the various fields of psychology, education, sociology, and social, political, and moral philosophy, not to mention methodology—in psychology largely due to Freud; in education largely due to Freud, Dewey, and Russell; in social and political philosophy largely due to Weber, Keynes, Hayek, and Popper; and in methodology due to Einstein and Popper—there is more ground for hope in the plea for this academic reform than in other fields of social reform. When the reform is somewhat widely instituted, the pressure against practicing the standards advocated here will decrease. Life is, in ever so many respects, easier and pleasanter than it used to be; there is no reason to assume a priori that the academic reform I advocate here must fail or, still worse, lead to a universal hypocritical semblance of reform akin to ones already accepted in some leading institutions of higher learning.
On the contrary, it is somewhat surprising that, of all the reforms due to the rejection of excessively high standards, the one advocated here still resists implementation. Considering the autonomy and intelligence present in Academe, the surprise is considerable. The reason for my wish to elaborate on it is that academics back their exceptionally high standards with the erroneous philosophy of science that they have inherited from the Age of Reason: they still view science as conforming to the highest standards possible; some of them consider it a substitute religion, sometimes even against our better judgment. The execution of the reform advocated here concerning the various fields of academic education and performance amounts to the replacement of the vestiges of Bacon-style standards with Popper-style ones. For, in methodology, it was Popper who argued that what the Baconian tradition demands from researchers is impossible: namely, full success. On the contrary, he said, science is an adventure; scientists must dare in the hope of achieving. Popper viewed his theory as a reflection of the way the institutions of scientific research encourage both bold ideas and criticism. Judged as history of science, his view is over-simplified and thus quite mistaken. Not that the institutions of science reflect the Baconian attitude; they decidedly try, and fail, to do this. Though they reflect Popper’s philosophy more adequately, their effort to appear Baconian makes them differ from his description—thus far. He is quite right in finding somewhere in the scientific research world some sort of encouragement for boldness, at least in that it remunerates certain bold people (Einstein) even in their own lifetime for ideas that Popper has rightly declared bold.
Still, the philosophy of science prevalent in the academic world is pseudo-Baconian, and many institutions reflect this prevalence—to the point that many researchers in them discourage boldness and even expressly assert that Einstein’s ideas were not bold in the least. Paper is tolerant. Popper’s plea for boldness and for tolerance thus needs balancing: encourage boldness and demand tolerance! Never demand boldness, and always insist on tolerance!
Academic education is these days largely a mass phenomenon rather than a service for some sort of elite or another. This magnifies the damage due to the prevalence of Bacon’s influence, in two ways. First, great crowds of students come from families with practically no intellectual tradition or scholarly background—from homes without books, sometimes. They are thus prone to swallow the Baconian tradition more literally and less critically. And even if practically everywhere in the advanced world Bacon’s ideas are (hopefully) on their way out, the new Baconians are still zealous. Second, the competitive character of the economy intrudes into Academe, both because in expanding Academe swallowed competitive sectors and because, in order to maintain itself in expansion and expand further, a university must compete. And so the Baconian system of rewards has now more kudos than ever—so as to impress the international press, national organizations, Senate sub-committees, and even members of alumni associations, former members of fraternities, parent-teacher groups of all sorts, and other bodies that threaten Academe with their naïve endorsement of academic mythologies.
Consider a cowardly academic who, as a professor or head of a department, exercises sheer tyranny over some students; suppose half of them drop out, some of them into mental homes and skid row (a few kinds of mentally disturbed students are attracted to departments with high-powered tyrannical teachers), but most of the rest do tolerably well, some succeed, and a handful receive national awards, one or two perhaps even international ones. You know that the tyrant in question and the department in question will count as big successes. Not only do successful students testify to the competence of their teachers; even the high rate of dropouts from a course shows that the standards it maintains are high. This is inevitable, perhaps; but you and I know that a good teacher is one who helps students improve, not one who can pick the most talented students and help one of them become top dog!
A minor point. The leading institutions of higher learning have statistics to support their claim to be the very best. Except that, in principle, any statistic that displays successes and ignores failures is fake. Public relations offices of universities and lobbyists for them are not likely to admit that their institutions are not perfect utopias. So do ignore their statistics: all statistics that come from that corner. Entirely. They come in good faith, but fake they are all the same. Yes, you are right: this includes the CV of your humble servant: you do not expect it to include records of my failures, do you? Let me tell you this story. In 1959, Popper finally got his 1935 magnum opus published in English. He then worked on a blurb for its cover, featuring passages from old reviews. Among these was a thoroughly negative review by the leading philosopher of science Hans Reichenbach. The publisher vetoed it. Pity.
And so to the moral of this section. Cowardice is not to blame. It is often the outcome of the demand for courage. This demand is often a cover for the cowardly postponement of confrontation with the Day of Judgment. Such postponement makes the very first confrontation of untrained novices a huge battle that they are all too likely to lose; after such a defeat, they readily learn to preach courage and other high standards so as to mask their own emerging cowardice and tendency to postpone all further confrontations. Worst of all, it is an inducement to replace all intellectual adventure with chimerical standards of excellence—with middle-of-the-road accepted standards of routine competence. Modern Academe fosters this inducement. Fortunately, the middle-of-the-road accepted standards of today are much superior to those of a few generations ago, not to mention those of the medieval university. (The demand for perfection forced its members to repeat received dogma as the safest thing to do.) No institutional frame is perfect: you may escape the training imposed on you, as I have done. If you stick with me, I shall try to help you acquire high competence by simple training, and only incidentally to your labor of love; on the way you will have to fight some small battles, perhaps; these you will be likely to win; the victories I hope for will not make you brave, but you will not have to be a coward looking for an ideological justification to escape any sense of shame. Intellectuals can be at peace with themselves. There is no law of nature against that—as long as you remember that there is no full insurance against failure, only commonsense safety-rules.
The cowardice I keep discussing with you is somewhat more abstract than, say, that of business people who will not take risks when they speculate in the stock market, much less that of a soldier on the battlefield. Individual cowardly entrepreneurs seldom express sympathy with other individual cowardly entrepreneurs; they would hardly ever jump to defend cowardice with the aid of an ideology. The same goes, more emphatically, for cowardice on the battlefield. The defense of cowardice is unique to Academe, and it is common there, as the following anecdote may illustrate.
In the early days of the twentieth century, Bertrand Russell did some interesting studies on the foundations of geometry. They led him to criticize the doctrine of Immanuel Kant as expressed in his celebrated Critique of Pure Reason and elsewhere. In a discussion following a lecture in which Russell expressed his criticism, a member of the audience said that, in defense of Kant, we must not forget that he was always exceptionally kind to his mother. Russell replied that he refused to believe that humanity is so wicked that kindness to one’s mother is a scarcer quality than the ability to develop new ideas about space and time.
Here the cowardly ideology shows itself in all its folly: the person in the audience assumed that Russell’s criticism was an expression of contempt and wished to keep a humane balance by mitigating that contempt somehow. Both these assumptions are unwise; what needs explaining is their hold on academics. My aim here is to observe that this ideology does have a tremendous hold on Academe, and that very few are utterly free of it. As Plato said repeatedly all his long life, since criticism helps us get free of some error, to be a target of criticism is fine. Go and tell this to the heathens. The widespread ambivalence towards criticism finds its clearest expression in the readiness to accept it from a celebrity but not from a cheeky upstart like you.
Johannes Kepler noted that even important people may use hostility or pomposity to cover for their intellectual weaknesses: “… many people get heated in their discussions …”, he says in his defense of Galileo; “… others try to impress people with too much grave solemnity but very often give a ridiculous impression quite unintentionally …” Here Kepler has the last word: humor, he said, is “by far the best seasoning for any debate…”; sincere interest and frank readiness both to acknowledge the superiority of others’ work over one’s own and vice versa are also important; but above all, the ability to gratefully acknowledge others’ valid criticism of one’s error is what counts. So says Kepler; and every reasonable person should concur.
Kepler’s is the last word on the subject of the present section: intellectual cowardice breeds hostility and pomposity, while sincere curiosity breeds intellectual courage. Hence, it is obviously and comfortingly less important to fight cowardice and its manifestations than to develop honest curiosity. There is no more to add to this insight of Kepler’s, and so this section is over. What needs further explaining, perhaps, is why Kepler’s rather obvious point did not prevail. Or is it obvious? I wish I could pretend it were not. All too often, a point I wish to make seems to me so obvious that I am ashamed I have to state it. What is more obvious, after all, than that we all err, scientists or not, researchers or not, academics or others, and that those who show us an error of ours open for us the door to improvement? Do I need to remind you that had King Lear listened to his critics, who were devoted friends, disaster would have been averted? Need I tell you that in the preface to his Apple Cart Bernard Shaw predicted the downfall of the dictators of both Germany and Russia—then at the height of their careers—because they buried all their critics? (Unfortunately, his prediction was too optimistic; but I resist the temptation to digress here into political philosophy.) The point I am making has appeared already in all sorts of literature, in all possible fashions. Plato’s Gorgias says it crisply and vividly. (I greatly recommend that slim book to you: it is most delightful reading.)
Yet, people fear criticism; people hate criticism; people build institutions and customs to prevent explicit criticism. (The claim that Popper’s methodology reflects existing institutions of scientific research is regrettably an oversimplification, you remember: science is unique in its encouragement of criticism, but it is not consistent about this.) People advocate and practice pomposity to the point, as Kepler observed, of becoming ludicrous in their desperate attempts to prevent or avert valid criticism.
Karl Popper was my teacher. He offered the theory that what distinguishes scientific theories from other theories is that they are amenable to attempts at refutation. To use Faraday’s idiom, refutations comprise empirical criticism. Scientific confirmation, or scientific positive evidence, Popper added, is failed criticism (and not the fruit of uncritical efforts to prove a theory correct). He suggested that we learn from experience not by searching for evidence that supports our theories but by trying to criticize them and thus transcend them. This way he earned an odd reputation: his friends and acquaintances in the Vienna Circle declared him a fellow who could not mean what he said, since he advocated constant and endless quarrels among scientists. Moritz Schlick, the leader of the Vienna Circle, considered Popper’s doctrine masochistic, since only a masochist would wish his views validly criticized. A few reviewers of Popper’s books later hit upon the same brilliant idea. It probably never occurred to them that though we may wish to be infallible, since we know that this is not the case, we might deem valid criticism of our errors second best. Since Socrates in Plato’s Gorgias had already made this point, and with all the clarity and vividness possible, one might expect the great lights of the Vienna Circle to have been a bit more up to date.
Reviewers of Popper’s books refused to see his point. Unless they had strong reasons for this, they were fools. Popper himself said repeatedly—and erroneously—that one should try ceaselessly to present the views of one’s critics as well as one can before criticizing them and before answering their criticism. It seems to me amazing that Popper’s works in this direction are not devoid of all interest and are even occasionally amusing to some extent: this only shows how hard it was for him to be boring or unentertaining; yet I cannot recommend his work in this direction to the reader, since it is much too sophisticated and it overshoots its mark considerably. One may indeed study these haters of criticism—their ideas are not more primitive, after all, than other systems of thought that have become topics of quite intriguing studies, for example, the systems of thought adopted by the inhabitants of the Pacific Islands or of the Vatican; but to reason with primitive people and to study them are two very different matters. On second thought, I am not being very fair in mentioning the Vatican in the present context, since it had for a Pope a person like John XXIII, who would have found it easy enough to converse with Kepler—more so, I dare say, than any critic of Popper who found Popper’s advocacy of criticism masochistic or cantankerous. To whom should we compare those who find in the welcoming of criticism chiefly a form of perversity?
It is hard to say. So let us begin at the bottom of the ladder, with the traditional officer and gentleman, that is, who could challenge to a duel and kill in cold blood any of his critics, regardless of the importance and the correctness of the critic’s observations. Such an officer and gentleman appears now only in literature; his hallmark always was pomposity. At least literature has won, ousting dueling by exposing it to public ridicule. The equation of the inhumanity of the dueler with his pomposity, and the contempt for both, are the themes of Pushkin’s Eugene Onegin of 1833 and of Lermontov’s A Hero of Our Time of 1841, not to mention Joseph Conrad’s derisive “The Duel” of 1908. Perhaps the pomposity of the dueler is much too excessive to illustrate Kepler’s point, and you may think that the modern professor will not serve as a good illustration of it either. See then Erik Erikson’s 1958 Young Man Luther:
Take the professor … A strange belligerence … leads him to challenge other experts as if to a duel. He constantly imputes to them not only the ignorance of high school boys, but also the motives of juveniles …
What is the difference between the dueler and that professor? Admittedly, the dueler is more harmful than the professor; this is a matter of the difference in the institutional means available to each to defend himself against criticism. As a defense against criticism, however, their reactions do not differ from that of a geography teacher I had in elementary school. He read from his text, “Burma is the second largest rice producer”. Someone asked, “Teacher, which is the largest?” He snapped back, “Shut up!” His response was short and effective; if provoked more, he might have reacted more violently. My university teacher who had to impart to me the mysteries of quantum theory was not as ignorant—he was an authority—but he had to silence his students too; it took me years to see that he could not answer our questions. Presumably, he could not admit this openly.
Here at last we do come to a fundamental difference between the dueler and the professor: the dueler defends his honor, whereas the professor defends the cause of science or some other noble cause. In this respect my elementary school geography teacher—gentle and kind as he was—may be classed with the officer and the gentleman, while the professor may be classed with the dogmatist and the fanatic.
You may easily challenge this distinction. The poor elementary school teacher who has to teach a subject in which he has no interest was acting in self-defense when caught not having done his homework. Is the very up-to-date professor not acting similarly? Since professors claim to have done all their homework, they can claim to know all that is worth knowing—which is a better line of defense; and this is the same as allying oneself with science. “I am science” is as pompous as “I am the state”, if not more so. Perhaps the geography teacher was defending his position and authority as a teacher—for the benefit of the school and its pupils; the officer was possibly defending his uniform; the gentleman may sustain the social order; the professor defends the commonwealth of learning.
It is hard for me to argue the pros and cons of the last two paragraphs. The problem is too intricate and the returns seem to be diminishing fast. I can see that I am boring you, and I cannot blame you; I have myself agreed that Kepler was quite right: we need not elaborate the negative side here but may dwell on the positive.
To a positive instance then. The lifelong debate between Einstein and Niels Bohr is a classic. It fascinates outsiders not only because of its philosophical content, not only because it displays satisfactorily (to outsiders, not to physicists) that unanimity in science is an overblown myth, not only because it is about some of the most deeply felt transitions in the twentieth century, not only because it was conducted by two people who were famous in their own right; the debate fascinates outsiders by the manner in which it was couched: it displayed a determined and persistent intellectual opposition on a large scale between two people of almost saintly dispositions, of unusually amiable nature, who belonged to the most exclusive mutual admiration society of their century, whose membership consisted of these two alone: for Bohr, Einstein was science personified; for Einstein, Bohr was musical—even more than Kepler. (Yes, I am speaking of the music of the spheres again, of course; but the expression “musical” is Einstein’s: his application of it to Bohr appears in his scientific autobiography.) The outsider has the picture right; it is the ordinary physicist who shows a surprising lack of interest and appreciation and is more than willing to dismiss the whole debate with the feeling that, evidently, Bohr won it to the utmost satisfaction of everyone. In addition to being a narrow-minded view of the matter, incidentally, this is false: both parties seem to have lost—in the sense of having held opinions whose shortcomings have meanwhile become apparent, not in the sense of having lost a duel or a contest. Let me conclude this point with the observation that the proper spirit of the debate between Einstein and Bohr has done more for the advancement of learning than its content, which, regrettably, towards the end of their lives turned stale.
It is hard to avoid pomposity and hostility. I do not think I can help you—or myself—to be above it as much as Spinoza or Einstein. All I do is suggest that Popper’s methodology may help us overcome some of the worst obstacles on the way to reducing pomposity—obstacles that have no business still being around so many centuries after Kepler. It is not that without Popper’s methodology we cannot remove them, but that with its aid the task might be easier to perform successfully. For the root of the difficulty of overthrowing certain pomposities and hostilities is indeed methodological.
This section is boring. Let me briefly remind you that those over-confident of their own rightness express implicit contempt towards their opponents; I have discussed this corollary, which Bacon stressed, and you can easily see for yourself that it is equally true of matters civil or moral as of matters scientific or religious. In matters other than philosophical, it is so obvious that even Josephus Flavius, by no means a great light, noticed it: in the very opening of his Jewish Wars he says, some Romans tend to belittle the Jewish army, not noticing that thereby they also belittle the Roman army against which the Jewish army held out so well and for so long. What is so obvious in that context should be equally obvious in a scholarly context. Perhaps I can interest you a little bit more in this so obvious point, then, by a somewhat piquant observation of a pompous element in Popper’s philosophy itself. I cannot say that this philosophy is very popular, nor do I contend that the pompous element in it is in any way essential to it; what I would say, however, is that inasmuch as Popper’s philosophy has gained any popularity, it is that part of it that has become better known which I consider pompous. It underwent vulgarization in an even more pompous fashion than its original version, but this is less interesting or piquant, since deterioration through vulgarization is the norm.
Popper’s earliest philosophical studies consisted of work on two problems, of the demarcation and of the methods of science. I have told you already what his theory is. He says the method of science is, in general, the method of criticism, and in particular, of criticism in which new experiments serve as new arguments against the best theories extant. We learn from experience to correct our errors. As to the demarcation of science, the question of what theory rightly merits the title “scientific”, I have quoted his answer at the beginning of this section; it is almost present in my last sentence above: a theory is scientific, says Popper, if and to the extent that it is open to attempts at criticism of it—by recourse to experience. Whether the attempt be successful or not seems less relevant than whether it is at all possible. Thus, says Popper, you may try to criticize Einstein’s theory of gravity by designing certain sophisticated observations, such as Eddington’s observations. Thus far, experience has refused to condemn Einstein’s theory; if it is true, this will never change; it is nevertheless conceivable that the theory is false, and, if so, quite possibly one of these experiments will one day come out with results significantly different from those that the theory leads us to anticipate. Thus, Einstein’s theory is in principle open to empirical criticism and so it merits the title “scientific”. Not so Freud’s theory. Nobody ever deduced from Freud’s theory a prediction and then checked that prediction. The ease with which Freud and his followers could fit facts into their schemes ensures success a priori; there is no hope for an opponent ever to pin down Freud, Popper said, and thus to force Freudians to change their minds in the light of experience. Hence, Popper concluded, Freud’s theory does not merit the title “scientific”.
All this is just terrible. It is hardly possible for me to understand how Popper could stick to his view of his ideas on the topic as significant; it is hardly possible for me to understand how he could ever mind which theory merits the title “scientific” and which does not. Such a pompous question! I confess, I did find this question very important. I can explain psychologically and socially how I came to be interested in this pompous question and how it ceased interesting me, and I think I may generalize. My explanation is that I got interested in such a pompous question since it is rooted in the role of pomposity in our society. Let me elaborate on this. The question of how and why our society maintains pomposity I shall discuss afterwards. Let me begin with Popper’s response to my challenge.
Popper always said, words do not matter. The word “scientific” is no exception. He said, his study of the demarcation of science is important as it answers Kant’s question, what can we know? The answer to this question is Socratic: we cannot know where the truth resides, but only where it does not: we can refute some hypotheses but verify none. This answer is terrific; it presents what we know as science as a version of Socratic dialogue. I do not know how you find this; speaking for myself, it is very much to my liking. For myself, allow me to add, this is what in Popper’s theory of knowledge appeals to me most. (This is why I consider myself his disciple.) It is not the whole of his theory of knowledge. In particular, it involves a rejection of his theory of pseudo-science, of his condemnation of the ideas of Marx and of Freud. If you want to condemn these, you need better arguments.
The core of Popper’s theory is admirable. As long as you are an ignorant young rebel, all well-wishers, and all those who think you may be a good student (whether they wish you well or not), start quashing your rebellion by an appeal to the authority of science. The authority of science may or may not be bigger than the authority of religion, of society, or of parents and teachers; it matters not, since the bigger the authority, the bigger the menace to freedom it is; and inasmuch as science is authoritative, we should simply condemn its authority in the name of freedom. Thus, efforts to quash rebellion in the name of the authority of science merely seem to side with science; in truth they side with those who condemn it. Let me be quite clear on this point. It is my honest and considered opinion that no human venture and no human conduct is free of evil, and hence there is evil in science. Popper’s theory presents science in such a way as to define all the evils possibly associated with science—such as the dogmatic pomposity of my science professors—as one way or another not part-and-parcel of science, but mere accidental accretions to it. To come back to our young rebel (such as Popper was, for instance) who accepts the authority of science—in ignorance, inexperience, and fear of seeming too quixotic. Once the authorities have bullied you that far, it seems only reasonable to check their credentials; once you wish to make your checking of credentials effective, you ask yourself what makes for kosher credentials. This is precisely the problem of demarcation of science.
Do not sneer at the pompous guardians of the status quo too easily; many of them were young rebels suppressed by earlier pompous guardians. This is, again, the traditional-Chinese mother-in-law syndrome; you may profit more from studying its mechanism and ways of neutralizing its ability to crush you than from sneering at its victims. Remember: this is a report of an empirical observation: the more violently the daughter-in-law fights her mother-in-law, the more she is prone to become in her turn a dreadful mother-in-law herself. Such things did happen, and they still do, although how frequent they were is unclear, since the exceptions were kept secret.
Popper himself rebelled against the claims of Freud and of Adler for the authority of science. To criticize young rebels is unfair: there are so few of them, we have to encourage them first. Still, Popper was in error. When he spoke of Einstein’s general theory of relativity, he was as clear as possible: it has a canonical version. Admittedly, there are variants of the theory (with its lambda equal to zero or not), but this does not affect its scientific status. When Popper spoke of Freud’s theory, however, he never specified what it is, and it has no canonical version. Admittedly, this is a general defect. Adolf Grünbaum has devoted a whole volume to the criticism of Popper on psychoanalysis, yet he had no room in it for specifying the content of the doctrine (which he also rejected, though for lack of empirical support). This is no small matter, since a major item in Freud’s theory was the place of catharsis in it. The neurotic patient fears the memory of the trauma that has generated the neurosis; the end of the psychoanalytic encounters is for the neurotic patient to overcome this fear; when this occurs, the patient experiences a kind of excitement that Freud called catharsis and is thus ready for self-cure. Freud sent home such patients, only to discover that they were still neurotic. He then changed his mind. This renders the version of Freud’s theory just worded here scientific à la Popper. That version, however, is not canonical. What version is? I do not know. Popper was thus more than justified in studying the nature of claims for scientific status. His conclusion in this matter is a variant of the one that the great William Whewell had made a century earlier, when he said that experiments that cannot possibly lead to the elimination of a theory cannot possibly lead to its confirmation either.
Whewell was not as clear as Popper was, and he was in error when—quite unlike Popper—he ascribed authority to scientific confirmation, in the belief that a properly confirmed theory can never be overthrown. Yet Popper, like Whewell, handled together the demarcation of a scientific theory with the demarcation of scientific confirmation.
It is hard to disagree with Whewell, Poincaré, and Popper: proper confirmability without refutability is too cheap. It is harder to disagree with Popper that loose confirmation is always available. We have now narrowed considerably the range of scientific confirmation and thus the range of the authority attached to it. Why then do we attach authority to confirmation? Most philosophers say, because we learn from it. Popper rejects this; like Socrates, he said, I hope you remember, that we learn from experience by empirical criticism, by refuting experiences. We learn from confirming experiences that we have failed to refute, that in efforts to refute we should look elsewhere. Yet for about thirty years Popper never said what is the good of confirming experiments and whether it is authoritative, and if so why. I admit that an unprejudiced reader may concur with impressions shared by most philosophers of science I have come across, namely that Popper did ascribe some authority to confirmation.
Now we all treat some confirmation as authoritative; perhaps those who viewed with disdain Faraday’s assault on so very well confirmed a theory as Newton’s spoke with the authority of science; perhaps those who resented Einstein’s upsetting of all physics (at least prior to the confirmation of his own views), or Planck’s, or Bohr’s, also spoke with the authority of science. If so, then that authority is plainly evil. Paul Feyerabend said, people committed much evil in the name of science. This won him much acclaim. The acclaim was due to the reading of his text as opposed to science, which he carefully allowed for. This is cheap.
Of course, Popper did not have in mind the authority that Feyerabend opposed; which authority he did have in mind, however, he did not say. He was touchy on this point, and I could seldom talk to him about it; most of the time, I regret to admit, he was not on speaking terms with me—he insisted it was not because I disagreed with him, you could guess, but for quite different reasons. I am sure you take delight in gossip and I would not deprive you of the pleasure; but I do not quite know the details. Popper had assumed that I knew the details that made him angry with me, because he took it for granted that some mutual friends who were academic guardians of high standards had informed me: if you have a guardian of high standards for a friend, who needs enemies. The real cause of his hostility is that I published my criticism of his views rather than let him retract his error through personal contact, as befits teacher-student relations as he saw them.
Again: confirmation is authoritative; yet those who look for the authority of confirmation will not find it in science. It lies in politics, more specifically, in the law of the land. The law of the land should not appeal to the authority of science, as this does not exist. To be explicit, the law cannot appeal to personal convictions, as the democratic principle of freedom of conviction guards them; the law cannot appeal to researchers, as scientific research must stay free. The law can appeal to applied science, to the applications of science. The law (in modern democracies) demands that claims for the benefits of new applications of science must be confirmed, because this way we eliminate new applications of science that are harmful before they cause too much harm. We do not always succeed, as the case of thalidomide proves. Perhaps the Nazi philosopher Heidegger was right when he declared applied science detrimental to the future of humanity. We just do not know. We hope he was in error and we will do our best to refute his view. For this we should see that the authority that confirmation has is the authority that we bestow on it. This will facilitate criticism of our methods of confirmation and thus their improvement.
To show how interest in confirmation may be due to public pressure based on the authority of experience, we may take the case of a person who did confirm his own unorthodox views and did feel proud of his ability in this direction, yet who never thought any confirmation is more than of transient heuristic value (“heuristic”, akin to “Eureka!”, means pertaining to discovery; heuristic value is possible usefulness for research purposes). I am speaking of Galileo. He thought Archimedes’ work was as geometrical as Euclid’s and thus equally doubtless; yet he did confirm it with a most impressive experiment when court intrigues in 1612 forced him to do so in order to save his job as a court scientist: he showed that a piece of wax with some copper dust stuck to it may float on water yet sink when one single additional speck of copper is stuck to it. He held in principle the same view of the Copernican hypothesis and hence considered his own telescopic observations as well as his own theory of the tides to be confirmations of it—and as such he viewed them as useful, for the interim period and for social reasons mainly—in order to convince the heathens. Inasmuch as they had intellectual value of greater durability, Galileo saw them as criticisms, as a shaking of accepted authority, rather than as lending authority to his own view. Even in his wildest dreams he desired to achieve only permission to believe in the Copernican doctrine and to have free debate; even in his wildest dreams he admitted that the Copernican doctrine had not yet been logically proved—as it should be, he said; with all his powerful sarcasm and raging bitterness he hardly ever expressed in writing a single statement with authoritarian overtones. Individual autonomy was what he preached.
To return to Popper. It is a biographical fact, beautifully narrated by Popper in his “Personal Report”, that he developed his view on scientific confirmation and on the demarcation of science to break away from the authority of Alfred Adler, under whose tutelage he worked educating deprived youths in Vienna as a volunteer when he was a young lad in his teens. Adler tried to impress Popper with his “thousand fold experience”. Yet the “Personal Report” is not sufficiently anti-authoritarian to my taste, since it merely shows that the credentials of the applications of the theory of Adler are not half as kosher as those of the theory of Einstein. He evidently deemed Einstein’s view authoritative. I do not.
As examples of theories that make a claim to the title “scientific” quite without justification, as examples of pseudo-scientific theories, that is, Popper chose astrology, Marxism, Freudian psychoanalysis and Adlerian analytic psychology and, though by implication only, theology proper too. I say by implication only because Popper was vague here, as I have tried to show elsewhere: he never clearly distinguished between pseudo-science and non-science, and yet it is a clear fact that nowadays, unlike in the Middle Ages, theology is non-scientific yet hardly pseudo-scientific: it makes no claim to the title “scientific”. In his early works, at least, Popper displays the tremendous ease with which he was willing to dismiss all pseudo-science, and in the same breath theology proper. Later on he showed more tolerance towards theology and even respect for some metaphysics; he remained adamant in his contempt for pseudo-science. Is all pseudo-science as contemptible as astrology? Is ancient astrology as contemptible as current astrology is? He did not think so: the affirmative answer to this question was too pompous for him, even if only by implication.
Admittedly, Freud and Adler had the mark of the phony in them; Popper has rendered a great service in pinpointing this, and in showing the imperviousness of their ideas to criticism. After all, Freud and Adler did not argue in exactly the same detached manner as Einstein and Bohr did; Popper has made much of this fact. Yet to class psychoanalysis with astrology on the strength of this fact is excessive.
Example: in Psychopathology of Everyday Life (1901), Freud reports having rightly suspected that a woman had marital troubles because she mistakenly used her maiden name. This statement is pseudo-scientific: one cannot generalize it. Yet Freud’s observation was astute. In detective stories, this kind of observation serves as a clue, although it will rightly not hold in court. One may propose to view a clue as confirmation when it turns out to be right and to ignore it when not. This is wrong: it is better to say, at the time it looked important to me, yet it is not. The detective knows that this is possible, and this is the reason for the search for better evidence. Freud did not behave as honestly as the upright detective of the classical detective novels did. That is strange, because Freud himself rightly analyzed in this way the treatment of premonitions, which he dismissed as superstitious.
We face two problems here. First, what is the problem of demarcation of science sans snobbery? The answer is obvious: why do we appreciate science? Second, what is the authority of science? The answer here is also obvious: the law recognizes general claims after they undergo scientific confirmation. Why? The answer here is a little less obvious. To see it we should consider not success but failure. Consider inquests. In an inquest into a disaster, we expect an answer to the question: was the accident in question due to some negligence? This is why standards of confirmation have to improve. For example, only after the disaster that thalidomide had caused did the suggestion come up to test drugs for possible harm to fetuses. Only then did it become negligence to harm fetuses by medications; not before.
Adler’s theory has drawn much less attention than Freud’s, and for a few obvious reasons: it is less strikingly original than Freud’s, being (intentionally) a variant on it; also it is less provocative and more in accordance with common sense, since Freud crudely equates love with sex, whereas Adler equates it with acceptance or approval in a much wider social sense (sex included, of course). Freud’s theory is deeper, in the sense of being more Darwinian, in its basing conduct on two basic biological drives—for food and mating. Freud believed in the reduction of all science to causal theory and much appreciated the Darwinian causal theory of natural selection. However, he noticed that we still lack a causal theory of basic animal drives; taking it on trust that this theory will appear one day, he allowed himself to use it. Adler’s theory is more worldly and it leaves fundamental questions open quite deliberately. Moreover, Freud’s theory is much more piquant, and lends itself to a wide variety of applications outside of psychology, from parlor games to high literature and literary criticism. The worst thing about his theory is its view of women, as both Melanie Klein and Karen Horney have argued in detail.
Adler’s merit is there all the same. The hysterical Victorian aunt moved under his impact from high literature to melodrama to burlesque to oblivion; you are most unlikely to have an aunt who, feigning shock or feeling neglected, says the famous punch-line, “Darling, I think I am going to faint.” Your grandparents still had such aunts; not you. Attempts to draw attention by fainting, temper-tantrums, and similar gags were good enough for Napoleon Bonaparte and for Ludwig Wittgenstein; today they are considered too silly to be of any use, and those in dire need to draw attention have to use less obvious ones (such as hypochondria or mild paranoia). That is Adler’s prime achievement. It was quite considerable. For, the fainting aunt was much more miserable and much more of a nuisance than a first glance may suggest.
(The same considerations go for conversion hysteria, to wit, hysteria with symptoms of physical illness and paralysis. Since this was the object of Freud’s early studies, much interest and a vast literature exist concerning its disappearance. I do not wish to bother you excessively with psychiatry.) The drive for recognition is still there. According to Adler, it is a version of the search for attention and for care. Adler was concerned with the pathological manifestations of this drive, the so-called Inferiority Complex, which he considered a proper substitute for Freud’s Oedipus Complex as the primeval neurosis. Both complexes are observable; the disagreement here was metaphysical: which complex is basic? (Freud, On the History of the Psychoanalytic Movement, 1917.) Adler limited himself to the effort to amend Freudianism rather than build an independent system or solve problems unsolved or unsatisfactorily solved by Freud. Following the line of Freud’s Psychopathology of Everyday Life, of Freud’s insistence that the distinction between the normal and the abnormal is a matter of degree, Adler found inferiority complexes everywhere. On this Popper has scored a clear point: the literature admits his critique, though not always expressing the gratitude to him that we all owe. It overlooks his perceptive remark that in all likelihood Freud was a Freudian type and Adler an Adlerian type.
The Freudian claim (to be precise, Jean-Martin Charcot and others made it earlier) that mental abnormality is a matter of degree, is a profound insight and a challenging thought, but quite obviously a half-truth at best. It is an open secret that psychoanalysis of any kind has abysmally failed in matters of psychosis or mental illness proper (whatever that may be) and in other strange phenomena such as epilepsy. Although quite a number of writers have commented on the connection between Freud’s theory of degrees of abnormality and this failure, it still invites exploration. My point here is different: it is perhaps the comment of Buber and of Sartre on Freud’s theory: that theory is a half-truth, at best, in that it has no room for the study of the workings of the sense of responsibility. We all agree that a child is neither responsible nor irresponsible—that a child cannot be held responsible, to use the legal expression. In addition, traditionally, the law puts the mentally ill or the deranged in the same category as the child, together with the mentally retarded. What Freud has claimed is (though he was himself ambivalent on this point) that we all are child-like in spots, so that the difference between the child or the neurotic and the responsible adult is a matter of degree. Here he is at best only half right since there is a crucial difference between those who lapse into child-like inability to face responsibility in spots and those who are constantly child-like, either from not having grown up or from having relapsed all round.
When Dr. Thomas S. Szasz declared that mental illness is a myth, he had a complicated message. When we pay indulgent attention to hysterical fainting, Alfred Adler had observed, we encourage the development of hysteria. Similarly, says Szasz, we encourage schizophrenics to act irresponsibly in order to make us place them in the category of those whom we do not consider responsible agents. Others, such as Sheldon Glueck, attempt to reconcile Freud’s theory of gradations of normalcy with the existence of responsibility (in adults but not in children and not in psychotics) by the introduction of degrees of responsibility.
Correct or not, this affords us a new criticism of Adler, and even of Freud. It is this. The claim that we are all children who constantly crave the care, attention, acceptance, approval, support of others, and what have you, harbors a serious confusion of the dependence on others that the immature display with the need of the mature to face society and to belong to a society—as a coordinating system, as a standard for what is reasonable and responsible, as a standard of sanity. Here, perhaps, we even touch on a serious error of Schopenhauer that Freud may have inherited from him. Schopenhauer saw an affinity between madness and genius. This is terrifying regardless of the evidence for it and against it that constantly accumulates. We must admit that a genius may find the strain of loneliness too great, both psychologically and intellectually. Nevertheless, the genius is mature in the sense of having a sense of responsibility, even though in loneliness the genius may need a greater maturity of judgment and thus have to undertake great efforts to achieve it. As geniuses seek the normal assurance from others’ responses, they find it harder than usual to attain; and then they may, but need not, lose their sense of purpose. They may then find themselves in need to sacrifice ordinary companionship. The romantic philosophers whom Schopenhauer wished to combat declared this the test by ordeal that they viewed as imperative for geniuses to pass before they may deviate from received norms, since only then are they able to contribute their innovations. The romantic theory sounds radical but it is conservative in its permission of deviation only after the test by ordeal has been passed. Hence, Freud and Adler were romantics against their wills; they were all too ready to identify genius with mental aberration.
The wish for recognition or acceptance fits the ideas of Freud and Adler; and so all of us fit them to some extent. This goes for all occupations, and manifests itself differently in different ones. Academics who lapse into the neurotic need for acceptance may distort or ignore an idea or an experiment; more simply, they may plagiarize or become pompous defenders of the Establishment, especially while developing neurotic self-mistrust, like the professors whose honesty prevents them from defending poor students against obvious injustice. Common as the phenomenon of the desire for recognition may be, it is nevertheless cheap to pretend that all its manifestations belong to one simple category. There is too little written on this subject to allow me to elaborate at any length, but I can barely refrain from briefly referring to the cases of David Hume and of Claude Debussy. Both felt the problem and frankly expressed bewilderment concerning it. Hume was exceptionally honest and unusually at peace with himself. In his brief autobiography he confessed that the motive for his writing was his love of fame, yet he also reported there that he was a happy and contented person although his philosophical work “fell dead-born from the press”. The case of Debussy is similar. He had a reasonably high opinion of his own output. He did not think much of public acceptance or rejection, especially since he was unusually familiar with the history of music and knew how much great music remained unperformed for lack of the technical possibility to perform it all (this was before the invention of the phonograph). He was sufficiently open-minded to recommend publicly that the Paris Opera give the best possible performances of Wagner’s operas although he sincerely loathed them; and he enjoyed music of both the highbrow and the lowbrow sorts, judging each piece by its composer’s intent.
He viewed aggressive public rejection of a composition as a great honor bestowing severe obligation on its recipient; yet he admitted that he was quite ambivalent towards signs of recognition, feeling averse to the pomposity, boredom, and vexations it entails but being unable to “be insensible to that little ray of provisional glory”.
As self-critics, both Hume and Debussy were too ruthless. Hume’s claim that his motive was a love of fame comes from Bacon’s philosophy, and Debussy, too, was subject to the influence of the same stock of ideas, at second and third hand. Joseph Priestley, no less, dismissed Hume, saying that one should not expect much from the pen of a philosopher who shamelessly admits being motivated by the love of fame. Already Kant expressed his surprise at the cavalier manner in which Priestley dismissed Hume (he rightly admired both of them). As to Debussy’s critics, the less said of them, the better. To show that their excessive self-criticism was mistaken, we may use the case of Einstein. He received as much fame and glory as any scholar could ever get from peers. It embarrassed him deeply and he did not wish for more. Yet, he confessed, he was always lonely, and early in his life he suffered from this loneliness. Approval he received, but a society to give him standards of reasonableness he seldom had, and he benefited from the company of very few people who could match him, such as Planck and Gödel. What Schopenhauer saw, then, in the similarity between the genius and the lunatic is there all right—both constantly struggle, as they have no accepted standards to rest on; but the one is above accepted standards, the other below. Small difference. Saul Bellow’s Henderson the Rain King (1959) observes:
when I say that he lost his head, what I mean is not that his judgment abandoned him but that his enthusiasm and visions swept him far out.
Here we have come, quite incidentally, to the kernel of truth and to the obscurantist aspect of the philosophy of Ludwig Wittgenstein, one of the most popular philosophers of the twentieth century and one of its greatest prima donnas; his closest pupil and biographer, Norman Malcolm, viewed him as a casualty of both a touch of genius and a touch of madness. Philosophical doubts, said Wittgenstein, are spurious because they are peculiar: unlike other doubts, which touch upon specific topics, philosophical doubts touch upon the very framework of our activities, intellectual and otherwise. All his life he expended enormous effort to show the obvious, namely that philosophical doubts are indeed peculiar in their generality; he said almost nothing to support his claim that hence we should ignore philosophical doubt. What he did say, namely that we constantly do operate within socially accepted standards—even in our innermost thoughts—is admittedly usually true. Yet when we look at the greatest geniuses among us, at the Bacons and Descarteses and Humes, at the Debussys and Einsteins too, we see people who try hard to break away from accepted frameworks in efforts to create ones more adequate for the problems that they struggle with. They may or may not suffer loneliness and pains; these pains may be necessary or unnecessary; they are, however, all too understandable. For them Wittgenstein’s cure—indifference to fundamental problems—will not do at all. Fortunately for us, the Einsteins among us totally ignore him.
We are almost ready to return to the opening question of this section. Let me remind you of a brief anecdote about Russell’s public discussion of Immanuel Kant’s theory of space and time, and of the member of the audience who observed in response that Kant loved his mother. Now this observation is hard to defend but it is easy to empathize with. Most of us feel criticism as censure and rejection. The reason may very well be that as children we tend to be stubborn and ignore criticism unless it comes with great pressure. As children, we suffer both from the lack of attention and from the lack of standards—we feel a loss and we feel at a loss. These are two very different pains; in our trauma, we tend to fuse them. Freud never distinguished them. He was the last believer in the noble savage, and when he feared that the ideal of the noble savage did not suffice, he simply plunged deeper into his Schopenhauer-style pessimism. The part of the received standard that he endorsed, especially concerning everyday matters of fact, he deemed as belonging (not to society but) to the ego. In this, again, he was deeply and quite mistakenly under the influence of Bacon. The very idea of the noble savage, of people not infected by common superstitions and so intellectually superior, is a Baconian myth.
For the purpose of emotional hygiene, there is admittedly not much need to distinguish between the child’s loss of attention and loss of guidance that come as penalties for stubbornness; but for the purpose of intellectual hygiene the distinction—between being lost and feeling lost—may be important. (This echoes Buber’s critique of Freud: he did not take notice, Buber observed, of the difference between guilt and guilt-feelings.)
As far as child-psychology is concerned, most experts recommend the avoidance of traumatic events in a child’s life. We might expect them to have some impact and cause some improvement here. Oh, no! We have our old acquaintance the ideological academic bore pompously guarding the institutions fostering damage and insisting on penalizing every stubborn rebel. If you criticize a public figure openly and frankly, then the bores will try to make you understand that in doing so you bar yourself from ever gaining public recognition; and if an accepted public figure criticizes you, then the bores will want you to feel that your career has ended. If you escape censure, then the Establishment penalizes you, ignores you, or crowns you as a new public figure. The latter option is not for you. At least not yet: you must learn to be patient and wait for your turn! If you have any merit, you can be sure that we will recognize you in good time, they promise. Do not hate them: they know not what they do.
It is easy to get rid of the pompous bore. All it takes is a social reform. Pompous bores are docile and they will follow the rules even when these are disagreeable. Would it not be nice if every person whom a Fellow of the Royal Society attacks in the academic press were automatically made a candidate for a fellowship in it? This would remove the ambivalence of the Fellow who launches the attack and thus remove the need to justify the attack by including in it the customary expressions of pomposity and of hostility to opponents. It would make it preferable to present criticism openly and render it desirable to be a target for an intellectually decent criticism. It would make Fellows more discerning in their choice of targets for criticism and induce in them the disposition to choose opponents their own size. It would present criticism as a precious act to administer with some measure of discrimination rather than as mud to sling.
I may be seriously in error, of course. Still, in my view, for what it is worth, the Royal Society was once a tremendous institution, bold and experimental both in its scientific ventures and in its social role of creating incentives for the advancement of learning. You do not think the Royal Society is going to accept my proposal? You think it will meet with pomposity and hostility? Ah, well, you may be right. No matter; we can build our own mutual admiration society, you and I, where members pride themselves on the ability to parry well, riposte well, and willingly admit a hit like any regular fencer. (Fencers know that as long as you have the slightest disposition to conceal that you were hit you are still a novice; the same holds if you claim that you have hit your opponent rather than wait patiently for an admission of it. How Socratic!)
In the meantime, let us attend to the moral from this section. The pomposity and hostility of an academic often result from old pressure by pompous seniors. Fear of criticism leads to pompously authoritative modes of speech plus the expression of contempt towards opponents. This is why it is advisable to try to ignore thinkers’ pomposity and authority, even their occasional display of a phony streak. (The refusal to ignore pomposity is pompous.) Again, the question is not how authoritative what one says is but how interesting and important. If it is neither, it should not be criticized—at least not in Academe; and if it is criticized only because it is interesting and important (though false or at least possibly false), then criticism is a compliment. The ambivalence of academics towards criticism (that various institutional devices enhance) we should laugh out of court the way our ancestors did with the popular custom of dueling. That mature thinkers view criticism as complimentary is manifest in various cases, especially in the dispute between Einstein and Bohr. Still, so many thinkers pompously declare that Pasteur’s verbal violence towards his opponents is impossible to compare with the violence of the officer and gentleman, and that we cannot possibly try to emulate such giants as Einstein and Bohr.
Let me be clear: in my view, controversy is the heart of science. This idea is not popular. Even my teacher, Karl Popper, who declared the disagreement between Thales and his followers the dawn of science and who always encouraged controversy, even he did not assert that controversy is the heart of science. He demarcated (not controversies but) theories as scientific and non-scientific. My interest is broader: what makes one controversy important and another dull? For example, compare Galileo and his adversaries to Newton and Leibniz. The latter controversy fascinated even some interesting sworn Newtonians. Einstein, who viewed himself as a follower of Leibniz, had to explain why at the time Newton won the debate. In contradistinction, Galileo’s adversaries are remembered, if at all, because he argued against them magnificently. Nobody discusses the ideas of these famous adversaries: they now go ignored. What differentiates the two controversies? I do not know; it keeps fascinating me. The adversary of Newton, Leibniz, kept engaging thinkers for generations; not so Galileo’s adversaries.
Thus far, I have spoken against the pompous insistence on the maintenance of certain high academic standards and of the evils that this might incur. Of the standards themselves, I have said little, except for the alleged standard of objectivity that academics use to force peers to render their discourse boring. Standards are artifacts; as such, they may be unsuitable. Orthodox adherence to them may thus compound their faults and the damage that they incur. To return to the historical example I have mentioned in passing, it was Robert Boyle who instituted the demand that speculative papers should include some experimental results.
Boyle did that while he was registering a plea for tolerance. In his Proëmial Essay to his Certain Physiological Essays of 1661, he complained of the existence of a certain dogmatic adherence to the new mechanical philosophy. (This is the theory of Descartes of the world as consisting of purely rigid bodies that can only collide with each other or else move in uniform velocities.) He referred, as an example, to some medical work published some years earlier that passed unnoticed by the learned public because—he suggested—its language was not that of the mechanical philosophy. Historians of science have declared an anonymous seventeenth-century work on medicine Boyle’s earliest publication. If so, then he was familiar with the dogmatic dismissal of works and was probably referring to his own personal experience; in the said Proëmial Essay, he argued for the possibility that a work not following the mechanical philosophy may be of use even if the mechanical philosophy is true. For, if true, a statement of fact or a theory must be explicable somehow by the true mechanical philosophy. Our present ignorance of this explanation does not detract from its truth. For example, he noted, nobody was then able to explain mechanically the phenomenon of elasticity. Springs are elastic. If we explain the behavior of air on the assumption that the smaller parts of air are springs, we may learn something about air—for example, that (at constant temperature) its pressure is proportional to its compression. (This is the celebrated Boyle’s Law.) Our theory of the air will thus be a theory of elasticity and thus not yet mechanical. However, if the mechanical philosophy is true it must in principle explain elasticity. Hence, the disregard for a study because it is cast in a language other than that of the mechanical philosophy is shortsighted. Even if a theory is false, said Boyle, it may lead to interesting experiments that critics have designed in order to refute it.
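(The law just mentioned can be put in a single line. The sketch below, in Python, illustrates it with numbers invented purely for the example: at constant temperature, pressure times volume stays fixed, so pressure rises in proportion to compression.)

```python
# Boyle's law: at constant temperature, P * V is constant for a fixed
# quantity of gas, so halving the volume doubles the pressure.

def pressure_after_compression(p1, v1, v2):
    """New pressure when a gas at pressure p1 and volume v1 is brought
    to volume v2, temperature held constant (p1 * v1 == p2 * v2)."""
    return p1 * v1 / v2

# Compressing air at 1.0 atm from 10 L to 5 L doubles the pressure.
p2 = pressure_after_compression(1.0, 10.0, 5.0)
print(p2)  # 2.0
```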
Boyle noted that in those days there were more people ready to speculate than to experiment. Therefore, he suggested, it is important to encourage experimentation and discourage speculation. So let us agree, he said, to accept for publication any paper that is not too long and that contains some new experimental information.
Boyle did not object to brief speculative works. He published such works, though surprisingly rarely. Soon The Philosophical Transactions of the Royal Society and its French equivalent appeared. They made it a standard policy to publish as little speculative material as possible. To make things worse, natural philosopher (physicist, we would say today) Benedict Spinoza, a rather controversial thinker, submitted to the Royal Society practically at their invitation a paper on the relations between wholes and parts—highly speculative, need one say—and the Society almost published it but did not. Boyle was the least pompous and perhaps the kindest Fellow of the early Royal Society. His uprightness and moral and civil courage were never questioned by friend or foe. Yet he was not brave enough to publish Spinoza’s note. He also showed uncharacteristic pomposity and hostility towards opponents of the Royal Society. True, he welcomed criticism and never had any problem with it as far as he himself was concerned; he was very generous in acknowledging other people’s discoveries and he was extremely frank about his past errors. He was extremely sensitive, however, to his colleagues’ sensitivity to criticism. He repeatedly explained that criticizing a colleague openly might cause resentment and so lead to mere discouragement. Since Boyle’s philosophy of the social foundation of science was very clear on this matter, since he clearly thought that only amateurs could be the disinterested carriers of the burden of research, he found it most important to entice people to invest effort in research. Not only in matters of criticism but also in all other respects, his aim was to make people perform experiments and publish their results. One of his subtitles, “an attempt to make a chymical experiment”, now sounds funny, coming as it did from the best and most respected chemist of his time.
He proscribed criticizing an opponent openly; the ones who had held the criticized doctrine will know that the new experiment refutes their view; there is no need to publicize this. Twice Boyle deviated from this early in his career, in the days in which both his new law and the new Royal Society were making a big splash and it was terribly important to see whether anything in these excitements would leave lasting impressions. Under such circumstances, and considering that Boyle was criticizing his own critics, there was no need to explain or justify his conduct. Yet he did. He said that he loathed launching an attack against his sometime friend, the Cambridge philosopher Henry More, but that he had forewarned More clearly beforehand that if More persisted in calling Descartes an atheist he would criticize him. This led More to complain (in a private letter) that Boyle had thus turned criticism from something required by the logic of the disagreement into a form of social sanction. Superficially, the situation was obvious: Boyle never openly criticized any Fellow of the Royal Society, but he criticized one member of a Catholic order and two philosophers who were among the most reputed non-Fellows of the Royal Society. When Boyle wrote against non-Fellow Thomas Hobbes, he stooped so low as to say that he could also criticize Hobbes’ political doctrines as pernicious but preferred not to do so. One can explain this in terms of loyalty, both to King and Country and to the Commonwealth of Learning (this title, incidentally, is the invention of Boyle). It is unpleasant all the same.
Boyle’s requirement to report experiments without naming the theories that they refute became standard under the tyrannical reign of his follower and admirer Sir Isaac Newton. Newton’s influence was overwhelming—partly because (for two centuries) almost no one doubted that his great success was rooted in his empiricist philosophy and partly because he himself was a dominant character, touchy and unable to face public criticism.
The more stringent the requirement for experimental information became, the lower the standard of novelty of that information became. Soon it came to pass that those who wanted to survey a field or to explain known phenomena had merely to report having repeated some old experiments—in variation; tradition then dropped that demand too. There was a great advantage to that requirement in a world without scientific laboratories, when universities were alien to experiment. Thus, at least three people who surveyed experimental fields learned to be top experts in them due to the requirement to report experiments. The first is Benjamin Franklin, who had neither training in nor connection with science but who somehow found scientific instruments in his hands and started repeating known experiments, as was the custom. The second is his friend Joseph Priestley, who started his scientific career as a teacher and an educator: he felt the need for something like a textbook of science and found none. The third and perhaps last in history was Michael Faraday, a chemist then in his early thirties, who was asked to survey the literature on a very new experimental subject, electromagnetism. (He wrote the survey for the journal of the Royal Institution, the institution that was his home and his workplace.) Beneficial as the requirement for experiments or at least their repetition was, it had its bad effects too. The Establishment disregarded speculative writings indiscriminately, including those of Ørsted prior to his discovery of electromagnetism. The earlier ones, of Boscovich, had to wait for the rise of field theory. Ørsted failed to secure a job in the University of Copenhagen because of his disrepute as a speculator; this led to a public scandal that ended with his receiving a stipend from the king of Denmark to build and maintain a laboratory. He later became an associate professor. Those were the days.
I do not know why I am telling you all these stories and in such a fashion. My aim is to make you lose the common disposition to take current arrangements for granted as God-given. My description is too brief to be accurate and too descriptive to bring a moral to you. To correct my story I should say that tradition allowed writing a paper not containing any experiments if it contained interesting mathematics. This raises the question of how new the mathematics had to be. Consider the story of John Herapath, an early nineteenth-century thinker who unfortunately was a Cartesian after Descartes went completely out of fashion and whose papers, even his mathematical ones, were repeatedly rejected. Although Maxwell recognized him as the father of modern statistical mechanics, historians of science scarcely mention him. He gains mention chiefly as the founder of The Railway Magazine, a most important landmark in the history of journalism. I saw his books in the British Museum library; their pages were still uncut; I nearly cut them (the British Museum library thoughtfully provided its readers with paper knives), but finally decided not to destroy such a historical document.
I do not consider the books of Herapath valuable, but that is not the cause of their neglect. His crime was that he took part in too many controversies. This was the general rule. Thus, even Edward Jenner, whose work on smallpox vaccination was obviously important, had some of his works rejected by the Royal Society as controversial. It is so embarrassing that almost all historians of medicine simply attribute to him the discovery of vaccination—he was inoculated as a child, by the way—and pay him homage to dispense with the need to examine his contribution and its controversial character.
We have advanced a long way since the days of Jenner, you will say. Did you know that young Helmholtz’s classical 1847 paper on the conservation of force was rejected, and that a little later Newlands’s table of chemical elements, his law of octaves, the predecessor of the Mendeleev table of elements, was also rejected as controversial? We have greatly progressed in the last century or so, you will say. Therefore, to refute you I would have to cite the case of Winifred Ashby, whose work belongs to the period immediately following World War I. She used the recently discovered characteristics of blood grouping to determine the life span of blood corpuscles and the effects of blood transfusion: she simply injected type O blood into patients with blood of other types and kept blood counts. She refuted both the accepted views on the effect of transfusion and the then accepted view of the average life span of blood corpuscles. The latter was of immense practical value: if blood cells lived only twenty days, it would be futile to institute blood banks, but not so if they live sixty to one hundred days. (They live one hundred and ten days and more.) She was under repeated attacks from the top brass of the medical profession, especially the top Harvard expert. They asked her to resign her job but she managed to stay. Her tracing method that I have just mentioned is known as the Ashby method, but many writers and historians of blood still ignore her, perhaps because she was controversial, perhaps because she was often in error (though she came nearer to the truth than her opponents), perhaps because she was an intruder into the medical profession—which to this day is still deemed unforgivable—and surely also because she was a woman.
I bet you know little or nothing of Winifred Ashby; you are not likely to be ready to check my story and you are not willing to be impressed with an isolated case that is not too new either. But if I were to cite more contemporary cases—and you can see I am quite a collector of grievances, perhaps because I have none of my own, perhaps because my view of Academe on the whole is so favorable I fear I may become too smug (even to my own taste)—then I shall have to cite a number of cases that are going on, and I shall contradict arguments that a lot of editors, departmental chairs and their likes offer generously. In brief, history will not do, as we must think of the present. Current standards, which are improvements on yesterday’s standards, have proved their mettle to a surprising extent. You may insist on them.
This discussion sounds reasonable, but only because it rests on a narrow base, thus assuming what sounds like its conclusion. There were other societies, and they were very different. Their intellectuals held other and very different standards, and the upholders of those standards viewed them as very satisfactory too. Incomparable as the technological achievements of our own civilization may be, those societies had satisfactorily functioning achievements too, and yet they did not last. Do you know how well the late Roman Empire functioned? For me, the symbol of its efficiency is the Roman method of erecting marble pillars, on the principle of the cheap cigar: dirt in the middle, surrounded by a layer or two of bricks, and plated with thin marble slabs, often ornamented. That was good enough. The contributions of Herapath too were good enough: they challenged Maxwell to develop his versions of statistical mechanics and of thermodynamics and more.
Sheer commonsense may suffice to require drastic reforms of certain traditions. The Swedish Academy has unwittingly caused enough damage to the intellectual community already. It may be a matter of sophisticated philosophical discussion to consider the Swedish Academy’s refraining from awarding Einstein the Nobel Prize for his theory of relativity; one may view the Academy’s remark, when it awarded him the much coveted prize, to the effect that the prize had no relation at all to his theory of relativity, as not so much intellectual cowardice as a legitimate and laudable expression of scientific caution. At times Bacon’s claim holds: when one gets excited over sweeping generalizations, one may have lost one’s head and thereby disqualified oneself as a detached scientific investigator. The demand for detachment is anyhow overrated. Can one use sheer commonsense, without delving into detailed, sophisticated philosophical deliberation on science and on its methods, to show that the attitude of the Swedish Academy to Einstein, just like the attitude of your professor in his classroom, is rooted in cowardice rather than in commendable scientific caution? I do think so.
Ah, you look most incredulous; and I do admire you for that. It is not possible, you say, to turn the tables on so many and so clever—much less with the aid of sheer commonsense. You are quite right to be incredulous. More often than not, people who come forth with claims such as mine—or with any intellectual claim, for that matter—prove disappointing, and often sooner rather than later. Yet sometimes even the most incredible claims have proved quite viable. Galileo proved by sheer commonsense that the most commonsensical thinker of all times, “the master of those who know”, Aristotle himself, deviated from commonsense in his theory of gravity. What is more commonsensical than to assume that ordinarily, freely falling bodies fall faster if they are heavier? Yet Galileo showed that this is inconsistent with commonsense. Take a ten-pound brick and a one-pound brick and tie them together; the slower will impede the faster and the faster will drag the slower, so that their combined speed will be some sort of average between their different independent speeds. Now glue them together and you have an eleven-pound body, which on the same assumption should fall faster than either brick alone. Who can believe that the difference between tying and gluing makes the two bodies fall at different speeds?
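(Galileo’s reductio can be put into toy numbers. The sketch below makes Aristotle’s assumption explicit: fall speed proportional to weight, in arbitrary units chosen purely for illustration, and exhibits the contradiction.)

```python
# Galileo's thought experiment, with Aristotle's assumption made explicit:
# suppose fall speed is proportional to weight, v = k * w (k arbitrary).
k = 1.0
v_light = k * 1.0    # one-pound brick
v_heavy = k * 10.0   # ten-pound brick

# Tied together: the slow brick retards the fast one, so the pair's speed
# lies strictly between the two independent speeds.
v_tied_low, v_tied_high = v_light, v_heavy

# Glued together: an eleven-pound body, hence (by the same assumption)
# faster than either brick alone.
v_glued = k * 11.0

# One and the same composite body receives two incompatible predictions:
assert v_glued > v_tied_high
print("Aristotle's assumption predicts both 'below 10k' and 'exactly 11k'")
```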
Ah, you will say, modern science begins with Galileo; no one among the people who run Academe today is an Aristotle. This is not to my taste: it is easy to say it long after the event. It took Galileo many years to give up his Aristotelianism and many more to discover the proof I have just mentioned. Moreover, the idea that heavier bodies fall faster does look convincing and its opposite does look puzzling. Envisage the following experiment of Boyle and ask yourself how commonsensical or intuitive it looks. He put a feather and a marble in a glass tube, emptied the tube of air, and turned it upside down; the feather and the marble fell practically together. This result follows from Galileo’s argument. Even Boyle could not explain this experimental result, or the validity of Galileo’s argument, well enough. Newton did it. He said that the heavier body is more strongly attracted by gravity, yet also more resistant to that attraction, than the lighter body is; so that the outcome, the acceleration of the fall of the heavier body, is the same as that of the lighter. This is barely an explanation: why does resistance to gravity exactly balance gravitational pull? To put it in jargon, why does the inertial mass (resistance to force) equal the gravitational mass (the force of gravity or weight)? The first answer to this question appeared centuries later, in Einstein’s theory of general relativity. We are unjust to Galileo when we say that he merely refuted a muddled thinker: he raised a puzzle solved centuries later.
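(Newton’s answer can be sketched numerically. Assuming the familiar inverse-square law with standard textbook values for the Earth, the mass of the falling body cancels whenever gravitational and inertial mass are equal, so feather and marble get the same acceleration. The function name and values below are illustrative, not from the text.)

```python
G = 6.674e-11       # gravitational constant, SI units
M = 5.972e24        # mass of the Earth, kg
r = 6.371e6         # mean radius of the Earth, m

def fall_acceleration(m_grav, m_inert):
    """Acceleration of a falling body: the pull G*M*m_grav/r^2
    divided by the inertial mass m_inert that resists it."""
    force = G * M * m_grav / r**2
    return force / m_inert

# With gravitational mass equal to inertial mass, the mass cancels:
a_feather = fall_acceleration(0.001, 0.001)
a_marble = fall_acceleration(1.0, 1.0)
print(round(a_feather, 2), round(a_marble, 2))  # both about 9.82 m/s^2
```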
The reason Newton rejected the wave theory of light is that waves diffract—go around obstacles, as sound waves do when they enter the door of a room and fill it—whereas light always travels in straight lines. That light-waves diffract is obvious nowadays: hold a thread near your eye and you will not see it, since light-waves circumvent it; indeed, this is how a veil functions, looking transparent to the one who wears it but opaque to the one who observes it. Try to direct a beam of light into a black box through a hole and you will see a light diffraction-pattern. Moreover, if light is not wave-like, then it cannot disperse: look at dust particles dancing in a beam of sunlight and see them shine no matter from what angle you look at them. Yet Newton held the view that dust reflects sunlight rather than disperses it, so that it should not shine equally in all directions. It is thus easy to show that common experience contradicts Newton’s optics—yet this has become obvious only through the research done after him, partly in response to his challenge. It was not half as obvious centuries ago. Commonsense progresses too. Remember: his magnificent Opticks of 1704 was for more than a century the leading text in experimental physics.
Or, do you think the mercantilist economists were foolish when they advocated government intervention in the economy in the form of tariffs and their likes? U.S. Senate minority leader Everett Dirksen advocated the same view in a similar fashion well after World War II, while expressing allegiance to the free market theory; yet Adam Smith argued by sheer commonsense that tariffs impede the working of the free market and thus impoverish the nation. If you are not an economist, try to show by commonsense that Smith was mistaken too. Do not most people think that if a country has more gold it is richer? Hume showed by sheer commonsense that if everyone woke up tomorrow with twice as much gold (or government-printed money, for that matter), nothing would change except prices, which would double. (We ignore goldsmiths here, of course; their part in the economy is negligible.) Hume’s argument is strikingly commonsense; it is merely the application of the law of supply and demand. So is its criticism, which took a very long time to appear in the economic literature. If you are familiar with the literature even superficially, you may still come up with it—easily. If not, you may still be able to, but only if you are brilliant.
Or take Mendelian genetics. It appeared in the nineteenth century and replaced the ancient theory that Plato had endorsed and that horse-breeders still employ, namely that the characteristics of offspring are mixtures of their parents’ characteristics. So how come two dark-eyed parents may have, though seldom, a child with light-colored eyes? Prejudiced they all were, say most historians of science. Mendel’s own ideas escaped notice until Thomas Hunt Morgan revived them. How incredible! Mendel’s own ideas were refuted by the existence of recessive characteristics detrimental to their bearers (like hemophilia); Morgan explained their persistence by modifying Mendelism—the so-called mutation theory. Even then, Mendelism is scarcely refined enough to apply successfully to the breeding of racehorses or of cows with a high yield of milk. But once we look at the world in a Mendelian fashion, Mendelism becomes obvious and its difficulties obscure, and all too easily we allow ourselves to call prejudiced the best biologists who lived before Mendel and who agreed with Plato!
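(The Mendelian resolution of the dark-eyed-parents puzzle can be shown in a few lines. The sketch treats eye color as a single-gene trait with dark dominant over light, a textbook simplification, since real eye-color inheritance is more complicated: two dark-eyed carrier parents produce a light-eyed child one time in four.)

```python
from itertools import product

# Simplified Mendelian cross: 'D' (dark) dominant, 'd' (light) recessive.
# Each parent is a dark-eyed Dd carrier; a child gets one allele from each.
parent1, parent2 = "Dd", "Dd"
offspring = [a + b for a, b in product(parent1, parent2)]  # DD, Dd, dD, dd

# Only the double-recessive combination shows light eyes.
light_eyed = [child for child in offspring if child == "dd"]
print(len(light_eyed), "of", len(offspring))  # 1 of 4
```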
Moreover, when an argument wins, many of the obstacles in its way may disappear, and then one is even more justly, yet still in error, surprised that the idea was not always obvious. When you take a fresh case, you can see this clearly. When bodies recede from each other, the light they receive from each other appears redder than it is; this, the red shift, understood as a Doppler effect, enables calculation of the speed with which heavenly bodies recede from us. The application of this technique to distant stars previously identified as radio-stars or quasars made them look almost miraculous; the result was so quaint, it hit the popular press; its discoverer, Maarten Schmidt, had a write-up in an American national weekly. One would not believe this caused such excitement unless one knew why the scientific world took it for granted that such enormous red shifts were impossible (I cannot go into details here); knowing these ideas, one could well appreciate why for once practically everybody took it for granted that it was worthless to explain the strange light as normal light with enormous red shifts. Once the explanation was worked out, and once innumerable details were easily seen (after much hard labor had been invested) to follow from it, it became harder and harder to dismiss it as crazy, even though it poses serious problems and difficulties to be handled later on as best we know how.
The invention of a wild idea is incredibly hard: we are so blinkered; it is surprising we can think at all. It is even much harder to carry a wild idea through: to amass all the parts of our background knowledge that conflict with it and offer a fresh look at them, or to press hard against one small part of our background knowledge until it gives way, when we may hope to see the rest tumble down effortlessly. When a great thinker has done this Herculean job, we can come and see in retrospect how easily commonsensical it looks.
This point I have already made in the first section of this part. Usually, facts do not suggest explanations. When they do, we may just as well call the process inductive as by any other name. The suggestion is confusing, though, as the process takes place within an intellectual framework, and so it is progress within a narrowly blinkered field of vision; inductivism forbids this. It is when a fact defies any such explanation that the task of explaining it becomes Herculean. We then view with admiration those who have performed it—by starting new frameworks. Except that novelty comes in degrees: all innovations are revolutionary, but total revolutions, like the one that Descartes attempted, are impossible.
My own Hercules, I keep telling you, is the philosopher Karl Popper, whose ideas are so wild that philosophers still refuse to take his statements as literal expressions of his views; they suggest that he talks in paradoxes in order to impress, to sound more original than he is. Popper’s idea is that the routine in science is the presentation of bold conjectures to explain given phenomena and the attempt at empirical refutation of these conjectures. This does not account for all the facts in the history of science; all too often scientists find experience impressive in the authoritarian way. Einstein’s theory of gravity, his general relativity, was not half as much appreciated before as after it was allegedly found to be correct; his special relativity was allegedly proven absolutely correct to everybody’s satisfaction, but before that happened the Swedish Academy took pains to stress its suspension of judgment concerning its value, while acknowledging the value of his theory of the photoelectric effect by granting him a Nobel Prize on the ground that its enormous importance had been proven experimentally. There is, I say, much more to Schlick’s criticism of Popper than meets the eye: he alluded to the contrast between Popper’s view of science as criticism and the view received in the Vienna of his youth as a matter of course, that science is success. Suppose someone who paints an obviously abstract painting says, I am painting a portrait. This is barely imaginable. Yet it happened from the very inception of abstract painting. Popper said of researchers something wilder: they say they try to prove when they try to disprove! Artists who paint abstract works and call them portraits are aware of the oddity of their act. The Swedish Academy is not.
To wind up my reply to your first objection, which I think is a good objection: utterly unobjectionable ideas are scarcely new. (This is not to deny them importance: when we need them, we praise them as eminently commonsense.) An objectionable idea, if dismissed on the strength of existing objections to it without further ado, will not make headway. Therefore, for a new idea to be capable of occasionally overcoming old ideas, we must examine afresh the objections to it and reassess them; if they withstand further examination, the new idea has to go. Data that corroborate Newton refute Einstein; to consider this a final verdict is to endorse stagnation; to allow for progress we ignore all older corroborations and devise a crucial observation between the two; if Newton wins, we allow Einstein to try again, and if Einstein wins, the case is closed (unless someone reopens it, of course). It seems that there is here a bias in favor of the new idea; there is not: most new ideas are stillborn (Hume). But if we apply statistics and say, since ninety-nine percent of newborns are stillborn let us neglect all newborns on our hands, then one hundred percent of newborns will be dead and progress will be utterly blocked. The bias, then, is in consideration of the fact that all progress is delicate and easy to kill. This is why it took so long for progress to be established, and this is why, when it is encouraged, it may grow fast. This is problematic: progress dies when not expected, and it dies when expected with assuredness. (This is why Popper argued against inductive logic all his life: philosophers who advocate it in support of progress achieve the opposite!)
This is but another way of saying that those who cling to old ideas reject their new replacements; they are no fools, yet no great lights either: they have good reasons for clinging, and they reasonably hold their reasons for it, yet the new ideas challenge the old ideas, and so it becomes less and less reasonable to hold the old ideas without re-examining and re-evaluating them in the light of the new ones. Pay attention: the new challenges the old.
We tell our young how terrible it was of our ancestors to oppose novelties like those of Schubert, Semmelweis, or Cezanne. They forget that unlike Schubert and unlike Cezanne, Semmelweis declared no less than that his colleagues the obstetricians were the carriers of childbirth fever—quite a fatal disease—who transferred and thus spread the infection. They forget that these physicians were the best and cleverest in the profession; they forget that those who blame all their colleagues are almost invariably hopeless cranks. We have to admit that this is true and yet insist that they were rogues who put their own egos ahead of the interests of their patients. For, all that they had to do to render their intolerable conduct towards him reasonable was to try his proposal: to wash their hands with soap as they moved from the mortuary to the maternity ward, and to check whether this reduced the death-rate in the maternity ward or not. They did not care: the ward in question was not for ordinary Viennese women but for ones too poor to pay their hospital bills.
It is amazing how much the ego of a leader may serve as an obstacle to progress. Physicist Freeman Dyson reports on his war experience, on his failure to reduce the ill effects of fire in the cockpits of warplanes:
All our advice to the commander in chief [went] through the chief of our section, who was a career civil servant. His guiding principle was to tell the commander in chief things that the commander in chief liked to hear… To push the idea of ripping out gun turrets, against the official mythology of the gallant gunner defending his crew mates …was not the kind of suggestion the commander in chief liked to hear.
We all want to absorb the latest progress in the arts and in the sciences as soon as it appears. We are not able to do so, but we may try. We may try also to avoid messing things up by over-eagerness: we may pretend to appreciate every new canvas of pop art; we may listen to every cacophonic concert; we may listen to every crank (as Paul Feyerabend has declared we should). This will only muddle us. No, to absorb new ideas is to change one’s field of vision under their impact: there is a ceiling to the rate of growth of the arts and of the sciences.
I see that I exasperate you. Let me lend you my pen. Dear author, you say, I think honestly that you are crazy. No insult meant, of course, nothing personal; but you really do go much too far. You are writing a section on conservative cowardice and what I find you talking about is progressive cowardice. I thought you were going to tell me about resistance to novelties, which is something rather understandable and not entirely unknown (to say the least), and about the fears leading to resistance to novelties; instead, you talk of the fear of resisting novelties, of the resultant cluttering of the mind with all sorts of novelties, good or bad, and the muddle ensuing. Do not interrupt me. You are going to tell me that no insult is taken, and that you are doing it for the sake of contrast. I have never met you, but having read you thus far, I suppose I know you well enough to expect you to take kindly to this outburst of mine and condescendingly start a new digression on contrasts. So just shut up for one minute more—yes, Mr. Author, no insult meant, only I wish to show you how crazy and impossible you really are. Who do you think you are, anyway? You are telling some of your own colleagues they are too slow; you are telling others they are too fast; you are comparing your allegedly humble self with one Semmelweis who also ordered his colleagues about, only, it has turned out, he was right. Very clever of you. That is too much, quite regardless of my regrettable ignorance of the rights and wrongs of the case of Semmelweis, although his explanation of childbirth-fever epidemics in hospital wards was essentially correct, as is well known. One moment more, Mr. Author; after all, I cannot possibly be as verbose as you are. Do you remember that you have promised to stick to common sense? What common sense is it to say that some thinkers are slow to catch up because they fear jumping on the wrong bandwagons, others too fast because they fear missing them?
Is it not more reasonable and commonsensical to assume simply that fast thinking is fortunate?
Now, Mr. Reader, I hope you permit me to parry: after all, it is my book, and if you do not like it, do write one of your own. Do! It will make you feel good! Now, I am grateful for your outburst, because I started feeling myself that I was becoming condescending, and there is nothing better than a brisk exercise to stress the equality of us all. I draw encouragement from the posthumous success of the great and brave Ignaz Semmelweis; his battle was hard to win, both because he affronted doctors and because it is hard to rid hospitals of staphylococci and streptococci even in these days of high sanitation. All I hope for is that some people will be half as interested in the possibility that my proposals are beneficial as others were regarding those of Semmelweis, even though the outcome of my proposals will not be nearly as stupendous as his, of course. I do not risk my life by my proposal as he did. Under such conditions, I am willing to put up a fight.
Senseless cowardice and its ill effects are the topics of my present section; never mind how many people exactly suffer from them; you may immunize yourself against them, and I hope I can help you in this task. Cowardice, senseless or sensible, may be conservative and it may be progressive. Professors of medicine transmit to their students the most up-to-date material knowing that such material may become outdated by the time these students reach maturity; this is why journals for specialist practitioners exist. These rest on the assumption that doctors differ from medical students only in their familiarity with the terminology. The fear of remaining behind thus fills the professors’ schedules with detailed results, leaving no room for the intellectual education whose absence condemns doctors to lagging behind or remaining apprentices forever. Lecturers in medicine know that they can teach a bit less up-to-date material so as to be able to teach a bit more theory and research technique. As a result, doctors would be able to read about later developments not in instruction sheets but in progress reports. Few try this out, since most of the literature (on research technique—mostly philosophical) is regrettably poor.
It is indeed here that I think I can help you, with the use of the pure milk of commonsense (following Popper, of course): nothing is easier for a Western student to understand than that given theories come to solve problems but are found insufficiently effective. This finding is criticism. It can be logical, empirical, or any other. Granting criticism center stage may hurt some egos—ignore these—and it renders learning much easier than stuffing students’ minds with significant but unexplained data. Students who absorb the system in the critical mood can start reading progress reports and depend much less on instruction sheets. I hope you will become a scholar of this ilk. If you find my willingness to help you in this respect condescending, for which I am truly sorry, just ignore me. Even so, you need not blow your top—I am not forcing you to read all my diatribes. Of course, you do read them; otherwise, you could not possibly get angry with me. Tell me truthfully, do you get angry with every stupid, verbose, or eccentric author? Is anger in intellectual debates not a maladjustment to modes of argument?
It is not that my knowledge and dexterity are perfect, let me admit; it is that I had some training in both playing the game and training others to play it, and I hope that I can be of service to you in this very respect even though we have never had an occasion to meet and are not likely to. Nevertheless, you have little reason to listen to me in the first place, especially since I am prone to offering repeatedly some remarks that will sound to you somewhat outlandish. Here, you see, I, too, need some social approval to gain access to you! Lacking the means for approaching you, what I count on instead is your desperation or your being bewildered and in painful need of some help—as I have indicated in the preface to this work. This, I think, explains your anger; you wish to listen to me and profitably so—otherwise you would have stopped reading these pages long ago—but you feel that I make outrageous demands; you fear that I will lead you astray; you fear that your loyalty and dedication are under fire. You are right—at least on the last point. On one point you need not have any fear, however: your reaction is normal: there is nothing unusual in it. I make allowance for it. Do not worry about this matter: you have already too much to worry about.
Academic fear is nothing new—I have heard colleagues all over the world allude to it as a phenomenon that is as inevitable as the sense of insecurity generally is. People who hold prominent positions that shield them from vulnerability to attacks from less prominent colleagues may still feel haunted by fears of such attacks—out of a baseless yet strong sense of insecurity. The literature is full of stories of people in high military, administrative, or business positions being devoured by insecurity and taking it out most unjustly on younger and weaker colleagues. When fear of that kind manifests itself this way in Academe, it may (but need not) have one specific manifestation that is of some interest for our present discussion of your future. For, one of the manifestations of such insecurity is the hatred that academics regularly show toward critics. It is indeed the fear of criticism that I discuss. Ignore it! Colleagues who take it for granted that some established people may feel threatened by younger colleagues who mildly criticize them are right in considering such a reaction-pattern unjust but rooted in a sense of insecurity that can hardly be eradicated; they are mistaken, however, in considering all criticism threatening. What they usually view as the mark of unreasonable conduct is that the established person fears mild criticisms—criticisms offered by friends, or by not so powerful critics, such as younger colleagues. Most academics I know concur in viewing criticism as damaging; fighting the unreasonableness of this view may bring much improvement. For, once we agree that criticism is damaging and merely condemn the boss who insecurely fears mild criticism from young upstarts, that boss will speak of far-reaching consequences and demand loyalty to the institution in question, to the university, to the profession, to the uniform.
Fear of criticism is not necessarily the outcome of a specific traumatic experience, during childhood or later; it usually is but a part of a social and educational pattern. It is the Chinese-mother-in-law syndrome again: in traditional Chinese society, a woman suffers all possible indignities from her mother-in-law, only to take revenge later in her life on her daughter-in-law. Of course, if the sole cause of the persistence of this inhumane practice were psychological, i.e., the desire for revenge, as nineteenth-century anthropologists explained the phenomenon, then some kind-hearted mothers-in-law could have stopped the practice; even some other, somewhat accidental factors could have done so. The currently accepted explanation of the phenomenon and of its persistence is in terms of loyalty: the mother-in-law has to defend the clan’s interest against the whims of the new intruder; it is her task to be harsh to her. If the two women become friends, as is often the case, they wisely conceal the fact.
When a child, an adolescent, even a young adult, offers some clever criticism of someone senior, such conduct often elicits signs of delight from the surrounding company, the target of the criticism included. Of course, the criticism is answerable, even easily, but was it not brilliant, at least charming? Criticism may be unexpectedly brilliant, yet still not threatening to the criticized. Under such conditions, the criticized will make a song and dance about the achievement of the critic. Even that is not assured. Morris Raphael Cohen, one of the best philosophers of the early twentieth century, had a brilliant student, Sidney Hook, later a well-known philosopher too. In his autobiography, Hook narrates that once his admired teacher gave him a book to review. The review he wrote was most laudatory, but with one minor criticism. He was apprehensive, he narrates, and so he consulted his teacher’s colleague and good friend. The colleague gave the needed reassurance and the review went to the printer. It infuriated Cohen. In the uneasy confrontation that followed, Hook referred to that good friend. He said, even so-and-so has approved of my review. I always knew he was a traitor, was the swift reply. I wish I could dismiss this anecdote as untypical.
The academic profession is in many respects much preferable to other professions in the community at large; it is less inhumane, less illiberal, less bigoted, less ignorant. It is just now administratively in poor shape, as I have tried to indicate before; but this is a transitory defect: until recently it could manage itself surprisingly well without much administrative training or concern, and so it was caught momentarily unaware in the dazzling expansion of its institutions imposed on it by the community at large.
Consider again the officer and gentleman who would have challenged to a duel any critic who came his way. Did he think he was the peak of perfection? No; even an officer and a gentleman is seldom that stupid. It is stupid enough to pretend perfection, because such appearances fool no one and because the cost of keeping up appearances is very high: the damage incurred is all too real. The answer to this is, usually, noblesse oblige: the gentleman is loyal and dedicated to his uniform, class, mistress. Move from the officer and gentleman to your physician. Ask yourself, does your physician admit criticism? Does your physician admit to you not having all the best available answers to your questions? The claim that physicians play God is still not easy to deny; they do not fool themselves; it is their schools of medicine that train them to play God. Oh, there is an excuse for that: physicians are under the constant threat of a suit for malpractice. To repeat: every criticism is answerable, but the answers are often too poor to take seriously. At times they are taken dead seriously anyhow.
Envisage an officer and gentleman planning moves in headquarters on the eve of a crucial battle; imagine, further, that one of his tentatively planned moves is rather silly; imagine, furthermore, that one young and inexperienced but rather bright member of his staff spontaneously puts him on his guard. Most insulting. Under any normal circumstances, he would not easily forgive the intrusion: he solemnly addresses the young offender and says: if, God willing, we both survive tomorrow’s battle, I shall have to ask you to choose your seconds, young man! Not under the circumstances under consideration. Under these circumstances, our officer and gentleman is much too concerned with the truth of the matter to behave like a pompous ass. Granted, he is hardly moved by the love of truth for its own sake; even concern for the lives of his men need not be a strong point with him; his desire to win the morrow’s battle may be rooted in no more than a human desire to win a medal, even in stupid motives, such as what he calls his honor. At that moment his desire makes him pay attention to the truth, and this suffices for him to be reasonable despite the traditional demand to act foolishly: he knows that the criticism helps him win the battle and he is grateful (Bernard Shaw, Too True To Be Good, 1932). This, you may remember, is Kepler’s point: the desire to find the truth already fosters intellectual courage. The loyalty that prevents it, then, is intellectual cowardice. The desirability of criticism (and hence the divided loyalty) is dimly recognized in many popular attitudes which, at the same time, illustrate the folly of the general hostility to criticism. Folk-sayings (Proverbs 27:10), folk-tales, and higher forms of literature (King Lear) illustrate the obvious: it is sometimes the duty of an honest friend to offer severe criticism; one should take it seriously.
Other common sayings recommend the suppression of criticism of a friend in the name of loyalty, as well as the suppression of criticism of young artists and scientists in the name of encouragement and good will.
What makes a person a coward may be an extremely low ability to control or overcome fear, an extremely high capacity to experience a strong sense of fear, a sense of insecurity, or sensing fear with no special justification for it, as when an adult fears the dark. The fear of the dark is here incidental to the fear itself. This is the discovery of Freud: he discovered Angst: fear. Because it is widely assumed that fears must have objects, psychologists call fear without an object by its German synonym: angst. Because those who suffer from angst hold that fear must have an object, they project it (this is another technical term of Freud’s) onto almost any object around. Hence, says Freud, it is useless to convince those who suffer angst that what they fear is harmless, since on admitting this they will not be rid of their angst but merely project it onto some other object. If we are to believe Kafka (Amerika, 1927) or Truman Capote (Breakfast at Tiffany’s, 1958), even without knowledge of Freud some people know that they have fears with no specific object or with some mysterious object (still more in tune with Freud, who held that every angst is the residue of the same traumatic but reasonable fear) and are thus free of the need to project it. Is angst the same as the sense of insecurity, that is to say, the ability to be frightened? Do all who fear the dark simply project angst? I think, as usual, Freud saw something very interesting and failed to notice that the phenomenon is not half as universal as he supposed.
I hope so. For, I am discussing here certain shadows that most academics fear unreasonably, in the hope that this might be useful to those in whom this fear is inculcated. I suppose that the fear develops not through traumatic experiences but by a simple mechanism that I wish to expose, in the hope of bringing it soon into full disuse. I also hope that my discussion will explain why ideological academic bores fear intellectual excitement. After all, my chief purpose is to argue that intellectual success is the ability to excite audiences intellectually, rather than to be right or scientific or politically correct.
Here we differentiate the institutional from the psychological. Fear of criticism is fear of rejection; it is unreasonable, but often it is a Freudian projection. As it is psychological, it need not be clever and it need not be commonsense. A major instrument of its maintenance is the alleged standard cure for it: the effort to replace insecurity with self-assuredness. All one needs for self-assuredness is a poor imagination: it is all too easy to imagine all sorts of obstacles to any plan anyone has. Yet the idea that fear and doubt stultify is so common that even the assertion of the Bible to the contrary, Proverbs 28:14, does not help. The King James translation of that text rightly says, “Happy is the man that feareth always.” The standard translations of the text, however, are inaccurate: “Blessed is the one who fears the Lord always”, or, “Blessed are those who fear to do wrong”. We should always fear for what we value, of course, especially the well-being of our nearest and dearest. The important thing is to prevent the fear from stultifying. Otherwise, we will end up in total paralysis. Just look at Academe.
Having no wish to speak against either cowardice or humility, my aim is to immunize you against them: they comprise sheer ballast, and you don’t need no ballast.
The best way to make you permit the Establishment to dump ballast on you is the story of its success and of your dependence on it. Now, I do not know its members and I cannot comment on the marvelous things that they do in the cause of the advancement of learning. Much less can I comment on their claim that unless you are a humble coward you stand in their way. Still, some information is available about their predecessors, about the scientific tradition since the Copernican or the scientific revolution (the middle of the sixteenth century or the seventeenth). Their predecessors always defended scientific unanimity, and thereby they always impeded the growth of science. It is rebels like you—this is my fervent hope—that might, just might, save the day. Were members of the Establishment today better than their predecessors, they would be encouraging you. So there.
Medieval scholarship centered on reconciliation. The paradigm was the great Al-Farabi (died 950). His influential book was Harmonization of Plato and Aristotle. In this vein, in the thirteenth century, St. Thomas Aquinas harmonized Archimedes with Aristotle. Copernicus altered this. In the preface to his great book he said, since the ancients disagreed, they are no authorities. What he and his disciples found in the stars was the autonomy of the individual.
Francesco Buonamici was a professor in Pisa of the next generation. Astronomically he was no Copernican: he was a firm advocate of Aristotle’s views. Methodologically he was one: he expressed disagreements openly, even with St. Thomas. His historically most important idea was his disagreement with St. Thomas about Archimedes and Aristotle: he said they were in disagreement. Galileo expressed gratitude to him for this very point. It made Galileo a Copernican. Galileo considered it sufficiently important to write to Kepler about it as a new argument in favor of Copernican astronomy.
Galileo did not dare publish his new arguments in support of the Copernican hypothesis: although he was not too brave, one cannot call him a coward, since the opponent he feared was sufficiently vicious to burn his peer Giordano Bruno at the stake. Moreover, when Galileo had enough evidence and did advocate the Copernican hypothesis, his evidence was of no avail and he had to gather courage to face the music. This he did. He had the best trump cards, but the society to which he belonged and with which he identified (he was an earnest and sincere Roman Catholic) did not have institutions to safeguard fair play. The Church had to humble him because he won so stupendously that it looked as if he had mocked the Pope. The Pope had no choice: he had to insist that Galileo suffer public humiliation. This was tragic, since the two were friends; tradition regrettably ignores this and instead presents the Galileo affair as an expression of an inevitable conflict between science and faith, declaring him a martyr of science. He was not; Bruno was.
In modern scientific society, the fairytale goes, the play is always fair, provided you wait patiently for your turn, and—like magic—your turn always comes at the right time. You cannot lose, only win. Hence, if you lose you must have played foul. It is very similar to having had malaria in the British armed forces stationed in the tropics: until late, an enlisted man’s malaria was sufficient evidence that he had failed to take his quinine ration; the court martial found him guilty on the strength of that evidence. When he protested that he had taken his quinine faithfully, his commanders dismissed his protestations as contrary to scientific evidence. The medical profession, not the military, bowed humbly to the facts, admitted error, and conceded that quinine is not a fully satisfactory preventive of malaria.
Thus, all you need is to wait for sufficient evidence. That this holds in a modern scientific society is hardly credible. Its defenders explain cogently: it may take decades before you get your deserts; you may be dead by then, and this is regrettable, but. And so, you may be right but wait; you receive the best training and it is good enough, and if you work hard, you will succeed. I may be dead by then; you may be dead by then; but progress consists of innumerable minute contributions—including mine, and hopefully yours—and posterity will know and appreciate!
You can read this in many a preface. Consider authors who wait with good ideas for three decades or so until they have enough evidence in their favor. You have to admire the strength of character of such authors and realize that instead of rushing into print in order to make names for themselves and gain a promotion and establish priority and all that jazz, they were humbly working by the book—piously following the rules of scientific method, of induction. Accounts of minute and dreary facts, the results of simple, insignificant experiments and observations, uncluttered with their reporters’ personal opinions, are admired for self-discipline and humility—even by such imaginative people as the great poet Johann Wolfgang von Goethe. I do not know if it was his tremendous urge to be a scientist that made him follow the myth that many researchers spouted but hardly any followed; in any case, his book on colors is embarrassing.
Hans Christian Ørsted, I have told you, published his speculations on electromagnetism and even got into some small trouble. All this changed overnight when he made his discovery in 1820: he sprang into the greatest fame. A decade later, in 1831, at the Brighton meeting of the newly founded British Association for the Advancement of Science, he was present as a guest of honor. Sir John Herschel, the famous astronomer and author of the Preliminary Discourse to the Study of Natural Philosophy, 1830, delivered a eulogy: look at Ørsted, he said; he made the great discovery because he did not jump to conclusions but humbly and patiently sought the facts rather than speculate. Now in my mind’s eye I see Ørsted blushing and wishing to bury his head in the ground in shame for the great astronomer who was demeaning himself.
Herschel did not have to know the facts. He had a clear idea as to how the facts must have happened. He had learned it from Sir Francis Bacon. (Let them read Sir Francis Bacon, he advised young students; they will find him tough meat, but he will sharpen their teeth!) So let us take one further step back and look at the ideas of Sir Francis Bacon; I should have humbly started right at the beginning—with the Greeks and Romans if not with the Babylonians and the Egyptians, but.
Francis Bacon envisaged the new learning and dreamt of industrial revolutions and of technocracies, of secular universities and of meritocracies; unlike his contemporary Shakespeare, he had one vision in his life, which hit him in his adolescence when he was preparing a piece for a party at his London law school, a vision that nagged him and grew on him all his life till it reached an unbearable magnitude. Bacon was an eighteenth-century idol: as Paul Hazard says (European Thought in the Eighteenth Century, 1946), public opinion then considered him the only thinker comparable to Newton; he was “reason incarnate”. In the early nineteenth century, he became a possible target for criticism, and soon it became a favorite nineteenth-century sport to kick him. Since the title of my next section but one is “a favorite sport: kicking an opponent while he is lying down”, I shall postpone this point. In the nineteenth century, Bacon had two important defenders: Robert Leslie Ellis and his collaborator James Spedding. Ellis was a Cambridge don, an editor of a mathematical journal and a pioneer in the study of bees. Since he was, in addition, a student of medieval and Renaissance literature of enormous erudition, as well as a Greek and Latin scholar, he decided in his spare time to study the reason for Bacon’s fame.
The longer Ellis studied Bacon’s works, the more profoundly they puzzled him: what made Bacon so very famous? First, Ellis discovered the great extent of Bacon’s plagiarisms: almost all of Bacon’s scientific writings he dismissed as plagiarized. Bacon plagiarized some botanical details from Pliny, not knowing that these may be peculiar to Italy and disagree with the more northern climate of his own country. Bacon’s scholasticism is very much contrary to what Ellis expected to find. He found embarrassing Bacon’s pedantic style and his bombastic bother with classifications and distinctions that cut no ice. Ellis accepted as a matter of course Bacon’s doctrine of prejudice—his theory that speculations and hypotheses make the people who hold them blind to contrary facts—but he intensely disliked Bacon’s preoccupation with the classification of prejudices into four or five categories. This, too, is unimportant, though the style of Bacon’s Essays has contributed a lot—for better and for worse—to the development of English prose: his Essays appeared in innumerable editions and were compulsory reading in many classrooms and even in some colleges. This is partly because Bacon was the first English essayist and, with Montaigne, one of the first European essayists. His essays are even crisp and modern, and they represent, together with his The New Atlantis and a poem or two, the best of Bacon’s style. He himself wrote to a friend that posterity would remember him for his essays. Justus von Liebig, one of his chief and most powerful nineteenth-century debunkers, concurs with this judgment. Yet Liebig was mistaken. Even Bacon’s fame as an essayist is largely derivative of his fame as a methodologist, as “reason incarnate”.
Ellis’s greatest puzzle—it is serious indeed—concerns methodology proper. It is a strange fact, and Ellis was the first to notice it: Bacon says practically nothing about the proper method of science, to wit, the inductive method. In his The New Atlantis, the president of Solomon’s House is at the point of telling Bacon what that method is, when the book ends abruptly. Obviously, adds Ellis rather bitterly, because Bacon could not say much on that. Similarly for Bacon’s unfinished Valerius Terminus. The case of his monumental Novum Organum is even more bewildering. Book I of that work is propaganda and preparation. In Book II Bacon pulls up his sleeves and starts working in earnest, but instead of giving us the general idea he elaborates an example. When, in that example, he finally comes to the point of developing a theory out of the accumulated facts (facts such as that moonlight has a cooling effect, medieval superstitions among them), he chickens out and makes a hypothesis instead—thereby, notes Ellis, betraying one of his own central and most forcefully stressed rules, namely that one should never speculate or hypothesize, lest one become prejudiced and blinded to the truth by one’s own ideas.
Ellis took for granted that scientific method is the method of generalizing from the facts—such as Kepler’s generalization from observed positions of Mars to all positions of Mars, namely to the orbit of Mars, and from the orbit of Mars to the orbits of all planets (namely, that they are all elliptic)—and he knew enough of Bacon’s work to be unable to ignore the fact that Bacon had condemned this very method offhand—as childish and dangerous (since it does not lead to certitude).
Bacon’s seemingly pedantic and meticulous scholarship particularly appalled Ellis when he contrasted it with Bacon’s casualness and lack of all self-criticism. Ellis was at a loss to resolve these inconsistencies. What troubled him most was his failure to make sense of Bacon’s view on the effort that the method of induction requires: he could not decide which claim represents Bacon’s system more truthfully—that the inductive method is extremely easy or that it is extremely difficult.
Ellis left this point unresolved, as he often had to, in view of the frequency with which Bacon exhibited so thoroughly uncritical an attitude. Possibly Bacon had borrowed from Petrus Ramus even his initial claim that, as Aristotle is unreliable, a new methodology is necessary. This was very much in the spirit of the age. As Harry Wolfson later illustrated, it was a favorite sport, shared even by Kepler, to assert views that Aristotle had criticized, on no better ground than that he had criticized them. Characteristically, Bacon seldom refers to Petrus Ramus, and then always critically and incomprehensibly.
A special insight into Bacon’s casualness comes from the fact that he published one book—The Advancement of Learning—twice, in English and then in Latin. Ellis compared the two texts carefully. In the English text, for example, Bacon argues for his doctrine that discoveries are accidental rather than the confirmations of clever hypotheses. You would expect Prometheus to have struck two pieces of flint by accident, rather than by design: the relative abundance of flint in Greece explains why the Greek Prometheus struck flints, whereas the West Indian Prometheus rubbed sticks together. In the Latin edition the West Indian Prometheus is silently dropped—evidently because he can hardly be expected to have rubbed two sticks so specifically and for so long by accident rather than by design (that is, according to a clever hypothesis). Moreover, we know that the Greek Prometheus too rubbed sticks rather than struck flint: throughout Antiquity, Greek sacred fires were lit the hard way—by rubbing sticks, not by striking flint.
Ellis tried hard to say a good word for Bacon. He was attracted to Bacon by forces he could not describe. He said he could not explain to his satisfaction why Bacon had claimed utter originality so persistently, but he held to the last that there was some kernel of truth in this claim. All this is not too significant; no matter how common or uncommon the claim to originality was in the Renaissance or how insistently Bacon did or did not make it, it was his hold on Ellis that made the latter look for an original contribution in the former’s works. He found it in Bacon’s myth of the birth of Cupid, the description of the intuitionist theory of the birth of an idea that compares enlightenment with the break of dawn (the birth of Cupid) into the darkest part of the night. In 1933, C. W. Lemmi showed in his Classical Deities in Bacon that Bacon had lifted that chapter from Natale Comes, an obscure alchemist-Kabbalist.
Bacon violated the canons of Baconian methodology. He was a visionary of science who saw in vision the greatest danger to science! I am most fit for science, said Bacon, because I am of a patient disposition and a nimble mind; and I am particularly good at humility. Were he not busy with affairs of state, he said, he could discover all of Nature’s secrets in a few years, perhaps with the aid of a few research assistants, since the labors of research require so much patient collection of endless minute observations of plain facts.
James Spedding was the collaborator of Ellis and the editor of his posthumous work. (Ellis died at the age of 42; on his deathbed he granted Spedding a free hand.) He considered Bacon’s writings serious and sincere. So were all of Bacon’s predecessors and contemporaries, who all shared the belief in the philosophers’ stone. Were they all charlatans?
If medieval and Renaissance hermetic thinkers were not charlatans, how did they fool themselves? If they had the secret of redemption, why did they fail to use it? What did they tell themselves? Once you have the answer to this question, I think you will be surprised to see how medieval we all still are. The self-deceptions of the obsessive gambler (Pique Dame), the money-grabber (You Cannot Take It with You), the power-thirsty (All the King’s Men), the academic-titles collector (Wild Strawberries) are all akin, and they have all been exposed systematically and repeatedly with no progress. The model that fits them all is that of the frustrated prospector: if I could only achieve a little, it would offer me fame and fortune and more chances and thus real happiness. The failure of the frustrated prospector imposes a choice between two hypotheses about what the achievement of success requires: one, that what is necessary is still more of the same; the other, that it is something different. Invariably, the prospector prefers the former. They knew that the chances of success were very small, but they went for the jackpot. The obsessive gambler Fyodor Dostoevsky made it a philosophy of life: he said that those who aim at the possible achieve it and more, but they scarcely count; the insatiable reach for the stars from the very start (The Idiot, 1869). Thus, they know that their greed is but a step in a never-ending chain. Dostoevsky forgot altogether the ambitious adventurer who tries the impossible and gets it or perishes or both; with all his psychological insight he could never envisage a Martin Eden or a Great Gatsby, not even a Captain Ahab—not to mention the real-life Spinoza. The reason is that he viewed all ambition as ambivalence: no ambitious person could stand the sight of his ambition fulfilled, Dostoevsky (and Freud) insisted, in disregard of a few familiar facts.
Ambivalence is neurotic, Freud observed: it rests on defense mechanisms, his daughter added. Freud postulated one cardinal, if not one and only, defense mechanism: it hurts to think about your neurosis, so you drown your doubts in work, in trying ever so hard to achieve this or that unattainable object of your ambivalence—and so the circle closes. Anna Freud postulated more defense mechanisms, and indeed invented the very term. Psychologists still overlook the purely intellectual defense mechanism that culture imposes; Popper called it reinforced dogmatism. Dogmatism may reinforce itself by showing that the very doubt concerning it proves it right. For example, doubts about religion are the works of the devil: they thus call for repentance. This example suffices in principle, but it is rather crude. Better examples are the Marxist explanation of disagreement with Marxism as due to class prejudice and Freud’s explanation of resistance to his views as due to repression. Here is one more.
The mystic redeemer who possesses the formula for the salvation of the world has to qualify by personal purification. Ritual baths and an impeccable way of life comprise only the first step in that direction. Concentration on the aim in hand is equally important: one has to feel personally all the agony of the surrounding world to stay fast on the sacred task. But then, seeing the greatness of the task brings about a different risk: the risk of falling victim to pride, to hubris. Thus, the nearer at hand success is, the more remote it also becomes. The idea that pride is the greatest obstacle Bacon found in the work of Natale Comes and copied when writing his book on ancient wisdom. For Bacon, speculations were the signs of pride, of lust for power—for intellectual authority: it was the original sin and the cause of the Fall, the Fall that was the forgetting of ancient knowledge. So says the final passage of Bacon’s magnum opus, his celebrated, epoch-making Novum Organum (1620). The publication of a false theory proves that its author had not humbly collected sufficiently many facts of nature. The moral of the story is that all you need in order to avoid error is patience and humility and hard work; whenever troubled, drown your trouble in the laboratory; you will then have your trump card.
The mythological nature of Baconian methodology need not be disconcerting. It is very much in the spirit of the Renaissance, when they lit candles in front of statues of Socrates. The ideal of the Renaissance was the same as the ideal of the medieval mystic—the return to the Golden Age. Renaissance thinkers learned to do something about it, ever since Brunelleschi succeeded in using an ancient method to complete the erection of the dome of the cathedral of Santa Maria del Fiore in Florence (1436). Renaissance architects admired and tried to apply the Pythagorean doctrine of proportions (Wittkower). Renaissance painters admired and tried to apply the Vitruvian Man and the Kabbalistic doctrine of man the microcosm (Sir Kenneth Clark). Copernicus viewed the sun as the seat of the divinity and Kepler viewed it as God the Father (E. A. Burtt, The Metaphysical Foundations of Modern Physical Science: A Historical and Critical Essay, 1924). Number mysticism and word roulettes led to Leibniz’s theory of the universal language, which so influenced Frege and Russell, the fathers of modern logic, no less.
The proper attitude is, indeed, to be sufficiently concerned with one’s work to make quite marginal the point of personal profit—through making a name for oneself or otherwise. Now this is not necessarily humility: one can easily ignore one’s own personal interest, especially if the job is pleasant and society affluent; or, one may have to employ a proper appraisal of one’s abilities rather than somewhat piously and not so candidly declare oneself feeble. By the accepted doctrines of humility, Einstein could not possibly be humble except through much self-deception about the value of his contributions to science. As it happens, he was very candid yet also very humble. The humility becoming to a human as a human is sufficient, he explained. Those who preach an extra dose of humility, Christian or Baconian, thereby exhibit a lack of appreciation of human fallibility. Thereby they show a tinge of hubris. The truth in Bacon’s doctrine of humility, then, is such that it would have surprised him. Yet it was not hubris: it was his expectation from science. When Newton compared himself to a child picking up a pebble on the seashore, he expressed a view of science very different from the one he projected in his Principia. When Einstein said the best scientific theory is but a way station, it was a remark better integrated into his worldview. Nevertheless, some of Newton’s awe-struck attitude to science is very appealing and may have been lost in Einstein’s saner attitude.
Bernard Shaw said the tragedy of hypochondriacs is that they are invariably right, since no perfect health is possible; the hypochondriac confuses health with perfect health. Thus, all you have to do to become a neurotic is to apply to yourself some unreasonably high standard: in the case of the hypochondriac, the standard concerns health; in Bacon’s case, it was open-mindedness or non-commitment; in Newton’s case, it was the immense, incredible success of his theory and its being the very last word. Quite generally, radicalism promises the sky on one small condition that turns out to be not too small, impossible even.
Now, suppose you do conform to any unreasonably high standard. The result is that you are an unusual person—a genius of sorts. Hence, the kernel of truth in the thesis of Schopenhauer concerning the resemblance between the insane and the ingenious. It sounds exciting, but unfolded it turns out to be trite.
Faraday complied with Bacon’s standards in an exemplary fashion. From 1831 to 1839, he published a series of papers in the perfect inductive style, hinting at his ideas by reporting experiments that illustrated them and piling up evidence refuting the views of his opponents without using overtly argumentative or critical locutions. In 1839, he published a sketch of a small idea of his—concerning electric currents as cases of collapse of electrostatic fields. He was totally misunderstood. He had a nervous breakdown; he had severe attacks of amnesia and withdrawal symptoms, and they worsened in time. In 1841 he recovered and published two more papers in a very austere Baconian style—one of them on electric fields. Peers ignored them. He then came forth with some of his most daring speculative papers, declaring matter to be not billiard-ball atoms, as was widely accepted at the time, but mere aspects or characteristics of electromagnetic fields of force (quasi-singular points in fields). His ideas still found no interest. His amnesia worsened. In the fifties, he wrote an enthusiastic letter to Tyndall, saying two young authors had referred to his ideas and this had revived his memory and rekindled his desire to publish. Alas, it was too late. To the end of his career, his experiments won increasing praise and his ideas still met with no comment. He saw his career as nearly finished and worked on his collected papers. An anonymous review in the Philosophical Magazine was rather favorable—I suppose its author was David Brewster, whose life of Newton was the first anti-Baconian text of any weight. Faraday’s anonymous reviewer not only praised his experimental work but also pointed out that his speculations were a real help to him. Experts agreed then that perhaps a Faraday may employ wild speculations, but. My views, Faraday insisted, are at the very least better than the alternative.
If I were a member of the majority school, he added, at least I would have replaced the billiard-ball atom by a solar-system atom. Nobody listened. His solar-system atom was lost. (Bohr reinvented it independently, half a century later, with powerful details that made peers gasp.) Perhaps I suffer from hubris, Faraday mused; in his pain, he once wrote in his diary IT IS ALL BUT A DREAM, and then again insisted he must be right. I have discussed elsewhere in this volume the difference between a normal, healthy need for recognition and a regressive, neurotic one. Faraday knew nothing of all this. Was he a speculator who had lost his judgment in amour-propre, or was he so vastly superior to his contemporaries even though he knew no mathematics? He tried to go back to work. He published one further paper. Another paper a brilliant young mathematical physicist rejected as too speculative. He returned to more experimental work, trying to discover the effect that Pieter Zeeman discovered independently almost half a century later, and trying to discover the asymmetry in diode tubes that his followers discovered a generation or two later. His memory was failing, and finally his perseverance failed. He died senile in 1867, at the age of seventy-five. His biographers describe him as a Cinderella; in my book on him (Faraday as a Natural Philosopher, 1971), I present him as an ugly duckling who could not take so much of the contrast between the two views of his character—that he was a crank or that he was a genius. As I have told you, his friend and sole pupil John Tyndall saw his speculations as cranky, but he insisted that the cranky streak was forgivable. In the eighteen-seventies, Hermann Helmholtz concurred; in the eighteen-eighties, he played a different tune: by then the duckling had become a swan.
One of Faraday’s followers in the late nineteenth century was George Francis Fitzgerald, of the Lorentz-Fitzgerald contraction that adumbrated special relativity. In one book review in the Philosophical Magazine, Fitzgerald condemns some German physics book as cluttered with facts, and he blames this on the German national character as inductivist. Hegel and Marx had previously derided the English national character this way. This kind of nationalism is crazier and worse than inductivism.
I should not conceal from you that critics have railed at my book on Faraday on three grounds: it is inaccurate; it distorts facts in order to present Faraday as a precursor of my heroes Einstein, Schrödinger and Popper; and it says nothing new. I also received laudatory comments on it, from admired teachers (the philosopher Karl Popper and the mathematician Max Schiffer) and from peers (mostly in personal communications). Still, I did envy authors who managed to receive lavish reviews. Their status as established and popular saves them the time and effort that lesser mortals must invest in order to find publishers and to have their books reviewed—except that their investment in becoming established is bigger.
The common narrative of the history of science as calm and free of upheavals is sham. Let me admit, however, that humility is the natural place for sham: it is hard for serious intellectuals to play down their achievements. An American one-liner sums this up nicely: when it comes to humility, we are tops. The great Maimonides said one can never be sufficiently humble. Now, as his Codex is a great achievement, how could he be humble? This question must have troubled him, as he offered an answer to it—in a private letter. He said he had written it for himself, since he had a bad memory. Not true. Hence, even the great and sincere thinker that he was spoke of humility inaptly. Not all great thinkers were humble. Einstein was. Facing the same question as Maimonides did, he said the humility that becomes humans as such should suffice. How nice!
You already know the contents of the present section; so it is but an exercise in pyrotechnics. Take any conceivable form of academic cowardice; find any old reason from the stock of vulgar prejudice—yes, it will turn out almost invariably to be Baconian or at least quasi-Baconian, you do not have to worry about that—to show that the display of cowardice is nothing but a case of academic responsibility; and then look for a cogent historical case. By now I trust you well enough to be able to perform this exercise—perhaps you will need to consult the literature, but I think you have the general drift. To make it easy on you, let me outline some examples and leave the rest to you.
To start with our last point, notice how irresponsible it is to publish an unproven speculation or a half-baked idea, not to say information not thoroughly checked. To continue with our example, take nationalism and show how very unscientific and unfortunate it is, at least in certain manifestations—the harmful ones, of course. You will find that certain great scientists were nationalists—even Max Planck—and that famous physicists like the Nobel laureates Johannes Stark and Philipp Lenard were authentic, card-carrying members of the Nazi party who denounced the theory of relativity as Jewish, and that Heisenberg worked for the Nazi regime in research aimed at arming the Nazi forces with nuclear weapons. Do not let this discourage you: it proves no more than that even a scientist may lapse into unscientific irresponsibility. If you need examples to show how unscientific this irresponsibility is, consult Sir Gavin de Beer’s The Sciences Were Never at War of 1960: do not let your inability to read much of such stuff discourage you; you need not study the book systematically, only skim through and pick a morsel here and there. Do not feel too guilty about it: they all do it, and the nobility of their cause (which is the greater glory of the Commonwealth of Learning) absolves them from the sense of guilt.
If you think all this too cheap, take different cases. Speculations about homeopathy, for example—the idea that a drug that causes healthy people to simulate symptoms of a given illness is the proper cure for that illness. This idea led to lots of useless and even harmful medication, research, and their paraphernalia. It will take a long time to get rid of it.
You may think it safe to endorse criticism of premature ideas rather than discuss these ideas. Not so: this may be dangerous too. True, it is the publication of ideas too early that leads to unhealthy criticism, but such criticism is also condemnable. Thus, Thomas Young’s wave theory of light was essentially on the right track, of course, but he published it too soon, his own fans admitted, and thus allowed a powerful opposition to build up; when he corrected his error (light waves are transversal, not longitudinal), opposition did not die out instantaneously—he should have waited a little longer and done his homework a little better before his first publication.
The previous paragraph is so nasty; perhaps I should omit it even though it is obviously a mere echo of the worst in the traditional inductivist arsenal.
As to false factual information: erroneous statistics concerning the relatively low incidence of cancer of the uterus among Jewish women first led to all sorts of food fads allegedly emulating and improving upon Jewish food taboos, and then to the defense of Jewish sex taboos—a heap of still extant pseudo-science that rests on the lapse of a cancer researcher long ago.
The worst is the pseudo-scientific evidence for the racial inferiority of some peoples, evidence that rests on obviously foul statistics and is therefore known by the technical name of a common statistical curve—the bell curve. The evidence serves those who wish to reduce public investment in the educationally discriminated against, by the claim that spending money on them is waste. Karl Popper has suggested that one should not argue against this silly evidence but use it in the opposite direction: the less gifted, who have to work hard to achieve what the more gifted acquire with ease, deserve more help, and so the argument should serve to raise the investment in the education of the educationally discriminated against. Were this line of argument generally endorsed, those who prove the educational inferiority of the educationally discriminated against would thereby empirically prove the opposite of what they intend.
Here are a few taboos still widespread in the Commonwealth of Learning. Do not publish controversial material! Do not embark on large-scale projects! Do not try to dazzle your students; just provide them with the facts, techniques, and established theories that later in life they will learn to appreciate! Do not spend too much time guiding research students—doctoral or post-doctoral—as you will thereby lead them to prejudice! Just see to it that they work hard! If they work on your project, do not elaborate either, but issue brief and operational instructions! If these will not do, leave them to their own devices but keep yourself informed of their activities!
I have little to say about the previous paragraph by way of detailed examples of academic careers that grow in parasitic fashion. Examples might be piquant, but not sufficiently important: if you reread the last paragraph carefully, you will see why rumor abounds about plagiarism from underlings, as rumor about empire-building does, and why it is so hard to pin down such rumors and find out how much of their content is true. You will soon learn to ignore these rumors, be they true or false. Only examinations of very exceptional cases can possibly be profitable, and only after much time has lapsed. Leave it to the historians of science.
It is too early to sum up any historical experience on the matter. The idea of securing priority originated with Bacon and was instituted by Boyle. Hence, talk about priorities prior to that period is silly, just as is talk about plagiarism, whether by Bacon or by Descartes. Due to the ideas of Bacon and Boyle about amateur science, scientific collaboration developed only in the twentieth century. Some contact was always inevitable, and already Hooke had priority quarrels with both Newton and Oldenburg. This, and Newton’s touchiness, entrenched the tradition of the loner. The famous mathematician Jacques Hadamard said he regularly dropped a research project once he heard that someone else was working on it. The first case of something like a serious collaboration in scientific research was between Thomas Young and Augustin-Jean Fresnel early in the nineteenth century, and they had a tiff concerning their relative shares in the revival of the wave theory of light. The first research worker who had an apprentice proper was Davy. His apprentice Faraday had long ceased to be an apprentice—he was in his thirties—when he liquefied chlorine. Davy was jealous and insisted he had helped Faraday in the design. To get out of that indelicacy, Faraday searched for an earlier discovery of the same fact—and found something like it.
Attitudes towards the administration of the university do not easily lend themselves to Baconian interpretation, but it is easy to defend any of them in the name of responsibility to one’s community, just as it is easy to do the opposite. Indeed, it is very easy to advocate both submission to the administration and opposition to it, both participation in administrative work and abstention from it. This is a reasonable measure of the lack of seriousness of current discussions of these matters. This holds for other cases too. In the name of responsibility you may both refrain from controversy and viciously attack as phonies some of your more irritating opponents; you may devote more time or less time to teaching. In one college, I heard many professors say that universities are primarily research institutes when teaching was on the agenda, and that teaching is a supreme responsibility when research was on the agenda. This section bores me too. So let us move on.
No, no. Not to the next section but to an entirely fresh digression, concerning responsible and irresponsible modes of arguing about responsibility. I said it is easy to defend opposite attitudes towards the university administration in the name of responsibility. You can also defend the publication of half-baked ideas in the name of responsibility: you do not want to conceal anything from anyone, especially from those who may benefit from your speculations. Do not be alarmed. Sir John Herschel’s already mentioned Preliminary Discourse to the Study of Natural Philosophy (1830) shows how one can do this with ease. The book is chiefly a fanatic attack on speculations. Its tenor is the idea that failure is the result of speculations, which are thus irresponsible. For example, the poor adventurer who tried to build a submarine boat and went down never to be heard from again. (Do you remember the ill-fated US submarine Thresher? Do you know that the first systematic glider pilot, Otto Lilienthal, died in a crash-landing?) Herschel had other examples of laudable success and of damnable failure; what have these to do with hypotheses, however? Throughout the book he opposes them, except for one place where he hotly defends them—against whom, I wonder—showing how important some hypotheses were; it is irresponsible to condemn every hypothesis, he concluded. Goethe had said, and Herschel quotes him (without reference, though), that those who confuse hypotheses with science mistake the pile for the building. It is thus irresponsible to suppress all hypotheses and it is irresponsible to confuse hypotheses with proven theory. (Followers of Newtonian optics did that, and as Herschel was the leading advocate of the new wave theory of light, he accused them of it as a way to exonerate Newton.) This comes in a book that repeatedly condemns all hypotheses.
I have digressed on my digression. So let me start afresh. It is not very responsible to argue from responsibility for some action when it is equally easy to argue from responsibility to the contrary. Can one argue on responsibility responsibly? Does it not follow that we should argue from responsibility for an action only if it is impossible to argue from responsibility to the contrary? If so, further, does this not imply that the contrary action is irresponsible? Is this not, hence, to say that when arguing from responsibility our argument should be clinching? If you have caught me here in each non sequitur, then your immunity to inductivism, to any technique of too-high standards, is high. Congratulations!
Let me recap, in reverse order. If it turns out that an action is irresponsible, this is no proof that the contrary action is responsible: a very different action may be called for. For instance, concerning a new idea, the contrast between publishing and not publishing it is this: the best policy is to write up your ideas and show them to increasing numbers of friends and colleagues, take account of their criticisms, and then publish to elicit the critical notice of wider circles. Similarly, concerning cooperation or non-cooperation with the administration: it is best to cooperate with them on your own turf, on your own terms, and by advising them on minor academic policies.
Next, the contrary action may be neither responsible nor irresponsible but either a matter of taste or a matter of expediency—e.g., publication for the sake of promotion. A friend of mine who as a rule does not publish is a good university teacher. He knows he is unoriginal. They had to promote him to the level of associate professor—he did not aspire to a promotion, but by regulations they had to promote him or fire him—so they forced him to publish two or three papers; and this he did, very unpretentiously, jotting marginal comments on marginal comments on a rather marginal item that personally interested him to some degree.
Another friend of mine was on the point of being forced out of Academe for want of publications. Friends helped him turn some notes of his into something publishable, and the problem of his promotion was solved by minimal cooperative work.
An action contrary to an irresponsible one may be defended as responsible, but not obviously so. Traditionally, moralists found greed selfish and irresponsible. Bernard Mandeville and David Hume and Adam Smith defended it with quite ingenious arguments, and they had sufficient following to make a difference. Smith viewed his opponents’ ideas as prejudices, but that was his (understandable) Baconian bias, a bias that must strike a modern reader as incredible. When he sent Hume’s autobiography to his publisher, he added to it a short obituary. In it he said, of Hume’s philosophy I shall say nothing, since those who endorse it admire it, and those who do not, despise it. Most philosophers today admire Hume’s philosophy without endorsing it. Yet to blame Smith for having been an orthodox Baconian is a bit too rash, just as to tolerate a similar bent today is a bit too indulgent.
Back to the question of the responsibility of any defense of an action as responsible. The responsible defender has weighed the existing contrary arguments and found them wanting. Upon hearing a new argument, the defender shows interest, out of responsibility if not out of intellectual delight, and then shows readiness to reassess previous attitudes and actions. Even if the reassessment comes to the same conclusion, however, it is a new assessment that needs presentation as new.
Here, again, we fall back on accepted standards. Civilized law considers responsibility essential, condemning its opposite: negligence. It distinguishes between reasonable error that is permissible and error due to negligence that is culpable. If the error in question is common, even if experts know better, negligence is not the default assumption; whereas, if public opinion rejects the error as a superstition, then even sincere belief in it does not exempt its holder from responsibility. This holds for almost all civil matters. When you try to apply it to scholarly or academic affairs you reach Popper’s philosophy—but Bacon’s philosophy is an obstacle against such a move. Fortunately, no death directly results from this stumbling block—only endless agony and sometimes even slow death. Life in Academe is nonetheless much happier than elsewhere—chiefly because Academe rewards its members both in cold cash and in refinements of all sorts: it is the only institution that allows eccentricities as a default option, and so it does not punish excellence (except at the insistence of the excellent).
Just a minute. I said that not only for the insane but also for the ingenious, accepted standards will not always suffice. Indeed, I do not think Semmelweis, for example, could have benefited from my present volume to avoid the agonies and pains and frictions of his researches (which cost him his life). He even changed the standards of medical responsibility. Well, I never said my book is a panacea. There are enough agonies in the world, and we need not fear that we shall eradicate them all.
Let me conclude with mention of a literature that criticizes Academe and that I assiduously ignore here. Its major thrust is that the academic system no longer fulfills its social task, and it does not hesitate to cast this criticism in unpopular socialist terms. Its criticism is excessively severe and it makes no proposals. By contrast, my view of Academe is favorable; I offer some simple proposals, such as replacing on-campus lecture-courses with some up-to-date recorded lecture-courses that students may download; and my concern is primarily your individual troubles. The ills of society at large come next, as they take time to fix and you should not suffer needlessly waiting for this to happen.
I do not want to write this section either: just contemplating it makes me uneasy. When you see life in the raw, you notice that heartlessness is regrettably too common. Even though sadism is not common and malice is rare, their effects spread rapidly. When you lead a sheltered life in the ivory tower among refined and cultured people who are surprisingly well off by any reasonable standard, you shudder to see so much of the same. I admit: it is heartbreaking.
I once tried to solicit the good will of a friend of mine to help another. They were both painters, A and B. A was both popular and a leading academic, successful and rising; B was jobless and a total failure. A did not want to listen to me. He had considered B good enough as a commercial artist, he told me, and accordingly he had tried to help him get a job. B turned out to be too proud, too purist, or too ambitious to become a commercial artist—even temporarily; he preferred to starve. I said to A, who was a friendly person, that his lack of liberalism or lack of familiarity with the seriousness of the situation was surprising, since B was on the brink of committing suicide. Let him, said A somewhat rigidly, signifying that the conversation was over. And B did—shortly after.
B’s suicide shocked me. This shows that I had not taken my own prognosis as seriously as I would have, had I been cleverer and better informed; so, obviously, A too had not taken the possibility too seriously. Yet the fact remains: we both neglected to help a friend when he was in need—of which need we both knew, though not fully. Somehow, however, I consider A’s neglect significantly different from mine. Truth to tell, nobody can assess the nature of A’s success, and as rumor has it he is himself not so very different from many a commercial artist; quite possibly the coolness with which he had dismissed B’s case betrays a certain degree of awareness that B, in his silly pride and puritanism and ambition, had shown a strength of character that A would have gladly bought for some high price were it on sale.
I told you I hate to write this section. I am so very angry at B for having put an end to his life over such silly things, and I am similarly exasperated at those colleagues of mine who care so much about their work that they cannot enjoy it, and at those other colleagues of mine who cannot be bothered with it at all. I suppose I am crazy. You, at least, are not very surprised.
Some academics need to be heartless—from fear of excess self-revelation. Emotional Problems of the Student (1959) is a series of essays on psychiatric experiences at Harvard University, compiled by G. B. Blaine and C. C. McArthur. I find it so revealing in that respect that I can hardly do it justice here; if you do not believe me just get a copy of the book—it is available also in paperback and in digitized versions—and check it for yourself. The only human part in it is the introductory contribution by the then leading American psychiatrist Erik Erikson. It tells of William James’s severe depression in his early adult life, of its philosophical aspect, and of James’s recovery through serious philosophical investigations (chiefly his overthrow of determinism to his own satisfaction). In itself, Erikson’s essay may be unhelpful, though its general tenor is nice. Alas, his view does not leave its stamp on any of the subsequent essays. These are largely technical; I was particularly impressed with the story of how the authorities managed to keep an eye on a student with aggressive tendencies and lay hands on him just at the point when he was about to harm someone. Another essay, allegedly on psychiatric problems of medical students, explains the difficulty of rejecting candidates with obsessive tendencies. The technicality and impersonality of the discussions are remarkable. Only at one point is the book’s tone different. The essay on the role of professors narrates (pages 20-21) that a student obsessively afraid of failure was much relieved by his professor’s confession that he had seen failure too—in his student days. The funny thing is that the author, Willard Dalrymple, takes it for granted that the student could not imagine that his professor had ever failed.
He does not even consider the hypothesis that possibly it was not the content of the professor’s confession but the fact that he was friendly enough to confess anything whatsoever, his becoming a little human for a brief moment, that constituted an encouragement to the terrified, depressed, bewildered student. The writer may be right, of course, but one may wonder why professors are so heartless that he finds exemplary so minor an event as some personal contact between professor and student. The indication here is staggering, and one must assume that Harvard professors are not merely indifferent to students but positively fear any personal contact with them. Their reticence is thus more revealing than hours of confessions.
Students are supposed to be adult and independent when they come to college, and so professors consider themselves quite justified in feeling no educational responsibility toward them. If this were all there was to it, there would still be some occasional and casual and spontaneous and unintended human contact here or there, and this way or that, between professor and student, who meet, after all, quite regularly, and who are meant to meet in office-hours and in departmental parties and in official advising sessions and on occasional encounters at street-corners or bus-stops or parking lots. Not necessarily educational contact, perhaps, but some other contact; yet since students do need education, especially in these days of the enormous swelling of Academe, any personal contact can be educational. The swelling offers professors even more justification: fellows who do not feel at home here are better helped by facilitating their leaving college at once rather than by letting them suffer a semester more—kick them out as soon as it is reasonable! We have too many undergrads!
The same goes for graduate students. Nowadays higher-degree-granting institutions offer master’s degrees solely as a compensation for rejecting students from doctoral programs: the M.A., more so the M.Sc., has become the poor citizen’s Ph.D.—the poor in spirit, that is: non-academic high-school teacher, industrial consultant, education ministry employee. What makes a student fail to get into the Ph.D. program we do not know—most surprisingly, the case is almost entirely unstudied. I cannot generalize my personal experience, since the sample is much too biased and much too small to be likely to be representative. I have met a few desperate or almost desperate cases of graduate students who needed help badly, and some of whom I have helped, if you allow me a little boast. Others did not help them, partly, at least, from being too anxious to get rid of them; partly, at least, because it would be too painful and too self-exposing to help them, even to show concern for them. I am speaking of bewildered rebels, you must know by now. They may represent a small minority, but their potential quality seems to me to be more valuable than plain statistics can indicate.
We should move, then, from graduate students to teaching fellows and young instructors. The Atlantic publishes repeatedly (last time in October 2012) complaints that all too often students (and their parents as well as their neighbors, be they semi-academic or quasi-academic or pseudo-academic) are attracted to an institution in which a great light resides, only to be taught by minor figures like teaching assistants. The complaint is general. A patient may similarly pay an arm and a leg to be under the scalpel of a leading surgeon, only to find an assistant performing the operation instead. But let us stay with schooling. The complaint is silly, and not merely because a young student today seldom has any idea of what to expect from college—from its great lights or from its small lights. The complaint is silly: if it is hard to get seats for a performance of a pop singer, why should it be easier to get seats for a lecture of a pop professor? An artist, at least, is usually a good performer who can conjure an air of close intimacy with almost every member of an audience of thousands. A professor, even the greatest expert of the time, is not likely to be a good lecturer, much less to be good at creating a semblance of intimacy. A professor can be of use to only a handful of close research students. Public relations offices and recruitment offices will not say so in public lest the profession’s mystique suffer. An M.I.T. president once said that the currently practiced arrangement of relegating most teaching to budding academics is good both for the student and for the teaching assistant, as it is quite a challenge; it may result, that president added, in some wonderful experiences. Admittedly, anything can happen.
Still, what that president said is right. It may be an exciting challenge to be thrown into the water, or to the lions. The questions are different, nonetheless: how much of the arrangement ends up satisfactorily, and what happens to the rest? Are the people involved ready for the challenge? How much does the experience of the less successful help improve the system? Nothing. Not a murmur. If the challenge leads to a failure, then the obvious scapegoat is the very same teaching-assistant, and the remedy the system offers for its own defects is, as usual, mercy killing. Admittedly, very good teaching-assistants who happen to be also good scholars and positioned in very fortunate departments may be lucky and find jobs there or elsewhere. This is not to say that ambitious teachers with a flair for education have better chances to become tenured academics. Their teaching success will seldom be noted, and never appreciated in any detail. Their successes will seldom be available for copying, chiefly from indifference and ignorance. Failures, however, may lead to severe penalties even though successes hardly ever count for much. Contrary to the said M.I.T. president, leading universities care little for the quality of their teaching. If they care at all, they wish their advanced courses to be up-to-date. This shows how much they trust their students: well-trained students can update their training, whereas all the up-to-date training, we hope, soon becomes outdated.
We now come to colleagues proper. (Is it not dreadful to be so systematic?) Their quality as teachers concerns no one; their standing in the department, too, depends on their ability to proceed with no scandal. Their scholarly work, however, is of some interest for this section too. The first popular and pernicious rule here is: specialize, and declare total ignorance of the work of your colleagues, whether in your own department or elsewhere, except for the very few who share your specialism with precision. The way to do this is to brand everyone as a specialist in some subject within the department’s authority, or better, a sub-sub-subject. When you start making headway, you will be branded a specialist too, I predict. Anyway, the specialists in your field are all worthy, hardworking, and serious; and they are all friends of yours (you send them your reprints). All, that is, except one: the scapegoat. Colleagues have already made sure the chosen fall guy cannot kick back. Attack the one person everyone attacks. So do not be clever and choose a new, dead scapegoat: there is often more life in a carcass than in a moving target, since some carcasses are loaded enough to feed hosts of parasites. One person will turn up whose lifetime task is to defend the honor of the dead colleague you are attacking, and then you have hysterical, fanatic foes avidly jumping at you. (Watch it: they need a scapegoat too, and you may be it. Avoid this predicament!)
The standard recommendation is to specialize in order to avoid attacks. It is hard to apply this to research branches that attract general interest such as education or psychology. There are better branches. I remember once a philosopher newly appointed dean of arts who had to deliver an inaugural lecture on education. He spoke of education as reflected in the Kalevala, the Finnish national saga. Smart. If you too are an incurable coward, then he is your model.
Suppose you are a budding economist. If you try to attack Léon Walras as an a priorist, you will attract fire; if you attack Ludwig von Mises as an a priorist instead, you will meet with approval: everybody will know that your heart is in the right place. Why and what for? Simple: if you merely advocate more empirical research in economics, you may be preaching to the converted; but if you attack von Mises you thereby prove that there are still some heathens about: everybody does this. Admittedly, Walras was a kind of a priorist too (Preface to his Elements of Pure Economics, 1874). Too many people still admire him, and rightly so; but somehow von Mises failed to gain the same wide recognition for his significant work, partly at least from the extremism of his view of economics as a branch of politics. So let us have a go at him! Or, if your field is not the methodology of economics but economic theory, you can attack Karl Marx—western economists consider him as dead as a dodo, and his outspoken defenders, followers of the respected Piero Sraffa, cannot do you much harm. Or attack the a priorism of Descartes if you are a methodologist or a historian of science—this practice has gone on successfully for two centuries. Even the famous physicist and historian of physics E. T. Whittaker, who published a book on Descartes’ problem and proved himself somewhat biased in favor of Descartes, even he attacks him rather violently—out of the sheer force of inertia, perhaps. You can ridicule Bacon, especially if you are a Baconian yourself—this is a two-century-old practice, but still fresh. It is safe.
I have told you that my friend who would not help the other except toward getting a job as a commercial artist was himself somewhat of a commercial artist in disguise. I also told you I did not want to write this section. I had stubbornly decided to write a plan of a book first and follow it up without change. I had intended to sum up here the points of all previous sections and show that cowardice leads to the various techniques outlined above, culminating in hitting a poor opponent too weak to respond in kind. It was rather silly of me to lead myself through such a maze—you, I hope, at least received some entertainment—only to discuss the obvious, namely, that the coward prefers a weak opponent, and that heartlessness is often a mark of inner conflict (Dostoevsky) and of cowardice (Mark Twain). This is general knowledge.
Never mind the rest of the world. You and I know what has misled me: I tried to discuss the ideology of the academic coward. But why bother about an ideology that cowards themselves do not take seriously? True, if you happen to be beating a Jew, it may warm your heart to support your conduct by the knowledge that that fellow has killed the Son of God. As you beat a Gypsy too, you do not really need that support; so telling you that Christ was crucified millennia ago is useless. Hence, I was in error in pursuing cowardly ideology: it does not signify.
I should not have spoken of the coward at all, but of the possibility that the same ideology that once fired courage in simple people’s breasts may later make brave people act cowardly. It is better to discuss courage and cowardice. Schrödinger won praise for courage when he resigned his job at the celebrated Kaiser Wilhelm Institut in protest against the dismissal of Einstein from that institution. That reaction deeply puzzled him. His action always looked to him a natural, normal response; all his life he wondered why the other members of the same institution did not act the same way. The one who most obviously should have done so was the boss of that institution, who, after all, should have been the one to decide upon hiring and firing. The boss, Max Planck, was no coward and no fool; Einstein was his friend and testified to his rationality, even concerning his political view, which was German nationalism. Yet Planck did act like a coward, and it was his faith in the German nation and its historical destiny that was his pitfall. It made him see in the Nazis nothing more than a monstrous exaggeration that had to be a mere passing phase, and he was willing to make small compromises in the meantime. His error cost him more than Einstein’s friendship—the Nazis killed his son. This, however, is hardly instructive: with such people as Einstein or Planck around, the boundaries between private and public are less clear than with people of lesser susceptibilities like you and me.
It is the same, I fear, with the cowardice of some academics in their chosen intellectual work. They may be cowards first, and find an ideology to suit their temper second. They may be in the ivory tower by sheer accident—what with the enormous swelling of Academe. Or they may have escaped into the ivory tower from fear of the challenges of life in the raw. They may find in Academe an ideal life devoted to insignificant work that interests no one, with rewards as if the work were significant, and with no one to challenge them or their output or their positions in this world. No one, that is, except the few pests whom we should swiftly and ruthlessly destroy in order to make Utopian perfection materialize. This, said Popper (The Open Society and Its Enemies, 1945), is how such people use dreams of heaven on earth as reasons for making hell on earth. All the same, Academe has to allow for these people and for their ideology. Regrettably, this academic freedom has its casualties: that ideology is traditional and it turns potential braves into actual cowards.
For, truth to tell, I have forgotten in my haste to tell you what is objectionable about having a scapegoat. Most scapegoats, you may notice, are dead; others may find any publicity flattering; and others do not mind one way or another about such things as public opinion. So what is the fuss I am making, if justice does not prevail anyway, and if all I am discussing is a handful of scapegoats, some of whom do not even mind being scapegoats?
I should have explained this earlier. First, my major concern is to prevent you from serving in this function. (Some of my students have, and I could do nothing to help them, I am ashamed to admit.) Anyhow, we may notice the role of the official scapegoat: the Establishment shows prospective dissidents the official scapegoats to dissuade them. If this fails, they identify all dissidents with the official scapegoat (Orwell). Academe has them too, although there their role is marginal: in physics it is the determinist rear-guard (this, incidentally, is no small honor, as these are Einstein, Planck, and Schrödinger); in biology, it is Lamarck or Lysenko. You may wonder how much truth there is in Butler’s critique of Darwin, and your colleagues may wonder whether you are not a heretic. If your field is different, you can test your sense of orientation by looking up the question, who is the scapegoat of your current academic environment.
Let me say what all this is to you. In brief, the Establishment may declare any new material you produce (prior to thirty years of hard work) original, which is unproblematic, and it may declare it essentially the same as the scapegoat’s junk, which you should try to avoid. This may discourage you and render you a coward. It is my task (self-appointed, I admit, but do not complain—you can always dismiss me, you remember) to see to it that you are not so easy to discourage. But, you may say, you are not producing yet; so what is the hurry?
How right you are! You have to produce something; fairly soon if not immediately. Even if you are still a student you are better off with some measure of independent judgment concerning your next step—which means at the very least that you might as well start producing some independent and operative judgment. I have high regard for you, but allow me to question the wisdom of your next idea or guideline or whatever it is. However, if you are lucky enough to have your output meet with some criticism, that criticism may be a corrective on your road to better judgment but it may also be a means to discourage you and make you toe the party line. The way to discourage you is—you will not believe it is that cheap, but I fear it is very likely to be—the identification of all deviation with the standard deviation, with the duty scapegoat.
Well, then, let me try and see how the various techniques I have outlined in previous sections culminate in the technique of kicking a colleague while he is lying down. It is rather obvious—or rather, it should be obvious to you by now. The general theme is this. Academe does reward the daring, brave intellectual, even though it advocates cowardice. The defenders of the cowardly standards are in an exceptionally weak position and they have to attack enemies at their weakest. The academic ideological bore will show you how a scintillating non-bore has ended up in very bad shape; the one who took up an exciting project and for thirty years or so was stuck in a blind alley. This, incidentally, I met regularly when I was studying physics and when students complained about the routine character of the work our professor dished out for us. This now seems to me too obvious for words: behind all this stood the cowardice of those who fear adventure. Do you wish to fail like our scapegoat? It is equally clear that it is pompous and pious and hostile to the intellectual adventurers on whose products we, the more mediocre colleagues, live. The adventurer may, indeed, fail—and then become a scapegoat. Expressions of dedication and loyalty to science as well as to our seniors are often prompted during the process of imposing boredom on students. The trouble with renowned scapegoats is, they never did simple homework. Of humility, I need not say a word: who do you think you are, anyway? Darwin? Einstein? Cantor? So a small project is too small for you already? It bores you not to be an Einstein? If you were an Einstein, then that would be all right, but chances are you are as much of an Einstein as our cherished scapegoats are; they too had dreams. How irresponsible of you: do you not know that someone has to do the dirty work? Should we all be Einsteins?
Look at all those who tried to be original—almost all of them broke their necks and ended up producing nothing better than our notorious scapegoat.
Scapegoats seldom receive mention in classrooms and in lecture-halls, although every mention of them there is pregnant with significance. The real place where they work overtime is in consultations with bright-eyed, opinionated youngsters whose spirits have to be broken for their own good as well as for the greater glory of the commonwealth of learning. Keep up your spirits and do not be drawn into the practice of discussing scapegoats except, perhaps, when you are enough of a scholar to exonerate those scapegoats who deserve better recognition—for our own sakes, not only for theirs.
There are a few things I am rather proud of, small as they are. One of them is that I have succeeded in contributing to the drive to exonerate Dr. Joseph Priestley, the notorious last advocate of the defunct theory of phlogiston. What contemporaries and historians have said of him! Though he had discovered oxygen and I do not know how many other gases, though he was a Fellow of the Royal Society who won the Copley Medal and whatnot, they often crudely and systematically maligned him. Inductivist historians like J. P. Hartog and J. H. White have attempted to clear his name. Not successfully, need I say. Then came others, including J. B. Conant and Stephen Toulmin and Robert Schofield. Even they did not fully explain the situation: it took Popper’s methodology, plus Popper-style observation of the limitations that a Baconian in Priestley’s position had to labor under, to see how much ingenuity there was in his output and how valuable were his persistence and his criticism of Lavoisier—the criticism that enabled Davy soon afterwards to overthrow Lavoisier’s doctrine.
I tried to do the same for Dr. Benjamin Rush. Even his biographers and defenders had no defense for his theory that bloodletting is a panacea. For millennia, almost all western physicians practiced bloodletting, and no protest led to any seriously critical study of that practice. When Rush practiced bloodletting extensively, protests were heard for the first time with some effect. When George Washington fell ill, Rush was not invited to his bedside. Washington died soon after—seemingly of excessive bloodletting. From then on attitudes changed. The eighteenth century had seen the last flare of non-specific nosologies, most of them refutable and refuted. In the early nineteenth century the first important specific school rose, and one of its chief targets was Rush’s doctrine. New statistical techniques had to be devised in order to refute it; and it was refuted, and bloodletting stopped (Adolphe Quetelet).
We all err, but the consistent and somewhat better reasoned error is more easily eliminable. Even Bacon knew that. When he deviated from his inductive philosophy and rushed to make a hypothesis, he said: “truth emerges quicker from error than from confusion”. Hence, the very conspicuousness of scapegoats, which is the reason they were chosen for this unenviable role, makes it more likely that they were great intellectual adventurers than slobs, evidence to the contrary notwithstanding. I am speaking of the great scapegoats, of course, not of the small fry denounced by other small fry. What is common to the great scapegoats and the small is Bacon’s attitude towards them. You cannot admire and criticize a person at the same time, said he. You can: I have little admiration, I admit, for Bacon’s present-day admirers who cannot criticize him; for Bacon himself I have both much criticism and much admiration—and for one and the same reason.
An old professor appears in some of C. P. Snow’s novels, an expert in a recondite topic—Nordic saga—full of it, enjoying it, knowing its narrowness, and in general suffering no nonsense though quite willing to play the buffoon. I do think Snow has described a true case here. Many scholars and researchers had placid characters and fully succeeded in building for themselves a peaceful philosophy of life. My heroes in this line are John Dalton and Thomas Thomson—both British chemists of the early nineteenth century, and close friends they were. I resist the temptation to digress into biography again. Most of the peaceful inductivists, after all, were small contributors to the stock of human knowledge, who aspired to nothing more and who were amply rewarded. Dostoevsky has observed with a measure of bitterness that Nature loves those who want the attainable and offers them what they want, twice and thrice over. Perhaps. In any case, peaceful inductivists leave others alone, and the others reciprocate. Even if this volume, in being an attack on inductivism, were (quite unjustly, of course) deemed an attack on them, they would barely be perturbed by it—if they were, they would not be as imperturbable as they usually are, as Dalton and Thomson were.
Lord Rutherford was one of the greatest as well as one of the most renowned physicists of the early twentieth century. I cannot imagine that any contemporary philosopher, least of all one of the few who at that period studied scientific method, would have been brave enough to tell him how to conduct his research; and if there had been any, he would not have noticed them. And yet he said of methodologists, we researchers will get along in our scientific work well enough if only the students of scientific method will leave us alone and not tell us what to do or how to do it. Methodologists, obviously, were his scapegoats; collectively, since presumably he had never met one in the flesh—unless perhaps by accident and then only to terminate the encounter at once with a well-aimed insult, I imagine. Or perhaps collectively because there was simply hardly any individual methodologist around at that time (Cambridge had one, W. E. Johnson; he spent his time hiding, and most of his methodological ideas that were ever published appeared in the work of his pupil J. M. Keynes somewhat later). The only specific individual methodologist Rutherford could have had in mind when showing so much hostility to methodologists in general is Rutherford himself. For he was a methodologist of sorts—he even began his early scientific career, says an oral tradition, by worrying himself sick about the (quite insoluble) difficulty inductivism has concerning sense illusion (a difficulty that Bacon had already promised to clear, and that, as Bishop Berkeley had already shown, imposes idealism on any consistent inductivist)—a dreadful methodologist was Rutherford, yet the only one brave enough, foolhardy enough, to dare pester such a brilliant physicist as Rutherford.
Do not get upset just because I have said a harsh word about Rutherford. Do not ask me for references just to defy me and expose me and show how irresponsible I am. Would you be half as indignant if I spoke thus of your aunt? If I praised her even more than you do, and her kind heart and her devotion to the family, but dismissed her idea about pop art as somewhat parochial? Why can I not talk about Rutherford just the way I talk about your aunt? Figure this out for yourself if you can.
Next time you take a stroll around campus, if your path chances to cross the library and you happen to be in no particular hurry, do drop in and look up Aku-Aku by Thor Heyerdahl (1957); or, if you browse idly at the campus paperback book store—yes, we all know how sadly unequipped they usually are—and you chance upon it, I do recommend that you glance at a copy. Or you can find it on the internet. Heyerdahl’s case—I am sure you will indulge me one more digression, especially as it is intended to be brief—is quite enlightening, even though Heyerdahl is an anthropologist of sorts, and even though, unlike his popular books, his papers for the learned press are written in as dull a style as he could bring himself to use for the sake of true scholarship. At the end of that volume he reports a dialogue between himself and his aku (that is the Easter Island version of the fairy godmother), who gently chastises him and mildly rebukes him for his inability to curb his propensity to speculate and think up wild hypotheses. Heyerdahl sincerely admits the charges and agrees that he is somewhat in the wrong. But the beauty and peace and serenity of the wide south seas, its deep blue skies, its enchanting emerald islands and their intriguing and fascinating inhabitants—all help to expiate his fault, and his aku helps him to conjure the magic necessary to render some of his wild dreams into science proper and kosher and acceptable (so he thought; his peers remained suspicious).
In the previous pages, I have diagnosed many of the small ills of Academe as rooted in ambivalence. I am glad I can check myself here. I cannot say why Rutherford’s ambivalence tormented him, as it seems it did, whereas Heyerdahl’s did not. Being at peace with oneself renders one less likely to harbor and transmit academic ills. I suppose they both tormented their kin—but at least not with the malice born of self-righteousness, itself born of conflict and of suppressed self-doubt. There is nothing like peace of mind to mitigate bogus troubles—like most academic troubles. Let me end with an acknowledgement: I heard this from a once-famous sociologist, Shmuel Noah Eisenstadt of the Hebrew University of Jerusalem. He told me his observations had made it clear to him that for the sake of his neighbors he had to be at peace with himself, and that this insight had helped him achieve that peace.
We have arrived at one of the most beautiful and admirable things in life, a pleasure to observe, to ponder, to daydream about: intellectual freedom in action.
Dreaming of an impossible utopia—or near-utopia—may torment. Now unlike utopia (etymologically no-where) the near-utopia is there, all the time, yet often people fail to perceive it. Intellectual life is admittedly an escape for many people from all sorts of miseries: they cannot evade all the miseries all the time, but, on the whole, the result is better than expected—especially in present-day luxurious universities. Why, then, are so many academics dissatisfied? Perhaps they envisage the wrong method of escape.
If you want to see what the ideal self-image of Academe is, try to observe it when it is projected and how it sees itself then. Not in public ceremonies, not in hyper-self-conscious exhibitions of True Academe to the whole wide world, to students, to posterity. Observe academics in their intimate, yet not entirely private gatherings, when their spirits are high with genuine academic excitement, when they feel that they get their money’s worth and are all too happy to exhibit their excitement and pleasure to their peers and equals and to heighten it with true participation and generous sharing. Go to the humble parties thrown for the very distinguished visitor before or after an interesting guest-lecture, to the lecture itself and to the ensuing discussion—but take care to disguise yourself as a worthy peer, or your very presence will destroy the unselfconsciousness essential to the activities. (This advice belongs to arch-anthropologist Malinowski: do not take the natives’ descriptions and exhibitions of their own conduct at face value; rather, mix with them, behave like one of them, and observe critically. Do not listen to their words, echoed Einstein with his peculiar sharp-but-good-humored irony; fix your attention on their deeds.) Look at how academics laugh and at how they applaud; see what tickles them most and what draws their loudest applause on such occasions. You need not go to Timbuktu or Zanzibar or the Trobriand Islands for a proper anthropological exercise.
Do not think that I mean this assertion in jest—the exercise I have in mind is anthropological indeed, and I describe conventional modes of conduct, though of the somewhat esoteric kind. The anthropology of laughter is legitimate sociology: with whom a person is in so-called joking-relations within a given tribe, and what expression of such relations is legitimate. These are not matters of temperament but of convention—given to wide temperamental variations, of course. Do professors joke with research associates? With graduate assistants? Even at tea before a distinguished guest-lecture? Do they laugh there? Do students? How do they laugh? Loudly? Softly? Or what? Among young Hassidim, laughter must occasionally be wild; not so among their elders. Some tribes have a standard way for laughter of all sorts. Professors who study history laugh softly at intricate and highly erudite jokes—with a hint of permissible self-indulgence and a touch of condescension. The laughter of physicists is different, especially since they are prone to laugh most at the joke made at the expense of some poor scapegoat, whether an erring physicist, or a plain outsider, or—and this invariably brings the roof down—at the expense of that member of the audience whose mode of participation in the discussion after the lecture is too critical or cantankerous, or who otherwise blunders. Physicists laugh at such jokes heartily, confidently, loudly—not, Heaven forbid, discharging hidden anxiety, but freely and so even somewhat merrily. At least their merriment is heart-warming; sociologists try hard to imitate them but dare not express merriment—they do not possess sufficient self-assurance for that. Anthropologists laugh most eagerly—even at unintended semblances of jokes—I think in order to stress their appreciation of the speaker; but they will not permit themselves the slightest smile when the speaker describes a genuinely ridiculous primitive practice.
It is the same with psychoanalysts when they talk about sex—what else? Things can be much worse. For my part, I dislike a lecture that begins with a barrage of light-hearted and well-worn poor jokes—as if the lecture were an after-dinner speech or a stand-up show—after which opening the lecturer engages in a most ceremonious throat-clearing to signify that the jokes are over and that work begins in earnest. This is a practice common in the less scholarly liberal arts and in the various branches of Academe devoted to the study and dissemination of education. I much prefer the practice of physicians: in public they prefer not to laugh. Period.
I once spoke to a group of American anthropologists about British anthropology. They laughed loudly, and it seemed to me demonstrative: it seemed to me their message was, you can criticize British anthropology to your heart’s content: as Americans, we are immune to it. So I spoke about the anthropology of laughter. That stopped the laughter dead.
How do philosophers laugh, you ask. I do not know. I suppose the atmosphere in philosophy lectures disturbs me so much that I can barely retain the power to observe them with the necessary scientific detachment. Some philosophers, I did notice, however dimly, are eclectic in this matter, depending on their philosophical bent: positivists tend to imitate physicists (not successfully, though), existentialists tend to imitate historians or others in the liberal arts—except when they are embarrassingly solemn, need I say, when their intended jokes are mere cheap expressions of derision. For my part, when I lecture I find the atmosphere so very oppressive that I ramble terribly at first, until I throw in a wild joke for good measure that catches my audience so off-guard that they laugh their heads off. After that, my chief concern is less to keep the flow of my discourse and more to prevent the audience from returning to solemnity—to which end I use all sorts of erratic techniques, from wild associations to sheer verbal pyrotechnics, including small jokes or even occasional big ones. Jokes are no joking matter.
If you are interested in more details, you can read those of my lectures that are in print. Well, I am clearing my throat now. I hope you liked the bagatelle, but I had my reasons for inserting it in this section rather than elsewhere in this volume: it is getting too solemn for my taste. Jokes aside, how academics laugh varies, but it is largely a matter of convention. Similarly with other practices, to which I shall not address myself—having cleared my throat already. The practices are varied, but obviously, in most cases they are manifestations of the same general idea of togetherness, harmony, full mutual accord and sympathy; no strife, no censure, no criticism: just peace and admiration of all scholars and scholarship. This is tribalism. The confusion of criticism with censure clinches it. The same goes for the fear of all censure as if it were the plague. So let me tell you of my projected image of academic near-utopia.
No togetherness (except in my own family), merely mutual respect; no harmony (except in great intimacy), but friendly feelings; and a little generosity will do no harm. And while the game is on, simply total disregard for personality, rank, anything except the game itself: as in a high-powered good game of any kind, personal asides are permitted and even encouraged, provided their content does not interfere with the game and the effect they produce psychologically is conducive to it. Players hit hard and enjoy and appreciate it equally well. They allow for switching positions with ease, and they always do this explicitly and clearly. Even a lecture must report a dialogue. (In Plato’s Symposium, the rules of the game force Socrates to make a speech; he does. In it he reports a dialogue, thus having it both ways.) Defeat, difficulty, possible vagueness, ignorance of what your interlocutor assumes you know well—everything is fully declared and conceded without effort and without rancor. It is not who wins but how you play the game: if the parties play it well enough, it fascinates and excites. Debates should follow the rules of debate; for instance, interrupting speakers is permissible only if it heightens the excitement all round. Strangely, perhaps, I have witnessed two classes of intellectuals who play the game better than any other class of players in Academe—lawyers and electronic engineers. Do not ask me why. Talmudists can do it best, but they usually do it while studying the Talmud, a study that imposes restrictions on their debates that I do not recommend: the Talmud allows any premise to be open to challenge, provided the challenge does not appear to be an assault on a basic religious tenet.
I have at last come to my point concerning the natural history of intellectual courage: it is not who wins but how you play the game; let the best party win, by all means; and if the best party is not my party, we shall do something about this in due course, not just now. I shall discuss techniques proper in the next part, and explain why, and how, one of the worst violations of the rules of the game is to anticipate the critics’ next move and to answer it without first having it clearly voiced. (Who expresses it, the defender or critic, matters little.) Here I can say that anticipating outcomes of debates always amounts to cowardice, and thus to stupefaction.
Admirable Benjamin Franklin was an excellent debater, though in his Autobiography he declared that a gentleman does not argue and that in science disagreements resolve faster without debates (since these only increase the stubbornness of losers, making them less prone to give up errors). In the same Autobiography he brags in good humor about his dialectical abilities. He tells us he had defeated a roommate of his so often that the poor fellow learned to refuse to answer even the simplest questions put to him, lest they be traps and some consequence of his answer, of which he was not yet aware, be later used against him.
Poor fellow. His reluctance made him unable to learn further consequences of his views. He preferred not to lose debates even at the expense of curbing his intellectual development. Often one feels unable to afford losing a given debate. If one were eager enough to know what one’s opponent might say, the debate would proceed. Moreover, one may stop a debate for a while and debate the grounds for one’s fear of losing it. But one is equally afraid to lose those grounds, or ashamed to confess fear, so that one does not even care to discover to what extent one’s fears are well founded. One does want to learn, but …
Which brings us back to the old point of Johannes Kepler: curiosity leads to intellectual courage more than anything else does. So let us observe Kepler’s point a bit more closely and critically.
To begin with, take a simple fact. Many dull lecturers are capable of delivering truly exciting and amusing lectures. They would do so on very rare occasions, among close friends, perhaps to audiences of one or two. I was fortunate to belong to such audiences, and I cannot tell you how great the experience was.
Have you ever watched a good performance of Lear or Hamlet without attention to your high-school Shakespeare? It is not easy; Alfred North Whitehead confessed that high school had spoilt Lear for him for life. It took me hard work to learn to enjoy the Bible—by a roundabout way, reading The Epic of Gilgamesh and then Ecclesiastes, which schools neglect. Have you ever read those for diversion? The experiences are elating. Perhaps you should listen to them recited (there is a BBC version of Gilgamesh on the Web). The experience of listening to an excellent recital or lecture is very interesting. You sit, and in no time you lose all sense of time; you cannot tell whether time passes extremely fast or stands still—it looks as if it is both; a while later, somehow the spell breaks, and you look at the clock and observe with a real shock that only a few minutes have passed; you can barely stop to contemplate, because the speaker has captured you again and enchanted you; at the back of your mind some observer still keeps awake and notices that the whole visual world has altered—colors, distances, sounds all seem both more intense and strangely more remote: in a sense you are transported into a Platonic Heaven where somehow the speaker and you share ideas and observe together the development of the narrative of the discourse—logical steps from one proposition to another: things fall together so beautifully that associations and imageries turn pale, the speaker’s shape and voice recede to a remote background, and what you hear are not sounds or even words, but ideas.
A beginner reader reads letters of the alphabet; an advanced reader reads words, phrases, and even sentences. Reading normally advances to the point that one is unaware of letters, but one is seldom unaware of words and sentences—this happens only when one is so utterly absorbed in reading or listening, and when reading or listening is so effortless, that one absorbs chains of ideas or propositions. It is the same, say, with driving: a beginner uses the accelerator, the brakes, the steering wheel; an experienced driver, driving effortlessly, merely decides what to do and accordingly wills the car to act this way or that; the whole complex of body plus car—the so-called phantom-body—somehow obeys. When an experienced driver has to keep a wide car on a narrow road, the driver has to develop a feel for it in order to drive in a relaxed mode. Similarly, upon hearing a high-powered lecture one has to learn to relax; this leads to the same experience, much more intense.
It may puzzle you why so many people go to listen to boring lectures, just as it puzzled me in my childhood why adults go to synagogue or church. Some go from a sense of duty, some to escape an even more oppressive boredom at home and in the hope of meeting an acquaintance before or after the ritual. Some remember one occasion of an exciting event, and they keep going a hundred times for fear of missing the rare occasion on which the event is so rewarding. It is like the daily purchase of a newspaper so as not to miss the rare occasion of an interesting news item or comic strip.
Why are interesting lectures so rare? Why do good speakers prefer to give dry academic lectures? They are chicken, that’s why. Once I heard a lecture on the recent history of a local geological survey that outlined its achievements over the last few decades. After the lecture was over, in a small circle, someone who knew the history first-hand and who was a friend of the speaker said, “Once, over a glass of beer, I may tell you how things really happened”. What a pity.
Whatever the causes, and I have tried to analyze some of them in previous sections, one thing the above anecdote makes clear: intellectual cowardice is not so much a matter of feelings as of conformity to certain conventions, to accepted rules of good form. Which is to say that Kepler’s view of the matter needs some supplementation.
What makes the sociological approach so much superior to the psychological approach, Popper repeatedly observed in his lecture courses, and he was a superb lecturer, is a simple fact, and an obvious one. Psychology all too often centers on motivation, whereas sociology plays down personal motives as far as possible under the circumstances. It cannot eliminate all motives, but it can introduce some uniformity, some leveling, by playing them down. For example, consider all those who work for one given railway company, with all the complex motives and mentalities that they possess. I shudder to think of the complexity involved in a simple railway timetable, said Bertrand Russell. Yet somehow, railway timetables operate fairly well. There was, indeed, in the British humor magazine Punch a cartoon depicting a shelf in a public library under the title “Fiction” with a railway timetable on it. Yet this is something of an exaggeration because, Popper noted, timetable planners as well as sociologists can ignore the deep motives of railway workers and assume that their sole motive in working for the company is earning their wages and making a decent living.
Making a decent living is similarly the motive of many an academic: if they were of independent means they might or might not remain scholars, but most of them would have left the universities. They conform to the rules of the game because they wish to stay in; and one of the rules, they fancy, is to provide dull lectures. Now, some of the rules concern very external matters, such as matters of good form; others are more elusive, and refer to intellectual honesty so-called.
What exactly intellectual honesty is may be under dispute—though it is under dispute less than one might expect, because the literature on it is scant. Here then are some obvious received ideas about it. Clearly, on one point Bacon was dead right: when scholars face refutations of their views, he observed, rather than give them up they make fine distinctions. You hear this constantly: “this is so because”. The right answer to this locution is: “never mind why this is so; your admission that this is so is an admission that your initial hypothesis is refuted; you should say so openly.”
In view of the possibility of violating the rules of the game without dishonesty—whether from bad training or from no training—discussion of these violations has to be social. Most people, academics included, honest as well as dishonest, cheat in the game of intellectual discourse in the crudest manners possible, chiefly from ignorance—an ignorance that persists because of the sheer cowardly demand to consider it a moral duty to defend one’s intellectual position to the last, no matter how poor it happens to be: one should not concede defeat unless one is sure that one is hopelessly defeated—which is seldom the case. The permission to reap victory but not to concede defeat before the very end of a debate is obviously dishonest, yet students take it for granted, imbibing it from their elders and betters. They would change the subject and introduce red herrings to get out of a tough spot. They would surreptitiously shift positions and declare they were misunderstood. They would most incredibly reinterpret what they had said in the light of what they have just heard from their interlocutors instead of thanking them for the new information. If they were playing bridge or chess or tennis they would blush at their cheating, but being engaged in debate they feel only self-righteous—or rather, their cowardice takes this shape.
Or a person may be very dishonest but play the game with meticulous care in order to gain some personal profit—whether the recognition of peers or the approval of audiences: this is how a sufficiently clever speaker would act whose peers or audiences know the rules of the game well enough. This is why politicians in countries with educated populations dare not deviate from the truth as much and as obviously as politicians in countries whose populations are poorly educated. This is precisely why we find the exception incredible: the regular, blatant deviations from obvious truths of President Donald Trump. He appeals to an electorate whom he evidently holds in contempt.
A person well versed in the game may induce an opponent to play fairly. Playing by the book may be an ideal, but I am foolhardy enough to think it does happen on occasion. Intellectually honest people may discuss morality with able people and be converted from cynicism or a-moralism to moral life. When this happens, the music of the spheres fills the air.
Still, with all the social trimmings, the present point is simple: intellectual dishonesty is unwise. If you are mistaken you are better off knowing it. If you hope to conceal defeat from your interlocutor but confess it to yourself, you are too-clever-by-half: you have to concede it in company so as to continue the discussion and see what your error amounts to; your interlocutor will either know of your change of view or not be available any more. If you value instruction enough, the mock-shame resulting from the concession of defeat evaporates.
Psychologically speaking, intellectual honesty is not mental but intellectual; the contrary feeling is strictly a displacement, to use Freud’s term. Psychologically speaking, intellectual courage, too, is not courage in the sense of either civil or physical courage since no risk is involved in the admission of error—not even in public and not even in politics; in Freud’s terms, the contrary feeling is strictly a projection: losing it is no loss.
Nevertheless, the denial of all support that moral and intellectual courage lend each other is excessive. If your opponent offers you some new information, you should acknowledge it and express gratitude. If your interlocutor presents you with a new stimulating question, you should likewise acknowledge gratefully that you have never considered it and have to do so now. Ignore the moral side of this and you are merely a knave; ignore the intellectual side of it, however, and you are a fool. Moreover, as William Hazlitt has wisely observed, behind a fool stands a knave who exploits him. All other things being equal, where study is concerned, folly is more of an obstacle than immorality. In any case, I take it for granted that you are neither a fool nor a knave.
What then is the intellectual aspect and what is the social aspect of intellectual courage? How much courage is due to one’s purity of heart and how much due to one’s acceptance of the norms of one’s society? I can barely find out where to begin—simply because the display of intellectual courage depends on the way one plays the game. Which means that no intellect is an island and we cannot learn except in a society that tolerates learning so that novices may exercise the game in cooperation with individuals who know something about its rules.
Medieval philosophers faced a very difficult paradox: What is higher, goodness by nature or goodness by effort? From the extrovert viewpoint, surely, goodness by nature is preferable as less prone to temptation; from the introvert viewpoint, goodness by effort is the triumph of goodness, and the effort surely deserves greater recognition and reward.
This paradox is insoluble. To the theologian it may present a real headache; not for our purposes, however. Intellectual courage leads to intellectual honesty and vice versa; and they both may rest on moral nature or on conviction, as well as on burning curiosity that may come from any direction and by any contingency; or they may both come from the willingness to conform or to gain recognition. It all matters not. What matters is not who wins—not even why one plays—but how one plays the game. Yet I face a paradox too: I have said that the rules Academe follows differ from the right rules of the game; I have criticized Popper for his suggestion that researchers abide by the right rules of the game. Yet now I am speaking of the rules as conventional after all. Where are we? Where do we stand?
Researchers obey the rules of the game; academics often violate them; hence, often academics are not researchers. Many of them are pseudo-researchers or pseudo-scientists, especially the ones engaged in psychology, education, sociology, and politology. It may be more reasonable to view most academics as people who do not know the rules of the game and so play it improperly, while the cream of Academe do know them. The fact is there: those whom peers judge able to bring in the bread-and-butter are popular anyway (with some exceptions, of course: we cannot possibly weed out all the fakes, but it is hard to like them). Now all this is not good enough, since some of the best psychologists—Freud and Adler, for instance—broke the rules like bulls in china shops, yet we (rightly) value their contributions from the viewpoint of the game: they have enriched it. The academic tradition gratefully receives all contributions, no matter from what direction they come. The ghosts of so many once-venerated-and-now-forgotten scholars and researchers demand a serious discussion of the question: what are the rules that impose unanimity in selecting the people who rest in our scientific Pantheon? Historians of science try to answer this question, and I have written a whole pamphlet (the one that brought me my successful career) ridiculing the modes of thought that most of them employ. Yet my pamphlet is unjust—as all my reviewers have noted, though arguing rather poorly for the claim—since most of its (just) strictures are funny at the expense of sincere, unfortunate historians of science.
My injustice was in not stating explicitly that I do endorse the list of heroes that these historians laud, although I reject their detailed hagiography as inept. How then do we all decide which researchers deserve a place in our scientific Pantheon? By deciding that the moves they initiated are great moves in the game of science, a game that began in antiquity and is still going strong. What is this game? The parts of my pamphlet that I am proud of are my repeated explanations of the importance of the contributions we all agree are important: they are refutations of great ideas. The best part of that pamphlet is my discussion of Ørsted’s discovery. It was unanimously praised at the time; it still is; yet with no explanation why. He sought it for decades. When he found it, he went into shock and was dazed for three months. (He said he could not account for his activities during these months.) Why? Because he discovered forces that are not Newtonian, thus refuting Newton’s axiom that all forces are central.
This only strengthens my query: why do we all play the game by Popper’s Socratic rules yet claim to do something else?
Answer: we all admit the Socratic rules; the disagreement with Popper is this: he said there are no other (inductive) rules, whereas most academics hold the deep conviction that we need more rules. To examine this disagreement empirically or historically, we need first to sift the grain from the chaff. This might be question-begging. Miraculously it is not: Popper’s idea suffices, for we all agree that yesteryear’s scientific heroes who are now forgotten are forgotten with some justice. This miracle does not hold for academic heroes: we admire those who occupy the academic hall of fame, and we have not yet examined the situation.
Somehow or other, if justice does not prevail in Academe, neither does injustice: if you are intellectually brave, everybody will discourage you—your professor in the name of scholarship, and your aunt in the name of commonsense. For your own good, they wish to spare you too much trouble in college. Yet if you refuse to obey, you have some chance to receive a reward. Thus, that chance is higher here than in any other culture or sub-culture on earth—past or present. As I keep telling you, although Utopia exists nowhere, Academe has come the nearest to it that we know of. I recommend you take advantage of this obvious fact right now.
Anything to keep my book out of any kind of semblance of uniformity: I now embark on a book-review proper; if this sounds too academic for you, do skip it by all means; but if you enjoy seeing a big shot cut down to size, stay with me in this section a bit longer before you thumb through to the next. I have a friend who greatly enjoys reading my book-reviews: his wife tells him he is a venomous reviewer, and he therefore enjoys noticing, like Gustav in Emil and the Detectives by Erich Kästner, that he is not at the bottom of the class but second from the bottom. I shall try to stay true to my reputation as the very worst.
The book I have chosen to review is perhaps the most celebrated of its kind: it was a bestseller. It is Jerome S. Bruner’s slim The Process of Education of 1960. It is the outcome of a 1959 conference of thirty-five scientists, scholars, and educationists, under the distinguished auspices of the U. S. National Academy of Sciences. Bruner headed the conference and reported on it in this volume. Some of the participants in that conference have added to Bruner’s deliberations.
Bruner’s volume is the result of a survey of existing large-scale projects of educational reforms in high schools in the United States of America. He worded it as a manifesto, though its tone is somewhat more explanatory than declarative. The merit of the volume is in the great clarity of its exposition and its brevity, as well as its manifesto-like character. Even the leading educational iconoclast Paul Goodman has called it “lovely”. I hope my laboring the obvious in this instance is excusable. I will spare you a discussion of Bruner’s august career or his title as the father of the cognitive revolution in education, or the nature of that revolution. Here then is the review of his 1960 report.
In his Introduction Bruner presents problems: what to teach, where, and how? With what purpose, what emphasis, and what effective techniques should we provide education? These questions arose because of the increased expansion of Academe after World War II and the subsequent desire of university teachers to control high school education. Bruner took it for granted that this desire is positive, and for two simple reasons that he deemed indisputable: academics are intellectually superior to high school teachers, and this gives them the right and duty to control all education.
It is to Bruner’s credit that though he was a psychologist, he was critical of learning psychology as too academic or too abstract to apply to the facts of learning in schools. Traditionally, he reported, educators separated transmitting general understanding from transmitting special techniques. Under the impact of late nineteenth-century psychology, the scale tipped in favor of techniques—only to be soon altered. One may wonder how valid were Bruner’s observations at the time, and, more so, how valid they are now. However, this is a separate issue, and at the very least we should acknowledge Bruner’s readiness to be critical of the system, especially since, it turns out, his final verdict on it was in its favor.
Chapter One presents Bruner’s structural approach. It is hardly more than praise of structures—where structures are general theories. General theories are applicable to specific cases. They may be useful to students, but alas only to the very clever and advanced ones. This fits well the tendency of most teachers to neglect the top quarter of high school students on the excuse that they do well. Most surveys show that this applies to all teachers except for the few most ambitious ones. Can the structural approach—the transmission of general theories, chiefly—offer specific help to those who need it without handicapping the rest? These are the questions that Bruner raises and plans to discuss in later chapters. He does not.
Chapter Two advocates the structural approach. Neither facts nor techniques are educationally as important as some familiarity with the most general theories available. They are important for those who will not be specialists, who will thus need only a general outline of the subject; they are important for those who will later become experts: their high school studies should facilitate their future specialized studies. The trouble is, most teachers cannot convey general theories. Various committees, manned by the very best top dogs, have now grown like mushrooms to aid them with proper textbooks. Bruner approves of them with no comment. The poor teachers, who desperately depend on their textbooks (their dependence, Bruner admits, is excessive), will hopefully benefit from these. Now, as this process is over, we may seek empirical information as to their degree of success. In the Preface to his 1996 The Culture of Education, Bruner discusses the test-frame for budding ideas and dismisses his 1960 presentation as exhibiting too narrow an attitude toward education. So no test for it.
This matters little: the textbooks on which teachers depend, Bruner points out already in 1960, are insufficient at best: they need supplementation. They include no adequate treatment of either learning or research, and the poor kids need both the latest and most general theories, in addition to some research techniques. Are the kids to whom he refers the top quarter or the rest or all of them? Are they the prospective specialists or others or all? He does not tell. Perhaps this point matters little, since, in Bruner’s view, experience with the method of discovery shows that kids learn faster when they are allowed to discover the material for themselves. This raises a basic question: why not follow the advice of Jean-Jacques Rousseau (Émile, 1762) to leave kids to their own devices in Nature’s bosom? Like all those who discuss the discovery method—now forgotten but still popular in 1960, when this book was written—he seems to take success for granted. This is a gross error: even the best researchers have no assurance of success. (Einstein and Planck worked for many years with little or no success, even after they won the status of very great discoverers.) Perhaps all the fuss I am making is about the pretentious title by which writers (including Bruner) refer to this method: “method of discovery” or “discovery method”. In the book under review Bruner takes cognizance only once (and in passing) of the fact that teachers are familiar with the solutions they help students discover.
Let me hasten to add that my discontent is with the ambiguity of Bruner’s text about the discovery method, which is admittedly widespread. I have no intention to belittle all the texts that belong to the discovery-method literature, particularly not Modern Arithmetic through Discovery. I simply view them in a very different light than most writers do, Bruner included.
We are still in Bruner’s Chapter Two. He eulogizes there. Students who understand a general theory understand more clearly the cases it applies to; they remember them better; they can even transfer their training and increase their capacity in other fields. Moreover, learning general theories spreads over various years and affords students opportunities in later classes to re-examine and deepen their understanding of the material learned in early classes. As things are, what kids learn in early classes is outdated or misleading; as the new method has it, the material in the earliest class is already up-to-date, only less profound and detailed. Moreover, in such a process students also learn how to apply scientific methods and thus acquire experience in the method of discovery. All these fringe benefits apply to science as well as to mathematics and even to literature (where the laboratory has to give way to efforts to imitate the style of Henry James). These are very general claims; they are very weighty; Bruner makes them rather casually.
Readers who suspect that the above paragraph is a caricature are very nice; sadly, it is not. I invite them to read Chapter Two and judge for themselves. Other readers may see nothing obviously amiss in what I report. Now Bruner’s wish is that teachers assiduously avoid all teaching of outdated material—even in early high school classes. For those who find nothing wrong with this attitude of Bruner, let me elaborate a bit on it, and explain why it seems to me impossible if not monstrous.
Bruner considers mechanics to offer the best example for his method. By the old method, a student would start with Galileo and proceed to Newton later—perhaps a year later. Bruner should dismiss this as erroneous or misleading. Galileo says that all freely falling bodies accelerate equally. Taken literally, this is erroneous or misleading unless the word “roughly” is explicitly inserted into the wording of the law. For, as Newton tells us, the acceleration of a freely falling body is smaller the higher it is above the surface of the earth. Galileo’s derivation draws vertical lines from the various positions of a projectile, assuming that they are parallel; the projectile’s path is then a parabola. These vertical lines are sufficiently nearly parallel for practical purposes; theoretically, they are not, since they all meet at the center of the earth. (The parabola is an ellipse with its distant focus at infinity. This is of tremendous importance for Newton’s marvelous unification of Galileo’s and Kepler’s laws.)
As Newton’s theory is more up-to-date than Galileo’s, let us take it instead. It is less up-to-date than Einstein’s. Newton says that forces act at a distance; Einstein says that they nearly act at a distance, traveling as they do with the speed of light. Should ten-year-old kids be taught Einstein? By teachers who can only do so if they use textbooks prepared by the best brains in the field? Should they approach the study of the conduction of heat from the viewpoint of quantum theory and be told that metals conduct easily because they contain electrons that behave like a gas, being rather free and obeying the Fermi-Dirac statistics? Or is this theory already out of date? I cannot say, since my knowledge of physics is painfully not fully up-to-date.
All that Bruner asserts in favor of the structural method should apply beneficially to any person who can study in accordance with it. Such a person usually belongs in graduate courses as taught in the better universities; but some kids are precocious enough for this method, and to them, possibly, all that he says in his eulogy may be profitably applicable.
The most advanced theory may become out-of-date, hopefully due to some progress in the field. In that case Professor Bruner will say that it is erroneous or misleading. Hence, today’s discovery somehow accompanies the recognition that yesterday’s views were somewhat erroneous or somewhat misleading. Students who realize that Newton’s theory corrects Galileo’s, and that Einstein’s theory corrects Newton’s, may suspect that Einstein’s too need not be the last word. Yet, if they begin with Einstein, they may become precocious dogmatists. This ends my discussion of Bruner’s Chapter Two. My assertion that to the extent that Bruner’s suggestion is applicable it is a recommendation of education for dogmatism may want some elaboration, though.
Bruner’s Chapter Three opens with a bold hypothesis. Every subject can be taught to every schoolchild at any stage of development, though admittedly it has to be taught superficially at first.
What is the content of the hypothesis? In one sense of “subject”, this allegedly bold hypothesis is trivially true; and in another sense, it is trivially false. Bruner says it has won ample empirical confirmation; the version of the hypothesis that is empirically testable is neither: it lies somewhere in between. Where exactly? Let us modify “subject” to make it mean a set of problems: physics asks questions concerning weights and temperatures, economics concerning budgets and trade. Already as children, we knew many questions from both physics and economics and some answers—mistaken and vague—even before we entered elementary school. We knew how much bubble-gum costs, and its opportunity-cost in terms of ice-cream; we knew that daddy could not afford a Cadillac convertible; we knew that toy-cars fall faster than feathers and even that cold weather can turn water into ice. We even had a few ideas about genetics and hematology, come to think of it; and we were fully-fledged criminologists and experts in space science from having watched cartoons on super-heroes. All this wants no support from Bruner and no confirmation.
Take the second sense of “subject”, then. Up-to-date theories. Bruner uses the words “subject” and “structure” interchangeably. Can we explain the Fermi-Dirac statistics to an average eleven-year-old? No. We can explain to kids some of Mendel’s genetics, but not up-to-date genetics; Pasteur’s ideas, but not even the vaguest notion of the latest views concerning the etiology of cancer.
Evidently, Bruner meant the golden mean: for physics, he had in mind neither Aristotle nor Einstein but a smattering of Newton; for geometry neither primitive nor differential geometry but a smattering of Euclid. The smattering that he mentions as what kids can learn is trite. That would not matter, except for the fact that theories such as Euclid’s, not up-to-date in the least, fall under Bruner’s category of false or misleading ones. Chapter Two makes this obvious.
Chapter Three offers examples. They should confirm Bruner’s bold hypothesis. Being false, misleading, and unacceptable as science, they do not; they only show that the knowledge of science that Bruner displays is far from up-to-date. This is no fault, yet it disproves his bold hypothesis.
In the same chapter, Bruner also advocates the spiral curriculum, so called; it is the method of teaching the same subject a few times on different levels of detail and precision. This, again, is correct. It is old hat. To be consistent, however, he should object to lower levels of precision as imprecision, since imprecision comprises mistaken or misleading claims. Nothing remains then of Chapter Three—except for Bruner’s admission—to his credit—that any curriculum may be open to revision, pending further research.
Chapter Four advocates intuitive thinking. First, he contrasts the intuitive with the formal. Then he contrasts the intuitive with the analytic. Further on, he describes analytic thinking as explicit, be it inductive or deductive. He then describes intuitive thinking as skipping steps with little or no awareness of how this occurs. Intuitive understanding of a given material in class he then contrasts with the more traditional forms of deduction and demonstration. He mentions explicitly three methods of learning: the analytic, the inductive, and the intuitive.
On a rainy day, having nothing better to do, you may list the ambiguities, incongruities, inconsistencies, and cross-purposes, implicit in the above paragraph. To Bruner’s credit, however, I should stress one point. Chapter Four mentions the existence of problems: it mentions the intuitive as a mode of their solutions.
Bruner adds a warning: intuition has its pitfalls. He stresses that the outcome of any intuition may be false; only if it is true is it analytically provable. He does not suppose that any scientific theory may sooner or later turn out to be false or misleading: this is contrary to his Chapter Two. To his credit, however, when talking about intuition he recognizes that there is no finality in science—for a short while, admittedly, as the outcome of intuition awaits proof to supplement or overthrow it—but nonetheless, and fortunately, he does admit there that science lacks finality.
How does one train for intuition? Encourage students to guess? Their guesses are too often likely to be false. Bruner recommends guided guessing. (This brings him closer to the so-called method of discovery in teaching.) He also notes that intuition grows as a by-product: the more you know the better you guess. Proof: a physician’s tentative diagnosis improves with experience. This proof, let me hazard a guess, is not empirically confirmed; it was never tested. The chapter continues for a few more pages, and, to Bruner’s credit, its two closing paragraphs refer to practical difficulties.
Chapter Five concerns incentives. At the end of his Introduction Bruner says this. Ideally, the best incentive is the student’s own interest, but it is neither possible nor wise to abolish grades and other external incentives. Chapter Three returns to this and admits that students’ inherent interest in the material at hand is a good incentive and grades are a poor substitute for it. Why, then, does he deem it unwise to avoid the poor substitutes? Now, exams are relatively new; for millennia, schools had no use for them. Before compulsory education, the end of exams was to qualify professionals, usually clerics; artisans had to produce masterpieces to qualify as masters. The discussion about them thus raises the question, is education for the life of the intellect or preparation for the life of action? The practically minded planners of the curriculum, Bruner observes, want it to serve short-term purposes; the others demand the opposite. Bruner and the whole conference that he chaired go for a middle-of-the-road proposal. Does this support exams? What do the usual middle-of-the-road exams look like? Do they differ from the traditional ones?
Most of the material in this chapter covers broad topics: American traditions, the crisis in the feeling of national security, meritocracy, and the two cultures. Does all this lead to a reform of the curriculum? If yes, what kind of reform is advisable? With present-day techniques, Bruner admits, arousing kids’ interest sufficiently is impossible: most of what we teach is intrinsically dull. Nevertheless, it is useful: national security and jobs in industry. This may be a recommendation to render all schools vocational. This may leave no time for studying the arts, and no jobs for teachers in the arts. Hence, we must do something against these risks, Bruner notes: we should enlist federal aid for education in the arts, and seek new ways of coping with these risks. This is puzzling. For four chapters Bruner assures us that his new method is applicable to the arts—witness his recommendation that kids should be encouraged to try to imitate the style of Henry James. Now he admits that he needs money for research before he can help raise the level of the arts and of literature in school. Odd.
Chapter Six, the finale, discusses teaching aids. These are very good at times, but they do not replace the teacher who must serve as a living example to budding intellectuals—as an intellectual father figure. A teacher is “a model of competence”, “a personal symbol of the educational process”, an “identification figure”. Bruner also admits: unfortunately, some teachers are just terrible. No proposal for improvement.
This, then, is a summary of the content of the volume on how to improve teaching: the most up-to-date and the most general theories should be processed into the standard textbooks for all ages and taught by a semblance of the discovery method while prompting kids to develop their intuitions—by motivating them; by arousing their interest; by promising them high grades. The aid of teaching machines and movies may be useful, but the primary factor is this: we need teachers who can serve as intellectual father figures; they should be morally noble and intellectually armed with the most up-to-date textbooks written by the cleverest people around. Each of these points merits much more research—urgently. Federal Funds please take notice!
This book is dilettante and confused, yet it is distinguished and important. This provides what the leading composer and musicologist Roger Sessions has called the inner dynamics of the piece. Scathing reviews are rather hard to write: if the book under review has merit, a scathing review of it is out of place; otherwise, a review of it is redundant. (Unmasking is intellectually cheap.) A reviewer has to find the intellectually poor volume under review significant by some criterion other than the intellectual, say, the practical. This volume is of great practical value: it should ring an alarm bell. The longer the alarm bell is silent the more urgent it is to activate it.
An author of a scathing review may show some kind of courage by publishing it, but hardly intellectual courage. I surely have no intellectual pretense in doing so. The only thing I am adamant about is my hostility to avoidable compulsion. Further, I advocate, beginning at least in high school if not a little earlier, the application of the dialectical method in teaching—raising problems, airing solutions to them, offering criticisms of the solutions. This process begins where the student happens to be and ends when the course ends. The unavoidable end is either the last solution or the last criticism—depending on the present state of knowledge. Beyond this, the process is not instruction but research.
All this raises questions about academic conduct—including academic rituals and academic taboos—and about academic honesty and about whether and how the two can go together. A poor intellectual may get away with it by playing the academic game properly—by following the rituals and saying the silly things that most people believe in already. The system may, reluctantly, denounce the independent and brand them cranks. On rare occasions, this may lead to expulsion or to isolation. The former procedure takes place in the world of the free professions and is a very tedious and costly procedure. The latter takes place in the intellectual world. As it is less costly, it applies more frequently and freely. Expulsion is official; its victim is legally barred from practice. Isolation is unofficial, and sometimes operates like an invisible web in a Kafkaesque world: the isolated practice freely but in isolation—and on rare occasions, they interest audiences and become fashionable. They may become fashionable because they are the last of a generation and magnificent grand old masters. As they cease to be dangerous, it becomes the ritual to admire them. Even otherwise, they stand a chance. Intellectually honest young people may find their ideas interesting; they may study and discuss them and examine their worth. Let me end with one more reference to Faraday: he was isolated intellectually but admired as a public performer. He used his lectures to advocate his ideas. Two of his famous Christmas lecture series aimed at children have appeared; if you are interested, you can glance at them and see how far they succeeded in carrying out his subversive plan: the revolution in physics that he effected was barely noticed. (Einstein’s praise of Faraday and his expression of debt to him have passed unnoticed.)
The Establishment dismissed my book on it, even though it appeared over a century later: fields of force are no longer at issue, but the very idea of them as revolutionary is still too subversive.
I had planned to talk in this section about intellectual courage, but I got tired of the topic after the last section and before the next part. In the next part, I am going to help you learn to act bravely, and I hoped to show you first that you need not fear too much trying out my proposals. I have illustrated this to you, I hope, without much pep talk: if Bruner and his colleagues could get away with what I have described, anyone can get away with almost anything. No kidding: Academe is tolerant. Never underrate this terrific quality.
Faraday discovered that electrolysis can take place even in the absence of water. He claimed that this refutes the view of his late teacher Sir Humphry Davy. Dr. John Davy, Sir Humphry’s brother, expressed indignation: Sir Humphry had not said that electrolysis in the absence of water is impossible; he had merely observed that he could not find such a phenomenon. Faraday was breaking a very strong rule of the game, and the attack on him was no small matter. Ever since the seventeenth century, the system discouraged researchers from publishing their guesses explicitly—since guesses may be false. When you criticize a researcher who worked by the book you should also work by the book: criticize implicitly. Faraday hated controversy, especially in his early days (he started as a Baconian).
Nevertheless, under heavy fire he published a detailed reply. Had I claimed that water is the only solvent for electrolysis, Dr. Davy would have claimed this idea for his late brother, said Faraday; now that I have proved the contrary, he argues in accord with my finding. It is hard to say where the story ended, since it became submerged in much bigger issues—Faraday’s heresies caused his isolation as a thinker. He won by a fluke: his cousin was a temporary lecturer replacing a sick professor; one of his students was William Thomson (Lord Kelvin), who later advised Maxwell to study Faraday. The rest is history.
The commonwealth of learning has a network of fluke-hunters—the intellectually honest—the elite whose influence is out of all proportion to its size. Perhaps you are mistaken in your decision to join Academe; but if you do, why not join the very elite: it is at no extra cost. Most academics I know suffer academic agonies—both from a sense of duty and from fear of ostracism. At half the effort and a little more planning, they could join the elite. How, I do not know, but it engages the next part of this volume.
 This very popular expression is confusing. On the face of it, this is the recommendation to report observations and avoid all conjecture. This is how commentators praised empirical researchers. This includes the avoidance of the claim that what one observes is a general fact. Thus, Robert Hooke wrote: “On Saturday morning, April 21, 1667, I first saw a Comet.” This is charming but unusual. When Boyle described the pressure cooker (that he had invented) he went into so many irrelevant details that the reader got lost in them. The reason was that only general facts are scientific, and assertions of them are conjectures. These are less likely to err the more qualified they are. Yet we do not know what qualifications are relevant to what observation. Only when a generalized observation is refuted do we know how to restate it with qualifications, as Newton said we should do. Thus, the advice to avoid sweeping generalizations may be the advice to report observations naively, and it may mean the advice to state a generalization but avoid explaining it. Boyle said, if you must, you may do so briefly at the end of a paper. The end of the paper of James Watson and Francis Crick on their discovery of DNA says accordingly, “It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material”—the understatement of the century.
 Yehuda Fried and JA, Psychiatry as Medicine, 1983, Ch. 1.
 Some commentators on Einstein’s view of method say he relied on information, since he did not rely on intuition; others say he relied on intuition, since he did not rely on information.
 Famous Benedictine, Distinguished University Professor Stanley L. Jaki, criticized me for my having offered no grounds for my rejection of all grounds.
 To please the inductivists my Science and Its History, 2008, documents this claim with heaps of details from the history of the natural sciences.
 See my “The Mystery of the Ravens”, Philosophy of Science, 33, 1966, 395-402, republished in my The Gentle Art of Philosophical Polemics: Selected Reviews and Comments, 1988.
 The sexual metaphor is Kabbalist, popular at the time and alludes to the mediaeval tales of courtly love: stoop to conquer. See my The Very Idea of Modern Science, 2013.
 Mermaids are different: they are born out of the yearning of all-male crews, not out of ambivalence.
 Ruth Borchard, John Stuart Mill the Man, 1957.
 For another, very detailed and rather boring portrait of an ideological academic bore see my Ludwig Wittgenstein’s Philosophical Investigations: An Attempt at a Critical Rationalist Appraisal, 2018.
 The results of experiments that took place at Western Electric’s factory at Hawthorne, Chicago, in the late 1920s and early 1930s comprise the famous Hawthorne effect, still under dispute.
 L. E. Davis and A. B. Cherns, editors, The Quality of Working Life, 1975.
 Judith Buber Agassi, Evaluation of Approaches in Recent Swedish Work Reforms, 1985.
 John Locke, the most prominent Baconian, compares criticism to the work of clearing the ground prior to building on it―the under-worker’s task. He took this metaphor from Robert Boyle, incidentally, whose research assistant he used to be while he was a student.
 Ben-Ami Scharfstein, The Philosophers: Their Lives and the Nature of Their Thought, 1989.
 See my “Academic Democracy Threatened: The Case of Boston University”, Interchange, 21, 1990, 26-34 and my review of John Silber, Straight Shooting, Interchange, 21, 1990, 80-1, both republished in my The Hazard Called Education: Essays, Reviews and Dialogues on Education from Forty-Five Years, Ronald Swartz and Sheldon Richmond, editors, with a Foreword by Ian Winchester, 2014.
 Shaw’s analysis of hypocrisy is penetrating. It conflicts with a vast anti-Marxist literature: wrongdoers are frank when they see no harm in frankness; hypocrisy masks blunder and folly, not ill will. (In my adolescence I became a Marxist because I looked for fiendish schemes behind the hypocrisy of the British administration in Palestine, and Marxism claimed to have exposed such schemes.)
 See Reinhold Niebuhr, Moral Man and Immoral Society, 1932.
 See my Faraday as a Natural Philosopher, 1971, Ch. 3.
 See my “Williams Dodges Agassi’s Criticism”, British Journal for the Philosophy of Science, 29, 1978, 248-52. Reprinted in my The Gentle Art of Philosophical Polemics: Selected Reviews and Comments, 1988.
 See my “Field Theory in De La Rive’s Treatise”, Organon, 11, 1975, 285-301, reprinted in my Science and Its History, 2008.
 Gerald Holton, “Einstein, Michelson, and the ‘Crucial’ Experiment”, Isis, 1969, 60.2: 133-197.
 See my “On Explaining the Trial of Galileo”, review of Arthur Koestler, The Sleepwalkers, Organon, 8, 1971, 138-66. Reprinted in my Science and Society: Studies in the Sociology of Science, 1981.
 This sounds like an exaggeration. So let me note that the Israeli Nobel laureate Dan Shechtman had to suffer ridicule before he won the coveted prize. Afterwards he described his peers as a pack of wolves.
 Carola Baumgardt, Johannes Kepler: Life and Letters, 1951.
 Erik Erikson, Young Man Luther, 1958, Chapter II, section 2.
 This assertion conflicts with the claim of Adolf Grünbaum that Freud had tested his own theory and that it was successfully refuted. See Jon Mills, “A response to Grünbaum’s refutation of psychoanalysis”, Psychoanalytic Psychology, 24, 2007, 539-544. Popper has claimed that clinging to a false theory―be it Freud’s or Newton’s or any other―renders it unscientific.
 See my review of Paul Feyerabend’s Against Method in Philosophia, 6, 1976, 165-177. Reprinted in my The Gentle Art of Philosophical Polemics: Selected Reviews and Comments, 1988.
 See my “In Search of Rationality”, in P. Levinson, ed., In Pursuit of Truth: Essays in Honor of Karl Popper’s 80th Birthday, 1982, 237-248.
 See my Ludwig Wittgenstein’s Philosophical Investigations: An Attempt at a Critical Rationalist Appraisal, 2019, 26, 32, 42.
 See Freud’s Civilization and its Discontents (1930) and his correspondence with Einstein (1931-2).
 See Einstein’s Foreword to Max Jammer’s Concepts of Space, 1954.
 Only Arthur Koestler has criticized Galileo’s polemics, and unjustly. See my “On Explaining the Trial of Galileo”, review of Arthur Koestler, The Sleepwalkers, Organon, 8, 1971, 138-66; reprinted in my Science and Society: Studies in the Sociology of Science, 1981.
 This assertion of mine requires a qualification. In these days of expanded Academe and publication pressure, you may find in the learned press discussions of the most undeserving items.
 Not so. In the eighteenth century, the great Jesuit philosopher Roger Joseph Boscovich argued that elasticity refutes Descartes’ mechanical theory. His criticism is unanswerable.
 Boyle could easily suppress the information that he had adhered to Brahe’s system; he did not.
 See my “Who Discovered Boyle’s law?” reprinted in my Science and Its History: A Reassessment of the Historiography of Science, 2008, and my The Very Idea of Modern Science: Francis Bacon and Robert Boyle, 2013.
 L. Pearce Williams, Michael Faraday: A Biography, ascribes to Faraday the endorsement of Boscovich’s idea. See my “Williams Dodges Agassi’s Criticism”, British Journal for the Philosophy of Science, 29, 1978, 248-52, republished in my The Gentle Art of Philosophical Polemics, 1988.
 For a conspicuous example see Rudolf Carnap, “Scientific Autobiography” and “Reply to Critics”, in Paul A. Schilpp, editor, The Philosophy of Rudolf Carnap, 1963.
 Carl Becker, The Heavenly City of the Eighteenth-Century Philosophers, 1932.
 Kant’s deviation from Bacon’s teaching is earlier, but it is implicit. The learned public ignored this. “I can see what the poor fellow was driving at”, the polymath Thomas Young said of him, “but I cannot forgive him his obscure style.” The English translations of the Critique of Pure Reason improve its language.
 The traditional canons put your assertions in the wrong unless you prove them. Consequently, new untested ideas are often publicized not in the scientific press but in conferences―especially in biology, more so in medicine―including the important idea that nucleic acids function as the genetic code.
 Bacon’s plagiarism from an unpublished work of William Gilbert is nasty, especially in view of his sneers at Gilbert.
 For example, the Preface to The Wealth of Nations, 1776.
 Negligence is the default option when assessing some actions of an expert but not of the rank-and-file.
 The theory of rational degrees of belief is the most popular in current philosophy of science, with no discussion of any of the ideas that led to it and with concentration merely on the (mis)use of the calculus of probability in it. See my “The Philosophy of Science Today”, in S. Shanker, ed., Routledge History of Philosophy, IX, Philosophy of Science, Logic and Mathematics in the 20th Century, 1996, 235-65.
 His incessant criticism of doctors led to his incarceration in a mental home, where he was beaten to death.
 See the report https://www.timeshighereducation.com/features/ten-steps-to-phd-failure in Times Higher Education, August 27, 2015.
 See my “Max Planck’s Remorse” (review of Brandon R. Brown, Planck: Driven by Vision, Broken by War), Philosophy of the Social Sciences, 47, 351–8.
 Preface to Bacon’s projected collected works.
 See my “The Role of the Philosopher Among the Scientists: Nuisance or Necessary?”, Social Epistemology, 4, 1989, 297-30 and 319.
 Heyerdahl’s diffusionist anthropology still meets with general hostility.
 Einstein’s 1933 Herbert Spencer Lecture, Oxford.
 This happens in Galileo’s 1632 Dialogue, Fourth Day: its scientific and artistic value is still ignored.
 Towards an Historiography of Science, 1963, republished in my Science and Its History, 2008.
 To the extent that extant diagnosis is improved, the improvement is almost totally due to improved techniques: https://www.ncbi.nlm.nih.gov/books/NBK338593/. The limits of progress in diagnosis are due to the limited space for improvement. See Nathaniel Laor and JA, Diagnosis: Philosophical and Medical Perspectives, 1990, Ch. 5.