You are currently browsing the monthly archive for June 2014.

Philip asked:

Is there a possibility of Metaphysics?

Answer by Geoffrey Klempner

Is metaphysics possible? In this day and age?

One answer would be of course metaphysics is possible because philosophers are doing metaphysics. In any English speaking university you will find courses that include discussion of the so-called ‘problems of metaphysics’.

Just to give you a taste, here is a selection of essay questions for the University of London BA Philosophy Metaphysics paper taken by students on the International Programme:

* Times can be thought of as past, present and future or as earlier and later. Is one of these ways of thinking about time more fundamental than the other?

* What do all red things have in common?

* Can arguments be given to establish that your pen is not a bundle of ideas?

* Can a fully objective view of human beings account for the subjective qualities of mental states?

* ‘A cause has its effects in virtue of its properties. So causation cannot be a relation simply between particulars.’ Discuss.

* ‘If I were to divide into two people tomorrow, neither of the resulting people would be me. But this would not be as bad as death.’ Is this true? If so, why? If not, why not?

* If ‘free choice’ is to be better than something random, must determinism be true?

* Is an army truly a substance?

* Can we intelligibly claim that Sherlock Holmes does not exist?

* Can two objects be in the same place at the same time? Justify your answer.

* Is an object identical with the parts that compose it?

* ‘A stone is a particular, but a stone’s falling is not.’ Discuss.

So we have here the nature and reality of time, the nature of universals, events and substances, identity and spatio-temporal continuity, personal identity, causation and free will, the nature of existence, etc.

All of these topics can be handled by the methods of analytic philosophy – the discipline I was trained in. But I have come to be dissatisfied with this way of approaching metaphysics. Surely the possibility of answering these kinds of question is not put in doubt when one asks, ‘Is metaphysics possible?’ And yet I think it is a legitimate question.

There is another way of thinking about metaphysics. Here is a quote from the Inaugural Address to Hegel’s Lectures on the History of Philosophy which gives a sense of the kind of thing I am talking about:

“But in the first place, I can ask nothing of you but to bring with you, above all, a trust in science and a trust in yourselves. The love of truth, faith in the power of mind, is the first condition in Philosophy. Man, because he is Mind, should and must deem himself worthy of the highest; he cannot think too highly of the greatness and the power of his mind, and, with this belief, nothing will be so difficult and hard that it will not reveal itself to him. The Being of the universe, at first hidden and concealed, has no power which can offer resistance to the search for knowledge; it has to lay itself open before the seeker – to set before his eyes and give for his enjoyment, its riches and its depths.”

When Hegel talks about ‘science’ he doesn’t mean empirical science. He is talking about metaphysics. As metaphysicians we are after nothing less than the ‘Being of the universe’, the ultimate nature of things. Post Hegel, philosophers have come to doubt that such a thing is possible. And for good reason, you could say. What gives us puny beings the right to think that we can reason out the universe to its very ‘depths’?

I mentioned analytic philosophy, but there are other traditions which can also be seen as a reaction to Hegel’s bullish optimism about the powers of human cognition, such as pragmatism, phenomenology and existentialism, each of which embraces human finitude as setting limits to what can be known concerning the ultimate nature of things.

The problem with accepting limits is that we can’t know what our limits ultimately are, any more than Hegel can be sure of the ‘power of mind’ that he talks of. I prefer to keep my options open. I don’t know what is possible, or what is impossible. For that, you need to assume a theory, and I prefer not to make any assumptions. So I will continue my quest for metaphysics and see where it takes me.

 

Ryan asked:

What are the various ways in which one could go about trying to demarcate science from pseudoscience?

Answer by Massimo Pigliucci

The distinction between science and pseudoscience constitutes what in philosophy of science is known as the demarcation problem, a term coined by Karl Popper in the early part of the 20th century. Popper was actually interested in solving David Hume’s famous problem of induction. Induction is the general type of reasoning by which we make inferences about things we do not know about on the basis of things we know. For instance, we have seen the sun rise many times before, therefore we reasonably infer that it will do so again tomorrow.

But Hume showed that we unfortunately do not have any solid logical foundation for such inferences. The usual justification for induction is that “it has worked so far” (call it the pragmatic response), but this amounts to saying that we believe in induction on the basis of an inductive argument (it worked in the past, ergo it will work in the future), which amounts to deploying a type of circular argument – not exactly a kosher move in logic.

The problem is made more pressing by the fact that a great deal of scientific theorizing uses induction, which is why Popper was so worried. He figured that a possible solution was to move from an inductive to a deductive justification of scientific theorizing. Instead of thinking of science as making progress by inductive generalization (which doesn’t work because no matter how many times a given theory may have been confirmed thus far, it is always possible that new, contrary, data will emerge tomorrow), we should say that science makes progress by conclusively disconfirming theories that are, in fact, wrong. This is Popper’s famous criterion of falsification, which can be formulated as an instance of standard modus tollens in deductive logic: if theory T is true, then fact F should be observed; fact F is not observed; therefore theory T is false.
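The schema just described can be written out compactly in standard logical notation (a conventional textbook presentation of modus tollens, not Popper’s own symbolism):

```latex
\[
\begin{array}{l}
T \rightarrow F \quad \text{(if theory $T$ is true, fact $F$ should be observed)} \\
\neg F \qquad\;\; \text{(fact $F$ is not observed)} \\
\hline
\therefore\ \neg T \;\; \text{(therefore theory $T$ is false)}
\end{array}
\]
```

Note the asymmetry this exploits: observing F does not prove T (that would be the fallacy of affirming the consequent), but failing to observe F does deductively refute T, which is why Popper thought falsification, unlike confirmation, needs no appeal to induction.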

There are several reasons why Popper’s idea of falsification doesn’t actually solve Hume’s problem of induction, which we shall set aside for another time. Popper also thought that falsification could function as the demarcation criterion between science and pseudoscience: if a theory can, in principle, be falsified, then it is scientific; but if there is no way to ever reject it, regardless of what empirical evidence may become available, then it is pseudoscience. Popper thought that Einstein’s theory of relativity is a good example of the first, while Freudian psychoanalysis and Marxist theories of history exemplify the second.

The problem is that there are plenty of pseudoscientific notions that are eminently falsifiable (and have, in fact, been falsified), from astrology to homeopathy. Moreover, there is a history of scientific notions that were initially either unfalsifiable or appeared to be falsified, and yet led to advancements in science. For instance, the original version of the Copernican theory in astronomy (which posited the Sun, instead of the Earth, at the center of the solar system) didn’t do a good job at predicting the actual positions of the planets in the sky. And yet astronomers like Galileo and Kepler kept playing with it, until the latter figured out a relatively minor tweak that solved the problem: Copernicus had assumed that planetary orbits are circular, while they better approximate an ellipse. Once Kepler introduced the change the theory worked like a charm, so that a Popper-style abandonment after initial falsification would have been unwise.

Because of the problems with the idea of falsification, Larry Laudan published a famous paper in 1983 declaring the demarcation problem dead in the water. He suggested that there is no small set of necessary and jointly sufficient conditions that define “science” or “pseudoscience”; he also claimed that if there were such a set, the only way philosophers could test their classifications of epistemic activities would be to see whether they agreed with the judgment of scientists (in which case, why not just ask the latter in the first place?); and that the very term “pseudo”-science is problematic on the grounds that it pretty obviously implies a negative epistemic judgment, rather than a neutral analysis.

Laudan’s paper was very influential, and did in fact manage to slow down philosophers’ work on demarcation to a trickle. This changed recently because of a collection of essays on the subject published by the University of Chicago Press (and which I co-edited with Maarten Boudry, from the University of Ghent, in Belgium). The book begins with several (belated) replies to Laudan, and then proceeds to explore a number of philosophical, historical, and sociological issues surrounding demarcation.

With respect to Laudan’s first point, several authors have pointed out that just because it is not possible to define science or pseudoscience precisely, it doesn’t follow that the two concepts aren’t meaningful and useful. Wittgenstein famously introduced the idea that a number of complex ideas share a “family resemblance,” meaning that – just like in the case of the members of a biological family – one can see that different instantiations of a concept are related to each other by various threads, even though there is no single or small set of criteria that definitely rule individual instances in or out. Wittgenstein argued that even an apparently straightforward concept like that of “game” is actually difficult to pin down, because for any group of criteria one may propose to define it (“has rules,” “it is done for fun,” “there are winners and losers”) one can easily find either games that do not fit all the criteria, or non-games that fit a number of them. So, a better way to think of science and pseudoscience is as two peaks on a continuous landscape of epistemic activities, some of which are more (or less) scientific (or pseudoscientific) than others.

As far as philosophers having to agree with whatever judgment scientists come up with (Laudan’s second point), the heck with that! Some of the most interesting criticisms of science itself have come from philosophers (e.g., about claims made by evolutionary psychology, or in the name of string theory), so it is far more constructive and interesting to see scientists and philosophers engage in a continuous dialogue about these matters, without either having to simply defer to the other.

Finally, yes, of course the term “pseudo”-science carries a negative connotation (Laudan’s third point). That is on purpose: to put a warning label on an epistemically deficient activity that only apes the trappings of science without actually being a science. And philosophers have always been in the prescriptive, not descriptive, business, so it is okay for us to deploy critical terminology, as long as the deployment is warranted by a detailed analysis.

 

Answer by Craig Skinner

There was lively debate about this in the 20th century. The upshot was that there is no clearcut demarcation. It’s all gone quiet now.

Suggested criteria for science were:

* falsifiability (Popper).
* puzzle-solving (Kuhn).
* progressive research programme (Lakatos).

Falsifiability: Karl Popper was impressed by how Einstein’s theory of gravity offered itself up for possible falsification by predicting something unexpected and testable (light bending around the sun: observations during the 1919 solar eclipse found for Einstein and against Newton) whereas Freudian psychoanalysts or Marxists could explain away any observations without giving up their theories. He suggested falsifiability as the mark of science. Scientists liked it – they were portrayed as heroic figures willing to let their beloved hypothesis be slain by a single awkward fact. But real scientists are different: they hang on to their hypotheses like grim death, blaming auxiliary hypotheses for the apparent falsification (the experimenter missed a confounding factor which affected the outcome; the blood-sugar machine was faulty; etc.). Also, strictly, no hypothesis can be falsified in isolation: any observation or experiment necessarily tests several hypotheses at once, and we can always say one of the auxiliary ones is wrong, not the main one we are testing.

Puzzle-solving: Thomas Kuhn pointed out the drawbacks of falsifiability, and said that scientists rarely even tested their main (paradigmatic) hypotheses, but rather solved puzzles within the paradigm. The hallmark of science was systematic, progressive puzzle-solving.

Progressive research programme: Imre Lakatos said that scientists (as opposed to, say, astrologers) do research, testing their views against the empirical world, and expect to make progress, discovering new things, reaching better understanding, refining and amending hypotheses.

In a famous 1982 US court ruling that “creation science” was religion, not science (and so didn’t merit equal school classroom time with evolution), Judge Overton gave the criteria for science as:

* explanation by natural law.
* views testable against the empirical world.
* views held tentatively, not dogmatically.
* views falsifiable.

Creation science has changed its name to Intelligent Design, claims puzzle-solving and research activity, and battles on against “Darwinism”.

Ultimately demarcation depends on detailed understanding of how science works, but even here, scientists differ as to whether some views are science or not e.g. string theory, multiverse hypotheses.

Finally, if you pick up an alleged popular science book, suspect pseudoscience or a religious agenda if the blurb, review or text includes the following words, phrases or links:

* “scientific materialism”.
* “irreducibly complex”.
* “astonishingly complex molecular machines”.
* “academic freedom” (code for acceptance of creationism).
* “Darwinists/Darwinism” (real scientists usually say “biologists/evolution”).
* “blind, random, undirected” (referring to evolution).
* link between quantum physics and free will.
* link between Darwin and the Holocaust.

 

Jeff asked:

What is holism?

Answer by Massimo Pigliucci

The term holism refers to a variety of concepts, from the idea (in medicine) that the body should be treated as a whole, rather than by focusing on its individual parts, to the rather vague concept (in New Age mysticism) that all things in the universe are somehow connected. In philosophy and the natural sciences (particularly biology), though, holism is best contrasted with reductionism, so perhaps it would be better to start with a brief analysis of the latter.

Reduction is a technique in philosophy – and by extension in the natural sciences – that was formalized by Descartes. In the Discourse on the Method, he suggested that we need to approach any given problem by the method of “divid[ing] each of the difficulties under examination into as many parts as possible, and as might be necessary for its adequate solution,” or we “reduce involved and obscure propositions step by step to those that are simpler, and then starting with the intuitive apprehension of all those that are absolutely simple, attempt to ascend to the knowledge of all others by precisely similar steps.”

The Cartesian method – of which the above is a crucial part – was adopted by other natural philosophers, such as Descartes’ contemporary, Galileo, and became an intrinsic component of the successes of physics and other natural sciences. In philosophy, the approach eventually evolved into the method of analysis used in the appropriately termed “analytical philosophy.” It allows the translation of common language sentences into logically coherent and “cleaner” versions (if you are interested, this refers to Bertrand Russell’s theory of descriptions).

There are different kinds of reductionism, only some of which may be usefully contrasted with holism. For instance, in philosophy of science one can talk about theory reduction in cases in which a higher level scientific theory (say, Mendelian genetics) can be “reduced” (i.e., reformulated) in terms of a lower-level theory (say, molecular genetics). There is no holistic counter to this type of reduction: either a theory is successfully reduced to another, or it isn’t.

More interestingly, we can distinguish between ontological and epistemic reduction, both of which can be contrasted with their holistic counterparts. Let’s consider an example to fix our ideas: imagine you meet a physicist at a cocktail party and he tells you that everything that happens in the universe reduces, at bottom, to physics. What could he mean by that? One way to interpret the claim is that the physicist is simply saying that, ultimately, everything is made of subatomic particles (or strings, or whatever the latest physics comes up with to identify the basic constituents of reality). This is certainly a reductive explanation, and it is true, as far as it goes. It also represents a case of ontological reductionism, because it says that the only things that exist are subatomic particles (or strings, or whatever). (Remember that ontology is the branch of metaphysics concerned with claims of existence.)

But now consider another possible meaning of the utterance made by our cocktail party physicist: that physics is the only relevant science because everything in the universe can be explained in physical terms. This claim is epistemological (epistemology is the branch of philosophy that deals with how we know things), and much more debatable. Even if it is true that the ontology of everything (human beings, economies, galaxies) is reducible to fundamental physics, it doesn’t follow that fundamental physics can do away with biology, psychology, economics or cosmology as independent disciplines with their own proper explanatory levels. Unless your physicist friend is ready to provide you with, say, a quantum mechanical explanation of why you two are having that particular conversation, his epistemic reductive claim fails abysmally.

Now, a holist could take issue with both the ontological and the epistemic claim, but she would probably be more successful with the latter than with the former. Given the current status of scientific knowledge, it is hard to deny that everything is, in fact, made of subatomic particles (or strings, or whatever). At the same time, it is rather easy to make the case that there are many levels of complexity in the world (atoms, molecules, living organisms, ecosystems, celestial bodies), and that different types of explanations work best for distinct levels of reality: quantum mechanics isn’t going to replace economic theory, likely ever.

However, even in the ontological sense, reductionism isn’t necessarily a slam dunk. A number of philosophers have suggested that certain kinds of non material “objects,” such as mathematical structures (numbers, theorems, etc.) exist in a mind-independent, non physical fashion, and are therefore not reducible to subatomic particles (or strings, or whatever). This notion is known as mathematical Platonism, but we’ll set it aside for another time.

There is one more sense in which ontological reductionism may turn out to be problematic – and hence a holistic, or system-level, approach to be useful – though it is still somewhat speculative. I am talking about the idea of emergent properties. These are properties of complex matter that are not reducible to the properties of its simpler constituents. Take, for instance, the fact that a large enough number of molecules of water acquires the property of being wet (at certain temperatures and pressures). “Wetness” is not defined at the level of an individual molecule, or even of a small number of molecules. It only emerges when enough molecules interact together, which means that it can be studied only holistically, so to speak, not reductionistically.

Generally speaking, it is more constructive to think of the holism-reductionism contrast as a complementarity rather than an antagonism: on the one hand, we can often make progress in understanding complex systems by breaking them down into smaller chunks and see how they can be put together again; on the other hand, we still need to take seriously the idea that some things only function, or even exist, at certain levels of complexity and not below them.

 

kate asked:

What is relevant data that supports the inferences about “Do we use 10 per cent of our brain?”

Answer by Massimo Pigliucci

The claim is simply false, amounting to no more than an urban legend. In reality, it is pretty clear that we use the entirety of our brain, though not necessarily all at the same time. There are several empirical lines of evidence that refute the 10% idea. To begin with, it would be very strange if natural selection had created an organ that consumes a whopping 20% of our metabolism (while accounting for only 2% of our body mass), and 90% of it went unused. Such waste should have massive consequences on an individual’s viability and reproductive competitiveness, and would have therefore been quickly eliminated during the early stages of human evolution.

If you don’t find a priori arguments too convincing, then consider that when brain cells become inactive they degenerate, and even superficial scans of living brains (or autopsies of dead ones) clearly show that not to be the case. Speaking of brain scans, nowadays we can do fMRI and similar investigations while subjects are actually using their brains on a variety of tasks, and such techniques clearly show large portions of the human brain to be active at any given time. Moreover, scientists have studied a number of people with extensive brain injuries, and invariably these people suffer from reduced cognitive functionality. If 90% of their brains were normally untapped one would expect to see no impairment following even extensive brain damage. Barry Gordon, a neurologist at the Johns Hopkins School of Medicine in Baltimore, interviewed by Scientific American, said that the very idea that we use only 10% of our brains is “so wrong it is almost laughable.”

Where, then, does this idea come from, and why does it persist? Different scenarios have been proposed for the origin of the 10% myth, and they may all be correct, since it is possible that the idea has popped up independently a number of times. One of the best documented stories traces it back to philosopher and pioneer psychologist William James, who carried out research on the unrealized potential of human IQ back in the 1890s. James concluded – on the basis of his study of a child prodigy – that ordinarily we may achieve only a fraction of our mental potential, which may be a defensible claim as far as it goes. However, in 1936 the American writer Lowell Thomas referred to this idea in a foreword he wrote for How to Win Friends and Influence People, by Dale Carnegie, falsely stating that “Professor William James of Harvard used to say that the average man develops only ten per cent of his latent mental ability.”

Another plausible root for the origin – or perhaps persistence – of the myth is the well known fact (since the early part of the 20th century) that only about 10% of brain cells are actually neurons, the remainder being made of glial cells, which play crucial supportive and protective roles for the neurons themselves.

Ezequiel Morsella in Psychology Today describes yet another possible origin story for the 10% myth, this one tracing it to more recent research conducted by American neurosurgeon Wilder Penfield. Penfield developed techniques to treat severe forms of epilepsy and was interested in exploring the possible damage that a surgical mistake may cause in the patient. He therefore embarked on research that involved the localized stimulation of areas of the brain with electrodes, while the subject was awake and could report the effects. Penfield and his colleagues discovered that stimulating a small number of areas of the brain (accounting for about 10% of the total) was sufficient to generate detectable effects. Needless to say, this is not at all the same as saying that we only use that percentage of our grey matter.

The 10% myth is popular both among New Age believers and paranormalists, as well as in the entertainment industry. The suggestion has been made that paranormal phenomena, like telepathy, clairvoyance and telekinesis (for which there is no convincing evidence) become possible when one somehow “taps” into the normally (allegedly) non-functioning 90% of one’s brain (see this article by Ben Radford in Skeptical Inquirer). New Age believers explain the (again, alleged) existence of psychic powers in the same fashion.

A number of movies and television shows have been loosely based on the premise that the 10% myth is true, including the pilot episode of Heroes and the movies Defending Your Life (1991), The Lawnmower Man (1992), Limitless (2011), and Lucy (2014). The 10% claim is not the only widely accepted but incorrect notion about the human brain; this article by Laura Helmuth in Smithsonian magazine lists another nine.

 

David Connery asked:

If the universe was created in a Big Bang, before light, matter, and time; if there was no time how can there be a before? If there is no matter how can any reactions, chemical or physical or other, occur? It is impossible to make something with nothing. Is our universe just one of many, in a cycle created out of the death of another? Do you think the Big Bang was part of another cosmic event, i.e. the creation or death of other unknown universe(s)?

Answer by Massimo Pigliucci

The question of what, if anything, was before the Big Bang – or even, as you point out, whether it makes any sense to talk about “before” and “anything” in this case – is one that has vexed both philosophers and physicists for a long time. Recently, for instance, cosmologist Lawrence Krauss has written an entire book dedicated to the topic, aptly entitled A Universe from Nothing: Why There Is Something Rather than Nothing. Krauss thinks that physics is very close to answering that question, as the available empirical and theoretical evidence points to the idea that the universe came out of an essentially featureless quantum vacuum.

Some philosophers, however, think this is too quick. For instance, David Albert, in a critical review of Krauss’ book that appeared in The New York Times, correctly points out that even a quantum vacuum is not “nothing,” and that it does have “features,” at the very least because it behaves according to the laws of quantum mechanics, which is what Krauss helps himself to when he says that physics can explain how the universe came about. Although the specific diatribe between Krauss and Albert is not the main point here, it has to be noted that subsequently Krauss backed away from his initial position a bit, stating (somewhat disingenuously) in an interview with Atlantic magazine that “Well, if that hook [talking about “a universe from nothing”] gets you into the book that’s great. But in all seriousness, I never make that claim.” Regardless, I think it is fair to say that physics is getting closer and closer to understanding how the universe came about, but we are not there yet, and there is a good chance we might never get a complete answer.

Why not? Well, to begin with because there simply may not be enough of what philosophers call “historical traces” left for scientists to work with. Science is an empirical discipline, and whatever theory scientists come up with has to square with the empirical evidence. But we may not be able to recover any evidence at all about whatever was “there” “before” the universe began. It all got wiped out by the Big Bang itself.

Notice that I put “there” and “before” in scare quotes, because you are right that it is at the least a bit problematic on the one hand to say that time and space began with the Big Bang, and on the other hand to ask what was there before that cosmic inception. Nonetheless, I do not think this is an insuperable problem, as physicists have different options available. They could simply say that the meanings of concepts like time and space change (in specifiable ways) before and after the Big Bang, but that we may use the same words anyway for convenience purposes. Or they could say that only local (i.e., of this specific universe) time and space began with the Big Bang, but that a broader conception of those quantities applied even before to the whole multiverse.

Which brings me to your other questions: are there other universes, and is it possible that ours originated from one of those? The idea that our universe is really part of a much bigger entity, referred to as the multiverse, has been discussed a lot in physics and philosophy of late. (And it is not to be confused with the so-called many worlds interpretation of quantum mechanics, about which you can find a lot more here.)

Physicist Lee Smolin, for instance, has proposed a so-called theory of cosmic natural selection, a summary (and criticism) of which you can find in this article of mine at Rationally Speaking. The basic idea is that collapsing black holes spawn baby universes, and these newly formed universes are characterized by combinations of physical parameters similar but not identical to the one of the “mother” universe. The “selection” bit of the theory comes in because some universes are going to be more stable than others, and hence will be more likely to “survive” longer, and in turn to spawn more baby universes.

Notice the language that Smolin imports straight from the Darwinian (biological) theory of natural selection. But there are important disanalogies between the two, which is why Smolin’s theory is far from being accepted. In biology, selection is triggered by competition among organisms for a common set of limited resources; it is not clear in what sense, if any, universes compete for resources with each other. Also, the theory of biological evolution requires a mechanism of inheritance (genetic transmission), which makes possible the creation of new generations inheriting characteristics of the old ones. No similar mechanism is known in black hole physics.

Finally, there is the issue that – so far at least – not only has nobody observed any universe other than our own within the multiverse, but it isn’t even clear whether such an observation, even an indirect one, is in principle possible. If it isn’t, then the idea of a multiverse is bound to remain mathematical speculation, i.e., basically metaphysics, not science.

 

Wil asked:

Please tell me, are objectivity, rationality and universality necessary requirements for all philosophical truths? Are they even possible?

Answer by Massimo Pigliucci

Let’s start from the very idea of philosophical truths. I’m not sure philosophy is in the business of discovering truths in the first place. Certainly not in the sense of discovering things about the natural world or human behavior – we’ve got the natural and social sciences for those tasks already, and they are doing a pretty good job of it.

Philosophy may rather be thought to be in the business of discovering, say, metaphysical or ethical truths, and certainly a number of philosophers would say that this is indeed the case. But I’m skeptical. I don’t think there are objective moral truths “out there,” and I think that morality is a human invention. This doesn’t mean it is arbitrary (after all, its purpose is to regulate human social interactions in a way that is fair and conducive to individual flourishing), and it doesn’t mean that moral philosophy can be reduced to social science (facts about human nature are relevant to ethical reasoning, but they under-determine it, meaning that there are multiple solutions to a moral dilemma given certain facts on the ground).

As for metaphysics, its usefulness, I think, lies in clarifying concepts such as personal identity and time, and also in producing a coherent picture of how the sciences (fundamental physics, biology, economics, and so on) describe the world. But simply thinking about the way the universe is made isn’t going to reveal any new truths about it – again, that seems to be the job of science at this point.

So, all in all I see the tasks of philosophy as clarifying concepts, exploring their logical consequences, building and examining arguments about ethics and metaphysics, and even analyzing the warrant of scientific claims (that would be philosophy of science). But I don’t think it is useful to construe any of the above as seeking “truths.”

We can now go back to your actual questions, beginning with universality. I doubt it is necessary for philosophical inquiry. While certain areas of philosophy do concern themselves with universal statements (e.g., modus ponens in logic is valid regardless of the specific arguments in which it is deployed), many others don’t. Take political philosophy, for instance. When John Rawls wrote his book on how to construct a just society, he understood that this is a human concern, and that our concept of justice may or may not apply to other sentient species, or in other parts of the galaxy. Indeed, I would argue that unless we are talking about a specific type of social biological organism endowed with self-consciousness and certain goals and desires, the very concept of justice makes no sense. (Can you be just or unjust toward a rock? An amoeba?) So clearly justice, a fundamentally philosophical concept, is not universal.

I’m going to address objectivity next. Mounting research in social science, as well as scholarship in philosophy of science, seems to agree that objectivity is only an ideal goal, which can be approximated by groups, but which is not a characteristic of individuals. Consider, for instance, science itself, the paragon of an objective human activity. It is pretty obvious from the psychology and sociology of science that individual scientists are actually no more objective than most people. They care about their theories, which means that they are (at least unconsciously) partial to them; and they adopt (and sometimes vehemently defend) specific points of view about all sorts of things, just like the rest of us. However, the scientific enterprise as a whole approaches a high degree of objectivity because science is a social activity that puts a high premium on truth and verifiability, and where young scientists have huge career incentives in showing that a previously held notion is in fact false. For a more detailed treatment of this approach to objectivity in science, see Ronald Giere’s book, Scientific Perspectivism, or Helen Longino’s The Social Dimensions of Scientific Knowledge.

Finally, rationality. Yes, it seems to me that rationality is required for philosophical discourse. After all, one can define philosophy as a type of rational inquiry, similar to science, mathematics and logic itself. But of course there are different conceptions of rationality, and there is certainly no guarantee that individual philosophers are as rational as one might hope.

With respect to the first issue, for instance, recent research in logic has suggested that there are ways to rigorously account for apparent contradictions and logical paradoxes, ways that are not compatible with standard approaches to logic and rationality. For instance, my CUNY colleague Graham Priest has written about how so-called paraconsistent logic may help us make sense of seemingly contradictory statements from the Buddhist tradition. I’m not sure that I agree with Graham about Buddhism, or in fact even that I buy wholesale into the idea of paraconsistent logic, but that goes to show you that there is (reasonable) disagreement about what counts as rationality – to a point (there is quite a lot that logicians agree on).
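For readers curious about what a paraconsistent logic actually looks like, here is a minimal sketch of the three-valued semantics standardly used for Priest’s Logic of Paradox (LP): sentences can be true, false, or both, and an argument is valid when every valuation that designates all the premises also designates the conclusion. The helper names and the brute-force validity checker are my own illustration, not anything from Priest’s texts.

```python
from itertools import product

# LP truth values: T (true), B (both true and false), F (false).
T, B, F = 1.0, 0.5, 0.0
DESIGNATED = {T, B}  # values that count as "at least true"

def neg(a):     return 1.0 - a       # negation swaps T and F, fixes B
def conj(a, b): return min(a, b)     # conjunction: worst value wins
def disj(a, b): return max(a, b)     # disjunction: best value wins

def lp_valid(premises, conclusion, n_atoms):
    """LP-valid iff every valuation designating all premises
    also designates the conclusion."""
    for v in product([T, B, F], repeat=n_atoms):
        if all(p(v) in DESIGNATED for p in premises):
            if conclusion(v) not in DESIGNATED:
                return False
    return True

# Explosion (A, not-A, therefore B) FAILS in LP: a contradiction,
# valued B, does not entail an arbitrary falsehood.
explosion = lp_valid(
    premises=[lambda v: v[0], lambda v: neg(v[0])],
    conclusion=lambda v: v[1],
    n_atoms=2,
)

# Addition (A, therefore A-or-B) still holds.
addition = lp_valid(
    premises=[lambda v: v[0]],
    conclusion=lambda v: disj(v[0], v[1]),
    n_atoms=2,
)

print(explosion, addition)  # False True
```

The point of the sketch is the first result: in classical logic a contradiction entails everything, whereas in LP the counter-valuation (A both true and false, B simply false) blocks that inference, which is precisely what makes the logic “paraconsistent.”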

Concerning the second issue, the rationality of individual philosophers, I think training in philosophy certainly does refine one’s skills in logic and rational argument – indeed, that may be the chief reason to major in philosophy, or at least to take philosophy courses. But, ultimately, what brings about rationality in philosophical discourse is similar to what allows for quasi-objectivity in scientific discourse: philosophy is a social enterprise, and you can bet that as soon as a philosopher says something that is not quite rationally defensible, a number of other philosophers will jump on it with gusto and tear the poor thing (the argument, not the philosopher) apart. And that’s the fun of actually doing philosophy.

 
