You are currently browsing the monthly archive for January 2016.
Hello, this question is about things that have happened in the past, experiences we have had and how we think about them.
Does it make any sense to say this: regardless of whether or not humans have free will, and regardless of anything else, the universe unfolded the way it did until this second and could not have happened any other way; therefore it is pointless and unhelpful to regret the bad experiences we have had.
Someone makes a decision that turns out to be a bad one, and you can say, well, if you had made a better decision of course the universe would have unfolded differently and you would have had a better experience, a better life. But is that true? Is it possible to argue that whatever has happened, good or bad, had to happen because it actually did happen, and that’s all the proof you need? It doesn’t matter whether the Determinists or the Libertarians are right or wrong, and it has nothing to do with a belief in fate: something happened in the past, therefore it had to happen. The world wars happened; now we can look back with hindsight and see how they could have been avoided, but when you take into account all of the many and varied factors that contributed to them at the time, isn’t it possible to say that they had to happen? How couldn’t they?
Answer by Jürgen Lawrenz
Let me recommend to you first, that you stop your indiscriminate jumbling of ‘the universe’ and ‘someone’. These things are not compatible. The universe is a big place and ‘someone’ a pretty puny speck of dust in the middle of nowhere. So let’s leave the universe alone, as none of us specks of dust genuinely know what is the case with it, and stick to what we can reasonably know in the context of your question.
Then the first item to be considered is a differentiation between things or events that are necessary, and those that are contingent. Now as you are speaking of humans, you are speaking of contingent things. Every act of every person is contingent, as ultimately only death removes choice (I’m not speaking of ethical or survival choices here). Therefore your claim that ‘it could not have happened any other way’ is false. If you had not written your question, I would not have written this reply. If you wish to claim that you were forced to write it, I don’t believe you. You weren’t dead when you wrote. QED.
Although this is ‘the short answer’, I don’t feel the necessity for a longer one, since only death terminates the process of learning. I believe you are confusing the unwillingness of humans to learn ‘better decisions’ with something else — maybe with our instinctual estate which frequently dominates our decisions and clouds our judgement. So the fact that something happened in the past is no argument for or against either determinism or libertarianism. It is simply the contingent outcome of contingent occurrences; and each of these was a choice or a clutch of choices at the time.
For example, once you are in possession of certain facts, you would not fall for the easy trap you laid for yourself concerning the World Wars. Thus on the day before the outbreak of World War I, the Austrian High Command received three conflicting instructions from three government instrumentalities, two of which supported them in their intention of going to war, while one refused the support. The Austrians chose (repeat: chose) the supportive advice, whereupon they left the German government with egg on its collective face. And now, in this contingency, the Germans felt they could not back out of supporting the Austrians, who were already mobilising, without looking like idiots. So you can see in this example plenty of choices on both sides that were certainly not forced on anyone without a door standing open to retreat!
But I also want to give an example of choices which overturned the legal murder of hundreds of women in our civilisation, not all that long ago. These women were called ‘witches’ and supposed to be consorting with the devil. But when Descartes’ mechanistic philosophy gained its dominant standing in the latter half of the 17th century, it was gradually recognised that the commonly believed practices of witches (i.e. riding on a broomstick through the air) were physically impossible. In the outcome, by the choice of relevant authorities, witch burning was phased out quickly, after it had reached its unsalutary peak in Descartes’ own lifetime.
I hope you can see now that hammering away at the contingent facts of history as if they were pre-determined is not a conclusion but a fallacy.
I’m interested in the implications of Arthur C Clarke’s third law ‘any sufficiently advanced technology is indistinguishable from magic’. Do you know of any philosophers who have addressed this? Can you recommend some reading? Thanks for your time.
Answer by Craig Skinner
Clarke’s laws appeared in his writings of the 1960s and 1970s, although the third law appears in others’ writing at least as far back as 1866 (Rider Haggard).
Few people in our present culture explain things by magic. Even small children know that texting, drones, Google and so forth are wonders of science not magic. But no doubt if these were shown to a past (or present) culture prone to magical or supernatural explanations, the latter would be invoked to account for the marvels.
The key live philosophical discussion on advanced technology is the possible explosive increase in artificial intelligence (AI) and alleged risk of humanity being eclipsed. The story goes as follows. AI is steadily improving. Soon (years or decades) human level intelligence will be reached. This will be a no-way-back ‘singularity’ because these AIs will create even smarter robots, which, in turn, will produce yet smarter ones, quickly leaving homo sapiens behind, so that the activities of these superintelligences will be as opaque to us as quantum mechanics is to my dog, and they may have no place for us in their worldview, and eliminate us.
Discussion centres on whether this is possible, whether we could still be in control, whether we need to be proactive, whether morality could be built in to AI, whether the singularity has already happened and we are all simulations in an advanced AI’s virtual world.
Now to recommended reading. Much is not easy going, but it’s all great fun.
Recent debate was kick-started by David Chalmers’ 2010 Journal of Consciousness Studies (JCS) article ‘The Singularity, a Philosophical Analysis’ (I think it’s available online). This was followed by two 2012 JCS issues (Vol 19, No.1-2 and No.7-8) with numerous responses to Chalmers including those of Dan Dennett, Barry Dainton, Susan Blackmore, Susan Greenfield, Frank Tipler, Igor Aleksander, Ray Kurzweil, and Nick Bostrom. The latter has followed up with a terrific book, Superintelligence: Paths, Dangers, Strategies (OUP 2014).
What is the point of the Sorites paradox? I’m a regular listener of the Rationally Speaking podcast, and couldn’t help but notice that Julia Galef concludes ‘that philosophers think there should be a precise definition or a right answer’. I’m of the opinion that the point of the thought experiment is to help us realize the ‘messiness’ of language. Which of us is closer to the truth?
Answer by Jürgen Lawrenz
I wish I could help you with your first sentence. I can’t figure it out either, nor can I think of the slightest use of arguing such questions. One way of resolving the issue would be to look at the languages we speak and just acknowledge that they are full of paradoxical clichés for the simple reason that they reflect the experience of their speakers over many generations; and you can be sure that none of them (at the primitive forefather stage) was ever concerned with asking ‘when is a heap not a heap anymore?’, or ‘when does blue paint shade into green as I keep adding drops of green paint?’ So you are certainly on track with your surmise of ‘messiness’.
I think this is the kind of imprecision that stings logicians like a nail in the toe. They can’t cope with the messiness of language use. The quote you offer from Julia Galef is plainly an opinion from that stable. Efforts to improve the logical structure of language have been going on for almost 200 years and all failed. I suspect the reason has to do with the fact that we learn our language as babies. Terence Deacon in his book The Symbolic Species offers the theory (based on years of studying in situ the emergence of creole speech at the intersection of two or more languages) that such new languages are created by children. So it seems that philosophers are not the right people to ‘fix up’ the spontaneous language behaviour of humans!
It seems to me that another issue plays into this problem. Our intuitive (spontaneous) apperception of a plurality is restricted by what our eyes can take in at one glance (and to some extent what our fingers can feel and our ears can separate among sounds). When you look at the stars, you’re looking at a heap. Stargazers over the millennia got around to ordering the stars into small heaps, like the Pleiades (seven stars). They are not really a group; but they form a heap that can be grasped instantly. This is because our eyes spontaneously group pluralities into small geometrical patterns up to 12. And now, when you look at our ancient number systems based on 12, you can discern in it a clear build up of clean geometrical patterns based on 2 and 3. With 13 and higher, intuition begins to wobble uncertainly and we start sensing heaps!
I guess this leaves the opposite issue unresolved, which perhaps should attract equal attention from logicians. When does a heap become a hump, a mound, a dune, a hill, a mountain? Can we have some quantifying precision to these expressions as well, please, while you’re at it?
What is the main argument in Bertrand Russell’s ‘Appearance and Reality’? Explain why the argument is good (valid/strong, sound/cogent) or bad (invalid/weak, unsound/uncogent).
Answer by Danny Krämer
Philosophy is often a rebellion of thought. You find traditions that make no sense to you and you meet people whose arguments you just think are false. Often you even rebel against your own former beliefs. When Russell came to Cambridge, British philosophy was idealist philosophy inspired by Hegel. One of the well-known British followers of Hegel was F.H. Bradley. For these idealists it was the connection between beliefs, and not the connection between thoughts and the world, that was important for truth. So they advocated a kind of coherentism. Everything that can be thought is a thought of someone, and therefore everything that exists is either a thought of a universal mind, as in idealism, or it consists of many monadic minds, as Leibniz suggests. Russell, together with G.E. Moore, was one of the leading figures in the rebellion against British idealism.
The interesting thing in the first chapter of Russell’s Problems of Philosophy, called ‘Appearance and Reality’, is his Cartesian starting point. He asks: can we get a foundation for our knowledge so certain that we can build our whole system of knowledge upon it? That is clearly a question that Bradley would reject. His form of idealist coherentism had no place for a foundation of any knowledge. Russell makes an empiricist point: in everyday life we have no problem in making true statements. We just observe states of affairs and then we know, for example, that the tomato is red. But he cautions us about any crude and naive empiricism. We often know that things are not at all how they appear.
Take Russell’s argument from the relativity of perspectives:
1. Five persons stand around a table.
2. Each of these persons sees a slightly different colour shade and form of the table.
3. With some manipulation of the light, all five could even see altogether different colours.
4. Therefore there is not THE colour of the table.
What we call the colour of the table is just the colour of the table under normal circumstances to a normal observer. So it seems that the table itself has no intrinsic colour. The colour of the table is a relational property between the table, the environment and the observer. Russell suggests an ontology that can preserve the difference between appearance and reality. There are, he says, not only objects like tables but also sense-data. These are the things that are immediately known to us by sensation, like colours, sounds, smells etc. So the colour of the table is not a property of the table but a sense-datum.
Is the argument sound? I think it is as far as it goes, but I don’t like Russell’s ontology of sense-data. The problem with sense-data as mental objects is that scepticism reigns supreme. If material objects just cause some mental objects which we perceive directly, then these mental objects could be caused by a table or by a supercomputer feeding a brain in a vat. If we understand colours as relational and very complex properties between the observer, the object and the environment, and not as sense-data, i.e. mental objects between us and the object, we get an externalist and functionalist understanding of mental properties.
But Russell opened up the discussion for a new style of philosophy that was not idealistic. Pace the British idealists and some followers of Leibniz, there are material objects in the full sense of material. They are independent of any thoughts. Russell thus clears the path for questions about meaning and reference, semantics, representation and so on, which are still with us.
A possible argument that a computer running an algorithm cannot be conscious?
Imagine, to the contrary, that a computer could experience a moment of subjective awareness by running some program code. Let us put that code inside an infinite loop and set the program running with a counter that increments with every iteration of the loop. In principle the code runs the counter an infinite number of times and the computer experiences an infinite number of identical moments of consciousness.
Now imagine the computer ‘waking up’ in one of these moments of consciousness. It asks itself the question: ‘what is the prior probability that I should find myself in a particular conscious moment with some definite counter number n?’ Since it knows that it will run forever, the prior probability of finding itself in this moment n is 1/infinity, which is zero. But this reasoning holds for all n, so the probability of finding itself in any moment is zero. This contradicts our assumption that the computer does find itself conscious.
Perhaps a computer running a program cannot produce conscious awareness.
Answer by Craig Skinner
Consciousness occurs naturally in humans and some other animals, so it seems to me it should be possible to produce it in a sufficiently sophisticated artefact. Maybe an embodied, enactive computer, embedded in and learning from its environment (rather like you or me), as opposed to a box on the floor running software. I suspect it’s only a matter of time.
To turn now to your argument, a reductio ad absurdum whereby you prove something by showing that assuming the contrary leads to a contradiction.
First, I find it confusing, and will say why. Secondly, even if we allow the confusing bit to go through, the argument about probability and infinity is flawed.
As regards the confusion: you start by assuming the computer experiences subjective awareness by running a program. Well, then it’s conscious. Why the need for its ‘waking up’?
The flaw: you say the probability of any particular moment being selected is 1/infinity (zero). But the rules of probability only apply in this way to finite sets. Here’s why. Selection of any particular member from a set of s members has probability 1/s only if each member has the same probability of selection.
For example, if a number is to be randomly chosen from 1-100, the chance of its being in the range 1-50 must be the same as the chance of its being in the range 51-100. But this can’t happen with the (countably) infinite set of natural numbers. Because any number you specify, however big, is always in the ‘lower half’ of the range, with an infinity of numbers larger than it. It’s impossible to randomly choose a number from this infinite set. Of course you can still choose a number non-randomly, and we often do.
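A few lines of arithmetic (a purely illustrative Python sketch, not part of the original argument) make the contrast concrete: uniform selection behaves as required on a finite set, but any attempt at a uniform distribution over the naturals breaks the rule that total probability cannot exceed one.

```python
from fractions import Fraction

# Uniform choice from the finite set {1, ..., 100}: each number gets
# probability 1/100, so the two halves are equally likely.
p = Fraction(1, 100)
prob_lower = 50 * p   # P(number is in 1..50)
prob_upper = 50 * p   # P(number is in 51..100)
assert prob_lower == prob_upper == Fraction(1, 2)

# Over the naturals no uniform distribution exists: give every number
# the same probability p > 0, however tiny, and the running total
# inevitably passes 1, violating the axiom that probabilities sum to 1.
p = Fraction(1, 1000)
total, members_counted = Fraction(0), 0
while total <= 1:
    total += p
    members_counted += 1
print(members_counted, total > 1)  # 1001 True
```

The exact value 1/1000 is arbitrary; the same overflow happens for any positive constant, which is the point of the argument above.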
In misapplying probability to infinity, you are in distinguished company. The famous philosopher of science, Karl Popper, did the same. He didn’t like Bayesian analysis, and deplored its growing popularity in science. He sought to undermine its application to theory choice in light of evidence. His argument was:
1. There is an infinity of theories compatible with any body of evidence (this is strictly true, Duhem’s thesis).
2. Prior to any evidence, we shouldn’t consider any theory more likely than another (he called this the Principle of Indifference).
3. Hence every theory must get equal prior probability.
4. For an infinity of theories, this probability can only be zero, since any finite probability, however small, would make the total probability infinite, and total probability can’t exceed one.
5. Hence the prior probability of any theory is zero.
6. Hence the posterior probability (after new evidence) of any theory remains zero, since zero multiplied by any number remains zero.
7. Hence Bayesian analysis never gets started and is useless.
The argument is valid. But it is unsound because 2. is false and, just as in your argument, leads to the false conclusion in 4. We needn’t, and shouldn’t, consider every theory equally likely. Some are more plausible and deserve higher prior probabilities than others. It would be absurd, for example, in assessing theories of why things fall to the ground, to give the same probability to the theory of free fall in curved spacetime (Einstein’s theory) as to alternatives such as the theory that four elves pull a thing down by invisible string, or five elves by invisible string, or four elves by invisible rubber bands, and so on. And of course, once we abandon the requirement that every choice gets equal probability, we can easily assign a finite probability to every one of an infinite collection without total probability exceeding one. A simple assignment is probability 1/2 to theory T1, 1/4 to T2, 1/8 to T3, 1/16 to T4, and so on. So probability survived, Bayesianism flourished, and Popper’s view is mostly forgotten.
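The halving assignment at the end can be checked directly (a short Python sketch, added only to illustrate the arithmetic): every theory in a countably infinite list gets a strictly positive prior, yet the total stays below one.

```python
from fractions import Fraction

# Assign theory T_n the prior 1/2**n. Every theory gets a strictly
# positive probability, yet the partial sums are 1/2, 3/4, 7/8, ...
# and never exceed 1, no matter how many theories we include.
def prior(n):
    return Fraction(1, 2 ** n)

partial = sum(prior(n) for n in range(1, 51))
assert all(prior(n) > 0 for n in range(1, 51))
assert partial < 1                        # below 1 for any finite cutoff
assert 1 - partial == Fraction(1, 2**50)  # shortfall halves with each theory
print(float(partial))  # just under 1
```

Any other convergent series of positive terms summing to at most one would do equally well; the geometric series is simply the easiest to state.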
In conclusion, I don’t know whether conscious machines are possible, though I suspect they are; but I do know that infinity should be handled with kid gloves.
Is hard determinism consistent with knowledge; that is, is it consistent with justified true belief? It’s the ‘justified’ condition that strikes me as problematic. If hard determinism is true, then wouldn’t my thoughts (including my belief in the truth of hard determinism) be the predetermined outcome of physical events in my brain? It may well be that natural selection favors my having certain (predetermined) thoughts in various circumstances, but the survival value of those thoughts is not necessarily the same as their truth value.
As a boy, when I first came across the stock syllogism, ‘All human beings are mortal, etc.’ it took a second or two for me to grasp its logic. My mental effort and subsequent understanding felt like the opposite of experiencing an automatic brain process; e.g., a startled reaction. And how would the ability to grasp a chain of formal logical reasoning have favored survival among the prehistoric environments under which such thinking would have presumably evolved?
In addition to your answer, I’d appreciate any recommended books or articles for further exploration of these topics. Thanks!
Greg also asked:
Hi, here’s one more question related to hard determinism: Is hard determinism utterly futile?
Here’s what I mean: Take the often heard argument that criminals should be treated leniently because (certainly under hard determinism) they aren’t morally responsible for their crimes. But, if we are to apply hard determinism consistently, a censorious judge can no more help being censorious than a criminal can help being antisocial. And the ‘bleeding hearts’ can’t do otherwise than bleed, and those who are moved can’t do otherwise than heed.
Like some vast Punch and Judy show set into motion, everyone does what the bouncing atoms bid them do. Our impact on each other is essentially the same as that of colliding billiard balls.
And if I despair that free choice is an illusion, even that despair is not my own, but just another predetermined swerve of the synapses.
And if I despair that even my despair is determined even THAT despair is not freely chosen.
Under hard determinism, I have no agency whatsoever. Contra the compatibilists, being a hand puppet is hardly an improvement over being a marionette.
A final irony: in the discussions of hard determinism that I’ve run across, the writers often lapse into addressing the reader as if they have a choice of how to react to their exhortations; but I suppose the writers can’t help themselves.
Answer by Helier Robinson
First of all, the survival value of your thoughts IS the same as their truth value. False thoughts have no survival value except coincidentally, such as: you avoid walking under a ladder, believing that this averts bad luck, and then do not get shot in a street shootout immediately after; but such coincidences cannot be relied upon. Whereas if you believe that learning to swim has survival value, so you learn to swim and one day fall overboard and manage to swim ashore, then your true belief did have survival value. More accurately, all thoughts that do have survival value have to be true, but not all true thoughts have survival value; if you prove to your own satisfaction what is the only value of n that satisfies the equation n + n = n × n, the result is true but is unlikely to have survival value.
Second, free choice almost certainly IS an illusion. A supposedly free choice is either caused, or else it is not caused. If it is caused then it is not free. If it is not caused then it is a chance event and so not willed, so not a free choice. Putting this another way, causal chains of events stretch into the past and into the future. A free choice is the start of a new chain, having no past antecedents; but how can that be?
So if determinism is true then you have no free choice. Tough. And if determinism is false then there are chance events but you still have no free will. Tough.
Answer by Jürgen Lawrenz
There are some conditions in which hard determinism is consistent with human knowledge, and other conditions where it is not. An example would be driving on a single-lane bridge, which gives you no option of turning. A counter-example is your last-minute decision to eat a fish burger instead of a steak sandwich, although you were hungry for a steak. Assuming this is a spontaneous decision, there is no possible knowledge of the momentary states of your various organs (including the brain) that facilitates a comprehensive explanation, so that claims on behalf of hard determinism are not consistent with knowledge.
To settle your concern about any such claims being ‘justified’, you have the simple expedient of demanding proof from any proponent. Thus anyone who tries to persuade you that the change of mind from steak to fish was brought on by the momentary state of the total chemical configuration of your body (which is of course determined by the immediately preceding states etc. etc.) is doing nothing better than illicitly extrapolating our relatively meagre understanding of causal chemical mechanics onto organic processes, where in the main it does not pertain. Indeed many arguments in favour of hard determinism use intuition pumps as a preferred means of persuasion. Yet all conjectures about ‘momentary states’ (whether brain or body) ignore the fundamental fact that it is literally impossible to cut a temporal cross-section through a human body, or the human brain, with a view to ‘freezing’ the moment when a physical, chemical or mental configuration exists to justify the proposition. Moreover, it is impossible to say whether there is such a totality, nor can it legitimately be asserted that the claim itself makes any sense whatever, not to mention the time increments from one state to another.
I don’t wish to overstress another aspect, though it is relevant to the subject: Namely, that the source of this thinking is predominantly religious. I’m sure you can work this out for yourself.
In short, hard determinism, as applied to human intentionality, is a mere supposition, and often highly dogmatic in default of evidence in its favour. It may have its uses in some areas of intellectual and scientific effort, but on the whole it appears to me as a philosophically defective attitude, substituting conjectures (here a polite expression for sleight of hand) for the rigours of accounting for the interplay of spontaneity in living processes, which elude its grasp effortlessly.
Accordingly your gambit on evolution doesn’t work either. Thinking has very little to do with survival, as is shown by the fact that all creatures other than humans do no thinking at all about survival. Indeed, you might like to reflect on how many of your own thoughts are engaged in survival strategy; yet even if your answer is a (very high) ‘1%’, you would then have to wonder about the external conditions with which you are coping and how they and your survival thoughts managed to come together at the same moment in the small space you occupy. This is not discounting the probability that human survival may have been facilitated sometime in the Pleistocene by an enlargement of the brain, though again it is more likely to have benefited our sensory and perceptive faculties than thinking.
As for writings on these matters, not knowing your level of expertise, it is difficult. But if you are patient and not exclusively sold on the latest gimmicks of this branch of philosophy, you could try Leibniz’s New Essays on Human Understanding. In my opinion, understanding is precisely the absentee on many pages written in favour of hard determinism – and this goes right back to the Bible!