
Lisa asked:

How does Berkeley use Ockham’s Razor against John Locke?

Answer by Geoffrey Klempner

Another student assignment. I am going to make this easier for you, Lisa, by telling you what your teacher wants to hear. Then I am going to give my own view which you are totally free to ignore. In which case you don’t need to read past the third paragraph of my answer.

The story goes like this: In his Essay Concerning Human Understanding, Locke gave an account of the origin of our ‘ideas’ — sense impressions and the concepts based on them — in terms of the interaction of our sense organs with material reality. Bishop Berkeley looked at this and thought, ‘Hmm, I can give just as good an account without positing this extra entity, “matter”. No-one ever experiences “matter”. All we experience are perceptions. On my theory, all statements about so-called “material reality” are just conditional statements about actual and possible experiences.’

This is a classic example of the application of Ockham’s Razor, ‘Do not multiply theoretical posits unnecessarily.’ According to Berkeley, ‘matter’ is a theoretical posit that we can painlessly dispose of. Conditional statements about possible experiences are the ultimate truth about external reality. Job done.

First, a picky point. When physicists talk about Ockham’s Razor, they tend to mean something different from what a philosopher means in appealing to this principle. In physics, or science generally, not making unnecessary posits is a constitutive part of the task of constructing the simplest or most elegant theory. The most elegant theory can still be false. We can get fooled by reality, and things can turn out to be more complicated than we assumed, but in the long run we are less likely to be fooled if we follow the rule of preferring simple explanations to those that are unnecessarily complex.

In his Tractatus Logico-Philosophicus, Wittgenstein offers a radically different take:

If a sign is useless, it is meaningless. That is the point of Occam’s maxim.
(If everything behaves as if a sign had meaning, then it does have meaning.)
(Para 3.328)

Occam’s maxim is, of course, not an arbitrary rule, nor one that is justified by its success in practice: its point is that unnecessary units in a sign-language mean nothing.
Signs that serve one purpose are logically equivalent, and signs that serve none are logically meaningless.
(Para 5.47321)

On Wittgenstein’s reading, what Berkeley is saying is not, ‘I can give a more elegant theory than Locke.’ Just read Berkeley, and you will see how wrong that is. He repeatedly makes the point that ‘matter’ is a meaningless notion, a horrendous invention of philosophers, while it is plain ‘common sense’ that all we know or can ever think about are our own perceptions.

But here’s the rub: the attempt to reduce statements about the external world to ‘conditional statements about actual and possible experiences’ is a catastrophic failure. (If you’re interested in pursuing this, read Christopher Peacocke, Holistic Explanation: Action, Space, Interpretation, 1979.) Briefly, it is impossible to pin down ‘objects’ because every conditional statement refers to many, many more conditional statements. It’s like trying to solve simultaneous equations with too many unknowns: each new equation you write down brings new unknowns with it, so the system never closes.

I don’t think Berkeley thought the matter through to this point. It’s difficult when the only logic you know is the logic of Aristotle. However, what he did realize is that there is something fundamentally wrong with the notion that conditional statements can represent the ultimate truth about anything. A conditional needs a truth maker, a non-conditional fact in virtue of which the conditional statement is true. (If you’re inclined to doubt this, try it for yourself. Imagine that some conditional statement is ‘in fact’ the case, but there is no further non-conditional fact that accounts for its truth.)

Berkeley saw this quite clearly: his response was that all our perceptions are ultimately explained by the virtual reality blueprint in the mind of God. My answer has already been long enough, so I won’t explore this aspect of Berkeley further. (Do a search; this is a topic that has come up before on Ask a Philosopher.)

So, we threw out matter and brought in… God?

Ockham’s Razor?!

Orlando asked:

Is the “junkyard tornado” argument of Sir Fred Hoyle for the existence of God as bad as Richard Dawkins seems to think it is?

Answer by Geoffrey Klempner

What I recall — from Dawkins’ 1991 televised Royal Institution Christmas Lectures, which I sat through, spellbound — is that Dawkins accepts the ‘steep slope of improbability’ as a challenge which he believes can be met. Improbable as it may seem, computer modelling demonstrates that there is a series of relatively ‘small’ evolutionary steps leading, for example, to a fully formed wing or an eye.

But suppose Dawkins is wrong. I don’t see that it really matters. We can go further and assume that Darwin’s theory of evolution is complete rubbish, just as the creationists say it is.

Imagine that I gave you a typescript of the works of Shakespeare, and told you that it had been typed out, without a single mistake, by my pet chimpanzee. You would have every reason to disbelieve me.

However, we know that there is a finite a priori probability that what I have described might take place — in some possible world. Just work out the total number of actions that a chimpanzee could conceivably perform on a keyboard, including tapping a key, pressing shift or return, and so on, then raise that number to the power of the number of characters, spaces and paragraph breaks in Shakespeare’s works.

The result is a very large number, and the chance of producing the one correct typescript is one over it. But so what? That doesn’t show that it’s impossible. Only that I must be pulling your leg, by any reasonable standard for belief.
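Purely to make the shape of that calculation concrete, here is a minimal sketch in Python. The two figures it uses (about 60 possible keyboard actions, roughly five million characters in the complete works) are illustrative assumptions of mine, not careful estimates; only the form of the arithmetic matters.

```python
import math

# A toy version of the calculation sketched above. Both numbers are
# illustrative assumptions, not careful estimates.
possible_actions = 60             # assumed: letter keys, shift, return, space, etc.
length_of_typescript = 5_000_000  # assumed: characters, spaces and line breaks

# The number of equally likely typescripts of that length is
# possible_actions ** length_of_typescript, so the chance of producing
# exactly the right one is 1 over that. We report only its order of
# magnitude, since the full number has millions of digits.
order_of_magnitude = length_of_typescript * math.log10(possible_actions)
print(f"Roughly 1 chance in 10^{order_of_magnitude:,.0f}: finite, but astronomically small.")
```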

The story of a Boeing 747 being ‘assembled’ by a junkyard tornado out of aeroplane parts is more problematic, because one would first like to see a proof that various stages in the assembly are physically possible, regardless of improbability. For example, inserting rivets requires a riveting gun, otherwise you just don’t have sufficient steady force to do the job. Epoxy glue needs to be heated to the right temperature. And so on.

Well, let’s agree that the formation, say, of DNA from its chemical constituents is as improbable as, or even more improbable than, the chimpanzee story. The difference is that you and I are here, talking about this, so unless creationism is true in some form or other, we just happen to be extremely lucky. In our possible world — one in a gazillion — things turned out just fine.

Why believe that tall story rather than creationism? The familiar reasons, which I won’t repeat here. The argument I’ve just given isn’t going to convince anyone who is a true believer in the Bible, but it works just fine for any true believer in science who is against creationism on fundamental principle.

My gut feeling is that there’s a lot of work still to do before Darwin’s theory looks like a genuine theory rather than the most plausible conjecture. It’s only on the table because it doesn’t have any real competitor as a naturalistic account. I’m with Fred Hoyle that evolution is not exactly easy to believe.

As an historical digression, the Ancient Greek atomists didn’t have anything so fancy as Darwin’s idea about natural selection to work from. They may well have observed how when wet gravel is shaken in a sieve — as in panning for gold — the heaviest lumps move to the centre. In addition, by the principle of insufficient reason, there must be atoms of every conceivable shape, so when the ‘right’ atoms collide, they stick together like Lego bricks, eventually forming the world as we know it. (Computer model that!)

In other words, with only initial random motion, it is possible to have a physical system whereby entropy is reduced on purely natural principles. That was all the atomists thought they needed.
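For what it’s worth, the bare idea (random motion plus selective sticking producing persistent structure) is easy enough to sketch in a toy program. Everything below is my own illustrative invention rather than anything from the ancient sources: particles with arbitrary ‘shapes’ drift at random on a ring, and when two interlocking shapes meet they stick and stop moving.

```python
import random

# A toy illustration (my own invention, not from the ancient sources):
# particles with random 'shapes' drift at random on a ring; when two
# particles with interlocking shapes land on the same site, they stick
# and stop moving, so clusters slowly accumulate out of pure randomness.
random.seed(0)
SIZE, STEPS, N = 50, 20_000, 200
locks = {"A": "B", "B": "A", "C": "D", "D": "C"}  # which shapes interlock

particles = [{"pos": random.randrange(SIZE),
              "shape": random.choice("ABCD"),
              "stuck": False} for _ in range(N)]

for _ in range(STEPS):
    p = random.choice(particles)
    if p["stuck"]:
        continue
    p["pos"] = (p["pos"] + random.choice((-1, 1))) % SIZE  # random drift
    for q in particles:
        if q is not p and q["pos"] == p["pos"] and locks[p["shape"]] == q["shape"]:
            p["stuck"] = q["stuck"] = True  # they stick like Lego bricks
            break

print(sum(p["stuck"] for p in particles), "of", N, "particles ended up locked together")
```

It proves nothing about biology, of course; it only illustrates the atomists’ minimal point that crude order can accumulate on purely natural principles.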

Philip asked:

Is it possible that our innate sense of self, our egocentric outlook on the world, could be wrong? After all, our brains are never REALLY connected; so we cannot know for sure that our consciousness is REALLY separated by anything else than space and time. What I mean is; Is it possible that there could be only a single universal consciousness of which we are all a part? Could this REALLY be how it is? Or are there any philosophical arguments against this view? I haven’t been myself since I first had this thought and I’m dying for an answer. On the one hand it feels exhilarating, on the other hand it kinda kills one’s self image. What do you people think?

Timothy asked:

“Are we in a simulation?”

Though it is not strictly a logical approach to ask a philosophical question by reflecting on a feeling, I do feel it helps to add meaning and perspective for the question. I wanted to ask what the chances are we are living in a simulation? One reason I ask this is because of the infinitely unlikely possibility when looked at scientifically that I would ever be alive and less yet to be a human and in the era I could write this. Either this is like winning the lottery 1 Million times in a row or something else is at work that I am alive, as a human (my opinion the best thing to be), in this era with all this technology. Thoughts?

Answer by Geoffrey Klempner

It may not be obvious that these two questions are connected, but I immediately thought of the philosopher Arnold Zuboff. I first met Arnold back around 1974-5, when he gave a paper at Birkbeck College Philosophy Society. As President for that year, it was my duty to entertain invited guests at a local restaurant. Over dinner, he hit me with this question: How unlikely is it that I ever came into existence?

The question seems to smash science into a pile of rubble. There is something science cannot explain: why I am in the world. However, that’s not Zuboff’s view at all. A few months ago, I came across Arnold’s YouTube video, Finding Myself — And Undoing the Fear of Death as Annihilation. (The video is over two hours long, so you might want to make some sandwiches.)

In his presentation, Zuboff argues from science — or, rather, from a materialist view of the brain and its relation to consciousness — to the remarkable conclusion that there is only ONE subject, who is you, me, and every other conscious being in the universe.

As Philip says, ‘a single universal consciousness of which we are all part’. This is actually a view put forward, more or less tentatively, by Thomas Nagel in The View From Nowhere. It’s really just a way of looking at the ‘I’ question. The statement ‘I am TN’ when uttered by TN, or the statement ‘I am GK’ when uttered by GK, are both true and non-tautological, because the ‘I’ in each case refers to a singular entity Nagel calls the ‘Objective Self’.

The million-dollar question is what exactly this means. If it’s just a way of looking at the self and consciousness, then nothing is implied about the actual world that we don’t already know. Human beings are separate individuals, just as before. In other words, we’re just talking about a way of assimilating the consequences of materialism, getting comfortable with the idea. However, as an argument against the fear of death — which is what Arnold wants this to be — I don’t feel the least bit comforted by the thought that human life will go on after my material body perishes. I just don’t see the thing that Arnold ‘sees’.

On the other hand, if all this is just a simulation, if the entities of physics are not the ultimate reality but merely, say, patterns on an alien mega-computer chip or hard drive, then that puts a whole different complexion on things. I could be the one singular consciousness playing the ‘video game’ of human life, pretending to be, first, one particular person, then another particular person, then another. In the words of Alan Watts (whom I’ve quoted far too many times) ‘We are all It’. The singular entity, according to Hindu philosophy as articulated in the Upanishads, plays the game of forgetting who It is, and pretending to be you, me and everyone else.

I get it. We’ve all experienced what it’s like to remember something you had forgotten. Imagine if all these lost memories came flooding back at once. You would know that you were not who you thought you were, an ‘ego in a bag of skin’ (Watts The Book: On the Taboo Against Knowing Who You Are) but rather everyone. But I don’t believe that either. What problem would it solve? And wouldn’t the inevitable consequence be that I was, after all, alone in the universe? Just pretending to be a person in relation to other persons? How sad would that be!

For now, I prefer to think that, yes, it is a remarkable fact that I exist, but this isn’t necessarily the same ‘remarkable fact’ as the fact that GK exists. I don’t know what the conditions are for the existence or non-existence of the thing I am now calling ‘I’. Maybe, if GK hadn’t existed, I would be someone else, or maybe not. That’s all one can say until we have more to go on.

Milly asked:

Examine the view that our understanding of the universe is enhanced by our ability to distinguish appearance and reality.

Answer by Geoffrey Klempner

You shouldn’t judge a book by its cover, right? Raj, who joined the class at the beginning of term, looks a bit of a dimwit. He sits at the back of the class, never asks questions and his conversation consists mostly of monosyllables. Wrong! Turns out that Raj has an IQ of 187 (and you can continue the story from there).

It’s a truism that knowledge and understanding are advanced by questioning our preliminary judgements or the way things first appear to us. Looks can be deceptive. On the other hand, your initial judgement can be absolutely on target. That happens too. Human beings have a remarkable ability to suss things out swiftly, a valuable survival trait gifted to us by evolution. With people, especially. You can be right in your initial impression then subsequently get taken in by a web of lies because you didn’t trust your ‘intuition’.

The ability to distinguish appearance and reality is not confined to human beings. Mimicry as a method of self-defence is a widespread feature of the animal and insect worlds, which has led to an evolutionary arms race between the resourcefulness of the mimics and the ability of predators to see through the mimicry.

To the best of my knowledge, however, non-human animals do not have any ‘understanding of the universe’, so let’s put that aside.

Histories of Western philosophy identify the Presocratic philosophers Thales, Anaximander and Anaximenes as the first thinkers to question whether the world really is as it appears. Anaximenes held the remarkable view that every object in the universe is more or less compressed air. Despite appearances, you and I are compressed air. The chair you are sitting on is nothing but compressed air, and so on.

The Presocratics were the first thinkers to propose theories to explain the world and how it came to be the way it is. Up to that time, cosmogony, or accounts of how the universe came into being, was basically make-believe.

Here’s an example: once upon a time there was a male and a female god, who had sex and as a result the female god hatched a giant egg which cracked open revealing a fully formed world. — Dumb, right? However, as the Presocratic philosopher Xenophanes noted, the best theories of the time were ultimately not much more than guesswork. Human beings can never know what is really real, he thought. We entertain one another with more or less rational ‘accounts’ of the cosmos.

Fast forward 2500 years and we now have ways of testing at least some of these accounts. Einstein’s General Theory of Relativity predicted the bending of light by large gravitational fields, a prediction that was first confirmed in 1919. On the other hand, M-theory, or ‘string theory’ as it’s known, is still awaiting a practical test and may never be capable of being verified or falsified.

So what?

So far so good — for science, anyway. Most theories are testable, and the ones that aren’t still provide a useful occupation for theoretical physicists who don’t like to bother with finicky experiments and are happy with just a whiteboard and coloured markers. (They are much cheaper to run than the ones who insist on having time on the Large Hadron Collider, so universities love them.)

In metaphysics, on the other hand, the appearance-reality distinction has proved catastrophic. Over the centuries, philosophers have proposed ever more elaborate ‘theories of everything’ or accounts of ‘ultimate reality’ — from Berkeley to Leibniz to Kant to Schopenhauer, Fichte and Hegel — none of which commands any credence except as an historical curiosity. As a result, metaphysics has become exactly what Kant feared, writing in 1781 in his Preface to the First Edition of the Critique of Pure Reason:

“Time was when metaphysics was entitled the Queen of all the sciences; and if the will be taken for the deed, the pre-eminent importance of her accepted tasks gives her every right to this title of honour. Now, however, the changed fashion of the time brings her only scorn; a matron outcast and forsaken, she mourns like Hecuba: Modo maxima rerum, tot generis natisque potens — nunc trahor exul, inops.” [‘Greatest of all things by birth and power, now I am exiled and destitute.’]

Kant is as much to blame as any Western philosopher, with his hopelessly obscure theory of ‘Phenomena and Noumena’, which Hegel rejected only to replace it with an even more fantastical theory, according to which — despite all appearance to the contrary — ‘the Real is Rational and the Rational is the Real.’

Today, the term ‘metaphysics’ in English-speaking philosophy is largely used as a label for anything but the serious attempt to give an account of ultimate reality.

However, I happen to believe that it is possible to give an account of ultimate reality without invoking the appearance-reality distinction. If you want to know how, well that’s something I’m currently working on but you can start by looking at my blog Metaphysical Journal.

Jack asked:

I’m curious about the nature of “harm.” As I understand it, harm often requires measurable damage of some sort to the other party. But does that mean then, as long as the party is ignorant of the act, that no harm is done?

For example, X act is both illegal and immoral (in most cultures). If X act was done to an aware being, it would cause some form of harm (such as psychological, mental, social). To be clear, the harm done would require knowledge of act X. But if that same being were completely unaware of X act occurring to them, it would seem they would never notice the existence of X act and without awareness of X act, there could be no harm.

So then can it be said that X act isn’t an act of harm? That because it may not result in harm (if the other party is not aware of act X) in all instances, the most that could be said about it is that despite it being illegal and immoral 100% of the time, it isn’t an act of harm itself, it can only result in harm under certain conditions. Is this accurate?

The confusion lies in the idea of “harm” being an affront to another, regardless of consequences or measurable harm. If the other party is devalued in some way, regardless if they acknowledge it or feel effects from it, by devaluing a human being it can still be said to be harming them.

Answer by Geoffrey Klempner

The question of harm and its definition typically arises in relation to J.S. Mill’s Liberty Principle:

“The only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others.” (On Liberty, 1859)

But what exactly constitutes ‘harm’?

Before we go any further, we need to clear up an ambiguity where you say, ‘as long as the party is ignorant of the act then no harm is done’.

Let’s say that by subterfuge I get hold of your Ethics exam script before it reaches the examiner and make changes that result in your achieving a lower score than you would have otherwise. A few pages go missing. The word ‘not’ is added in a few places, and removed in others. When the results come in, and you see that you have failed the exam, you have no idea at all that any harm was done to you. You blame yourself for your bad performance.

That would be a case of harm even though you were unaware that you had been harmed. What you are very much aware of is the result of my malicious action: your disappointing exam result. That’s harm by any reasonable definition.

But what about a case where you don’t experience any negative effects of my action? Let’s say as I brush past you in the street, I stick a piece of paper on your coat with the words, ‘I am a dork.’ You carry on your way, blissfully unaware that people are sniggering behind your back. By the time you reach home, the piece of paper has fallen off. (Someone you know might see what I did and tell you, but for the sake of the example we’re assuming that doesn’t happen.)

As in the previous example, here is a flagrant case of harm, even though this time you experience no negative effect whatsoever.

More interesting and controversial are cases where a person feels ‘affront’, feels that they have been ‘harmed’, but despite feeling that way no harm has in fact been done to them. An example that has been cited is someone who feels affronted by the sight of a gay couple kissing in public. You have no right to feel that way, so no real ‘harm’ has been done, we would say.

The problem with this is that a different judgement would be made in New York or in Tehran. What would Mill’s view be? The whole point of the Liberty Principle is to cut through ethical disputes that depend on this or that person’s feelings or ‘intuition’ about what is right or wrong. Feelings of ‘harm’ that result solely from your holding ethical or religious beliefs that are different from someone else’s ethical or religious beliefs are not examples of real harm.

Mill wanted to see vigorous debate between different ethical and religious, or indeed anti-religious, views. Provided the debate is polite, he believed, and not conducted in an atmosphere of animosity, no ‘harm’ is done to any side.

Then what if I make a ‘V’ sign to your face, clearly intending to insult you? Maybe you feel deeply hurt and affronted. ‘You shouldn’t be so sensitive,’ I might say in my defence. The difference here concerns accepted conventions. The ‘V’ sign is widely read as an insult, and it was indeed my intention to insult you that resulted in your feeling insulted, i.e. harmed.

Bad language, or rude gestures, where they are meant to be hurtful to another person, are cases of ‘harm’. Whether the actions actually cause upset or psychological discomfort in a particular case is in fact irrelevant. What is important is the intention. By making a ‘V’ sign to you, intending this as an insult, I have caused you ‘harm’ — something concerning which, in Mill’s words ‘power can be rightfully exercised over any member of a civilized community’ — even if you just laugh it off. You ought to take exception to what I have done, because it was not acceptable behaviour, by Mill’s Principle.

On the other hand, hurting someone’s feelings, by making a statement that you had every right to make, is not an example of ‘harm’. Nor is punishment a case of ‘harm’, when justified and administered in the appropriate context. All this goes to show that the question of ‘harm’ is a question about rights.

Mill’s starting point is that I have the right to do any action if the only person harmed is myself. Others may be hurt by my actions, but they are only ‘harmed’ when a certain line has been crossed. Defining that line has proved to be quite difficult in practice.

You will find more on this topic in my answer Problems with J.S. Mill’s harm principle.

Ozzy asked:

Heraclitus says that everything is changing all the time. List at least two problems with believing this. Do you think that they can be overcome?

Miriam asked:

Could you please tell me what Socrates speech in Plato’s dialogue ‘Parmenides’ is about? in your opinion?

Paula asked:

Explain what it means for Plato to side with Parmenides more than Heraclitus.

Peter asked:

How did Plato resolve the problem of permanence?

Answer by Geoffrey Klempner

I’m taking these questions together because they all relate in one way or another to Plato’s theory of Forms. Parmenides and Heraclitus were Plato’s great predecessors. I am going to say something controversial here: Plato agreed with Parmenides and he also agreed with Heraclitus. They were both ‘right’ as far as he was concerned.

What are Forms? The common explanations I’ve seen of this are misleading at best. I remember, as a school student, first hearing in a Chemistry lesson about Plato’s belief that there is an ‘Ideal Table’, a heavenly Table that all actual tables more or less closely resemble. And similarly for all other things that we recognize and give a name to. This is complete piffle, as I will explain below.

In answer to each question:

No, Ozzy, there was no real problem for Heraclitus in the notion that ‘everything is changing all the time’, although there seems to be. If everything is changing all the time, how is it that we are able to refer to things, or recognize something as ‘the same again’? If everything is changing all the time, how is it that all sorts of things that we see around us don’t seem to change?

The answer isn’t ‘some things change very slowly so it isn’t a problem’ (I’ve actually seen this proposed by a dimwitted commentator) but rather that there is one thing that is universal and can never change: the Logos. The Logos describes the rules by which things appear to change or not change, transform into other things rapidly or over a longer period of time. If you have the Logos, you don’t need permanent ‘stuff’ as well. All there is, is the Logos and appearances. You can think of the Logos as ‘the laws of nature’, but Heraclitus had something much more abstract in mind than a specific set of rules. Logos is rationality, reason itself. The human soul is part of, or participates in, the Logos. That is how you and I are able to reason things out — because of a fundamental ‘fit’ between our minds and reality. Brilliant!

Miriam, I don’t know which ‘speech’ you are referring to specifically, but I can guess. In Plato’s late (and arguably greatest) dialogue Parmenides the young Socrates meets the elderly Parmenides (a meeting that so far as we know did not actually take place). In the first part of the dialogue, Parmenides quizzes Socrates on his thoughts about the Forms; then in the second part Parmenides gives his own theory of the One a thorough dialectical working over, with the young Aristoteles as respondent.

The first question Parmenides puts to Socrates is deceptively simple: what sort of things have Forms? Is there a Form for hair? How about a Form for mud? Oh, no! says Socrates, not that kind of stuff. ‘That is because you are still young,’ remarks Parmenides, condescendingly, ‘When you are older you will learn not to despise such things.’

Why would there be a Form for hair? Human beings have hair; it is one of their constant, universal attributes. Why is that? Why are any attributes of anything universal? Why don’t some ‘human beings’ have metal spikes instead of hair? Why do we find life in general divided into kinds? Mud looks more like a random mixture of stuff — you can have every kind of mud, and between any two different kinds of mud there is a continuum: just mix the two together. But if you think of mud as made from water and earth, then you may begin to see that the physical properties of mud are not so random after all.

Paula, you are just plain wrong in saying that Plato ‘sided more with Parmenides than Heraclitus’. Or, rather, your teacher is. (I’m guessing that this was a question you were given.) Parmenides realized that apart from all the things we talk about or perceive around us, the things about which we say, ‘it is’, or ‘it is not’, there has to be something that simply IS, full stop. Things change IN reality, but reality, what IS, cannot change. He called this the One. There are various ways of spelling this out — and much controversy over exactly what Parmenides meant — but I’ll keep things simple. A few days ago, you posted your question on Ask a Philosopher. That’s a fact. It will still be a fact in 1000 years’ time, or after the human race has become extinct. What IS, is, and can never not be.

As I’ve already explained, Heraclitus is not in fundamental disagreement with the idea that what IS is unchanging. The Logos IS. The notion that the Logos could be different from one day, or millennium, to another is something Heraclitus would never have entertained for one second. Reason just IS reason. ‘Fire’, ‘strife’, ‘war’ describe both metaphorically and literally the result of the continuing and unchanging hand of the Logos that ‘governs all things’.

Finally, Peter. What is the problem of permanence? Reading Plato’s earlier Socratic dialogues, the same question comes up again and again: how is it that we seem to have notions of the human virtues, like ‘justice’, ‘courage’, ‘temperance’ — as shown by the fact that we just know when an offered definition is wrong? His answer was: because these things simply are, and never change. What human beings mistakenly or correctly call ‘just’ or ‘unjust’ is determined by the unchanging fact of Justice itself.

But, as I’ve already explained, concrete reality shows the same features as our abstract ethical concepts. Gold is always gold, horses are always horses. Gold will always have the same density. Search as hard as you like, you won’t find a continuum of animals ranging between a typical horse and a typical lion. That is because they fall into ‘natural kinds’. The notion of an underlying explanation of things falling into kinds in terms of the atomist theory of Democritus and Leucippus seemed at the time far-fetched or even impossible. By the principle of ‘insufficient reason’ there are atoms of every shape and size, moving randomly. How could that possibly give rise to gold, or horses?

It could be argued that the greatest invention of all time was the lens (according to Wikipedia, dating from some time between the 11th and 13th centuries), because it led to the discovery of the ordered world of the microscopic, and eventually the possibility of explanation in terms of microstructure — the laws of physics, chemistry and biochemistry. Take that away, and there’s a massive gap that can only be filled by the notion that natural kinds arise from a Logos, an ultimate classifying principle built into the very nature of reality itself. In other words, Plato’s Forms.

What about the ideal Table, then? Plato actually uses this as an example in his dialogue Republic to explain the Forms, but he doesn’t mean simply that there is a Form of the table, in the same way as there is a Form of the human. Tables exist in a variety of shapes and sizes, but anything that is literally a ‘table’ that we use as a table (by contrast with Table Mountain, or a doll’s house table) references human need. A table is like the ground, but higher up for convenience. You can deduce the range of constructed items that satisfy the requirements of being a table from the Form of the human. For example, the range of possible lengths of a table leg. Unlike human beings, tables are made by carpenters according to specifications and plans. Folding, not folding, square, round, rectangular, and so on. This is like the way the Form of the human gives rise to the variety of human beings. But only like. It’s an analogy, nothing more.
