Wednesday, May 20, 2009
I have been thinking about truth quite a lot recently. "Isn't that what philosophers are supposed to do?" Not really.
I guess I have been a card-carrying Pragmatist for some time now, and as such I am usually pretty reluctant to make a big philosophical deal out of truth itself. I am more interested in what we might call 'truth markers' - evidence, justification, the social conditions for generating truth, and so on. Besides, most philosophical talk of truth is either banal (it reduces to 'truisms') or infected by the nasty bug of realist metaphysics (more about that on another occasion).
However - and there is nearly always one of those - Pragmatists are also fallibilists. They are practised in the art of holding firm to the truth of something while remaining aware that it could turn out to be false. Since I know my views could be mistaken, I like to test them now and then by, as Nietzsche puts it, "thinking against myself".
So I have been doing a lot of metaphorical head-scratching lately: "Have I got it all wrong about truth?"
Then this line of argument popped into my head. I'm sure it's fallacious, but I can't yet see why.
And, it seems to cement my Pragmatist views even more firmly in place. No more head scratching for the time being.
Here's the argument (the brief version): "Truth"/"true" have no philosophical synonyms. Suppose X were an appropriate synonym; then 'Theory of X' would do all the philosophical work of 'Theory of Truth'.
But, after centuries of effort, nobody has been able to come up with an X that fits the bill. Now, since language is not like mathematics, it is highly unlikely that there is such a synonym - language is not opaque in the way that mathematics can be, so there is no word that would do the job hiding somewhere on the far shores of language or deep in the pages of some huge dictionary. But we needn't go into this (it could trap us in the thickets of realism again). We can simply say that, for working purposes, there is no known synonym. And if there is no synonym for it, the word "truth" is not very informative (we know what to do with it, it serves a social function, but it doesn't 'say' anything directly - nothing else can be substituted for it).
But then it seems to follow that it is impossible to believe something simply because it is true.
Suppose I say to you "You should believe P", and when you ask why, I say "Because it is true" - then you start behaving exactly as if you believe P.
What is going on? Can you have come to believe P solely because it is true? Surely not. Since "true" has no synonym, I might as well have said "Believe it because K" without telling you what K stands for. Your coming to believe P must have involved something else: either your belief that when I say something is true it is usually true (as when you value my opinion of wine and I say that the 1995 merlot is a good wine) or something in the content of P that links up with something else, say evidence, that leads you to believe P.
For Pragmatists, truth is like a stamp of approval - not that interesting in itself. The important philosophical issues surround the circumstances in which it is awarded correctly, and the implications thereof.
Wednesday, May 13, 2009
What if things are going to end very badly?
Now when suffering is marshalled forth as the first among the arguments against existence, as its nastiest question mark, one would do well to remember the times when one made the reverse judgement because one did not wish to do without making-suffer, and saw in it an enchantment of the first rank, an actual seductive lure to life.
Nietzsche
Here is something to think about before you consider taking an intellectual holiday. If the world is going to end badly, is this a bad thing now? Normally, I guess we think of something as good or bad without relativising it to time. So what am I getting at?
The chances are that the world is going to end badly, and not just badly, but very badly. My question is: "If it is going to do so in the very distant future, does that matter now?"
Or, to put it another way, "Are distant events morally insulated from us by their very distance?"
Why do I say the world is likely to end very badly? Well, the most likely scenario, leaving aside humanity's prior self-destruction, is that the earth, and any planets we may have inhabited by then, will be burnt up by the sun. If the human population is very large, this will constitute an immense disaster. More people could die in the conflagration than have ever died before then. Pretty bad?
Intuitively, we might think: "Yes - obviously. We can now say categorically that this is a bad thing; 'distance' does not come into it."
But again, very bad? And, when reflected on now, when it apparently need have no unwelcome practical consequences for us?
Again we might simply think: "Of course. And, that's the end of it."
But, if it is going to be a disaster on a huge scale, shouldn't it have some causal effect on us now - other than that of getting us to agree that it's a very bad thing when we are asked about it?
What kind of causal effect? Here, I am not thinking about purely psychological effects - though it might be rational to get somewhat depressed by even the thought that the world can only end badly for us.
Here is a tough question: "Could a very bad ending subtract the value from human life now, rather like a bad enough ending to a play can ruin it in its entirety?" Is the question of whether each individual human life is worth living undecidable - because it cannot be settled until the final curtain drops on life as a whole?
I am wondering whether we can use such conjectures to test our value systems. Are they insulated from distant events - are they local in that sense? Do they only apply to what happens in our vicinity, so to speak, even when they are veiled in absolutist/universalist terms?
What about a person who has lived a good life, one that we would all agree was a valuable life? Can the value of their life be destroyed by something external, like the fact that it is all going to end very badly for everyone else in the distant future?
If we make a mental survey of human life over the fullness of time, we might picture a huge pile of meaningless misery that gets multiplied in size at the very end, with little pockets of light flickering through the mess from the 'valuable' lives lived up to that point. Do those lights mean anything? Can they signal the existence of value in isolation? Or does the total pile of misery overwhelm their significance (we leave animals out of this, but their pain and suffering could also be factored in)?
Looked at this way, the universe can seem like a pretty efficient machine for generating bad outcomes for sentient creatures (with just enough good outcomes over the shorter term to keep the show going).
But, human life is extraordinarily worthwhile despite the suffering it entails - isn't that obvious? Surely, only a morbidly out-of-touch philosopher with too much thinking time on their hands would doubt it?
But, wait a minute. Does that mean "However much suffering is involved"?
If you were given the option to build another universe just by pressing a button, with the one condition that it would mimic ours in every detail (so it would contain the same bad things happening to people, and the same bad ending), would you press the button? Should you? Or, more to the point, is it obvious that you should?
Note that any optimism on this score needs to be carefully separated out from distortions caused by our own comfortable place in the scheme of things. If you are reading and understanding this, you are very special - privileged, even (nothing to do with me!). To see why, first step back from our universe. Then, imagine taking part in a lottery where each ticket stands for one of the many people who have been born up to this point in history. Take a ticket. Your prize? You are now that person.
What are the chances of you being as well-off (materially and psychologically) as the person you were before you stepped back?
If this line of thinking starts to persuade you not to press the button, then what does that imply about our world? That it would be better if it had never existed (given that it is physically indistinguishable from the rejected possible world)?
The sort of questions we have been posing, questions about our place in the whole scheme of things, are what we can call cosmological questions. Hence, I signal 'Cosmology' as one of my interests in my profile. We seem to be one of the few civilisations that tries to operate without a cosmology, without a sense of how we fit into 'the whole scheme of things'. Instead, we have bogus cosmology, new age versions that tell us the universe owes us a decent living, and so on. Unfortunately, with a few notable exceptions (e.g. Derek Parfit and John Leslie), philosophers are not interested in the interesting questions here.
Perhaps future generations will have to cut down the population. Perhaps, if science can predict when the bad ending is likely, the human race will have stopped breeding before everything burns up, so there is no bad ending for anyone. Is that any consolation?
And, what should we make of the enforced finitude that the best scenario for us implies?
Another thought: are we local? Could the creatures destroyed when the world ends be so different from us that we would only identify with them weakly - we wouldn't even think of them as 'humans'? Here, we might decide: "Well yes, it's very bad. But for them, not for us?" However, if we thought they were much better creatures than us morally speaking, we might feel that the envisaged ending will be even worse than if it happened to us. Conversely, if we do not think much of them on that score, we could say "Well, it might not be such a bad thing after all?"
However, there would be another story to tell about what had happened to us in the intervening period. And, presumably that would concern us greatly.
Saturday, May 9, 2009
Intellectual Package Holidays Revisited
The cultures of the developed West are fraught with paradoxes. One of them concerns the disparity between the overall intellectual ethos in such cultures and the intellectual resources that are readily accessible. With regard to 'ethos', the dominant trend is towards dumbing down. There is no need to go into the dismal details. But, at the same time, if an individual wants to upgrade their thinking, the volume (and quality) of resources available is astonishing.
Hence, there has probably never been a better time to take advantage of my free Intellectual Package Holidays offer.
From time to time, I will make suggestions about where to go for an intellectual holiday. For reasons of safety, I will only recommend places I have already visited and know well. This might sound a bit restrictive, but don't worry, I have done a lot of mind travelling. Furthermore, as you get the hang of things, you will be able to make up your own itinerary. Indeed, I would welcome your suggestions in the 'comments' section.
First off, I recommend going to Ancient Greece. That might sound like an obvious suggestion, coming from a philosopher. But it's not. Why? I will tell you next time. How do you get there?
Also next time. However, if you want a preview, read The Last Days of Socrates.