I often mention the characterization postulate on this blog, and since it is such an important tool in object theories of all sorts, I figure it is time to provide a brief overview of just what it is supposed to be.
So why do object theorists need a characterization postulate (CP)? Well, the answer is that such a postulate is required for the epistemological adequacy of object theory. Existent objects pose no problem, because we can discover their properties by a posteriori means, i.e. through the use of empirical evidence. But such a procedure is for the most part unavailable to us when it comes to nonexistent objects. I say "for the most part" because we can indeed discover some properties of nonexistent objects through such means as dreams or hallucinations, but these procedures are far from exhaustive.
So what we require is a logical, i.e. a priori, means of discerning the properties of nonexistent objects, and this is where the CP comes in. In essence, the CP is a logical tool which allows us to do just that. To convey how important it is to object theory: the CP appears quite early in the process of logical construction. Indeed, once we have added descriptors to zero-order logic we can already bring the CP into play (though we needn't go into the technical details of that here).
But I should note that I have been writing as if the CP is one unique thing. This, however, is untrue; for we have many different CPs. The most natural one is the Unrestricted Characterization Postulate (UCP). This runs as follows:
UCP: An object has exactly those properties it is characterized as having.
This is quite natural and does a lot of work. Indeed, it is surely the first CP that comes to mind for the object theorist, and it is no doubt used in much argument and informal reasoning. But unfortunately, the UCP cannot be true tout court. This is for one very simple but devastating reason: namely, it allows us to prove the existence of any object whatsoever.
Consider the following object: "The existent non self-identical spider-eyed lamb". Let's call this object L. By the UCP, it follows that L is existent. But it is obvious that L is non-existent (it violates the law of identity). Therefore, the UCP is false.
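The triviality can be made vivid with a toy model (a sketch in Python, with all names invented for illustration): if we let the naive UCP fix an object's properties as exactly its characterizing set, then merely writing 'existent' into a characterization suffices for an "ontological proof".

```python
# Toy model of the UCP: an object's properties are exactly the set of
# properties it is characterized as having. Illustrative only; this is
# not a formalization of any particular object theory.

def characterize(*properties):
    """Under the naive UCP, to characterize an object just IS to give it
    exactly these properties."""
    return frozenset(properties)

# L: "the existent non self-identical spider-eyed lamb"
L = characterize("existent", "non-self-identical", "spider-eyed", "lamb")

# The UCP licenses reading off any characterized property:
print("existent" in L)            # True -- an instant 'ontological proof'
print("non-self-identical" in L)  # True -- yet L violates the law of identity
```

The same trick works for any object and any property whatsoever, which is exactly the devastating consequence described above.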
Now one might think we can get around this problem by somehow weakening our logic, by analogy to how we can avoid the paradoxes of naive set theory by weakening the underlying logic. But this option is not available to us, for the problematic consequences of the UCP do not depend upon any axioms or inference rules; rather, they depend only upon the presence of descriptors (i.e. term-forming operators like "a", "an", "the", etc.). And since eliminating descriptors from our logic is completely out of the question, we must look elsewhere for solutions.
So it is clear the UCP doesn't work. What is the object theorist to do? Well, he could simply persevere with the UCP, using heuristic rules to avoid the untoward consequences. No doubt this can be done (there is an analogy with how many textbooks of classical logic make use of naive set theory, even though naive set theory paired with classical logic leads to triviality), but such a route puts the object theorist on insecure logical footing.
Thus, it would seem that a better option would be to suitably restrict the CP. A radical restriction is what we might call the Existential Characterization Postulate (ECP).
This is as follows:
ECP: If an object exists, then it has exactly those properties it is characterized as having.
The ECP is surely true and quite unobjectionable. Indeed, it is true under the mainstream philosophical theories such as empiricism, idealism, and materialism. But for a full object theory the ECP will not do. For it is both far too restrictive (in that it tells us nothing about nonexistent objects) and it is technically redundant (since we already have empirical means at our disposal for discerning the properties of existent objects). So we will need to look elsewhere to find a CP that does some real work.
One way to do so is by expanding the ECP to what we might call the Possibilist Characterization Postulate (PCP). This runs as follows:
PCP: If an object is possible, then it has exactly those properties it is characterized as having.
This will no doubt appear quite attractive to philosophers of a rationalist persuasion. But while it might seem a real advance upon the ECP (since now we are able to do real work in discerning the properties of nonexistent objects), this is merely illusory. For in one sense the PCP is too restrictive, while in another sense it is far too permissive.
Let us first consider how it is too restrictive, by referring back to our old friend L. What does the PCP tell us about this object? Well, nothing at all, because L is an impossible object and the PCP tells us only about possible objects. Now of course, the rationalist object theorist won't actually consider this a weakness, since for him no object is impossible. But the advantages that consideration of impossible objects brings (too numerous to go into fully here, but they include such benefits as a resolution of the semantic paradoxes) make this in my opinion an unacceptable stance to take.
Secondly, the PCP is too permissive because it still allows for the unacceptable ontological arguments mentioned earlier, although of course only restricted to possible objects. For the existent golden mountain (call it M) is certainly a possible object. So by the PCP, M exists. Indeed, it seems that something like the PCP is at work in both Descartes' ontological argument and in the principle of plenitude (viz. the notion that every possible object exists).
Of course, we can duly restrict the PCP, leading to what we might call the Qualified Possibilist Characterization Postulate (QPCP), which runs as:
If an object is possible and does not exist, then it has exactly those properties it is characterized as having.
The QPCP certainly gets rid of the untoward ontological consequences of the PCP, but it is still too restrictive. The classical rationalist who wants to avoid the ontological argument and the principle of plenitude will no doubt rest easy with it. But I think we can do better.
Now, instead of restricting the CP by applying it only to certain types of items (as the previous postulates do), we can restrict it in other ways too. One quite natural way is to apply it only to certain types of properties. A familiar distinction among object theories is that between nuclear and extranuclear properties. In brief, nuclear properties are the ordinary properties of individuals: just those features which delineate what we might call the 'nature' or 'essence' of an object. Extranuclear properties do not. Alternatively, we might say that nuclear properties apply directly to the object, whilst extranuclear properties in some sense depend upon the object's nuclear properties.
Such a distinction may appear ad hoc to some, but it actually has a clear pedigree within the philosophical tradition; consider Kant's distinction between determining and non-determining predicates, or the Frege-Russell distinction between first-level and second-level functions.
Perhaps the simplest way to lay out this distinction is to list some examples. Standard nuclear properties include such garden-variety properties as 'red', 'tall', 'kicked', 'walked', etc. Extranuclear properties include such things as: ontological properties ('existent', 'nonexistent'), logical properties ('is consistent', 'is inconsistent'), status properties ('is contingent', 'is impossible'), and converse intentional properties ('is thought about by Larry', 'is dreamed of by Ron').
With this distinction in mind we can now formulate a Nuclear Characterization Postulate (NCP), delineated as:
An object has only the nuclear properties it is characterized as having.
It is clear that the NCP allows us to completely avoid the problem of being able to simply define objects into existence (since existence is an extranuclear property), and it is also expansive enough to account for impossible objects. So as a theoretical device the NCP is quite attractive, but it has problems of its own. The first is that it leads to untoward consequences concerning relations between existent and nonexistent objects. Consider the fact that Sherlock Holmes lives at 221B Baker Street. By the NCP, Holmes inhabits 221B Baker Street. But 221B Baker Street is an existent object, and it was never inhabited by Holmes, since it is verifiable through empirical means that it never contained Sherlock Holmes as a resident.
For a natural way around this difficulty, we can formulate a Qualified Nuclear Characterization Postulate (QNCP), as follows:
An object has only the one-place nuclear properties it is characterized as having.
Naturally, the QNCP requires that we have some means at our disposal to reduce multi-place predicates to one-place properties. There are several ways to do that, and we needn't go into the technicalities here. But suffice it to say, it is clear that if Holmes has the one-place property 'inhabits-221B-Baker-Street', it does not follow that 221B Baker Street has the one-place property 'is-inhabited-by-Sherlock-Holmes', since one-place predicates generally do not imply other one-place predicates, unless we have suitable axioms or meaning postulates in place.
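The point can be sketched with a toy model (all property names here are invented for illustration): under the QNCP, characterization bestows one-place properties on the characterized object alone, so no converse property accrues to the existent street unless a separate postulate supplies it.

```python
# Toy QNCP model: characterization assigns one-place nuclear properties
# to the characterized object only. Illustrative names throughout.

properties = {}  # object name -> set of one-place nuclear properties

def qncp_characterize(obj, *one_place_props):
    """Give obj exactly the one-place properties it is characterized
    as having; nothing follows for any other object."""
    properties.setdefault(obj, set()).update(one_place_props)

qncp_characterize("Holmes", "detective", "inhabits-221B-Baker-Street")

# Holmes gets his characterized property...
print("inhabits-221B-Baker-Street" in properties["Holmes"])  # True
# ...but the converse property would need a separate axiom or meaning
# postulate; characterization alone puts nothing on the existent street.
print("is-inhabited-by-Holmes" in properties.get("221B Baker Street", set()))  # False
```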
But, as should be no surprise by now, there is yet a further problem lurking in the background, and indeed, it's a problem facing all the previous postulates. Namely, how are we to distinguish between such objects as 'the round square' and 'the existent round square'? The QNCP does not tell us whether these characterizing descriptions denote separate objects or one and the same object. One route we can take is to simply delete the extranuclear property of 'existent' from the second characterization, and conclude that both descriptions denote one and the same object.
But we can also avoid the problem by a new and expanded characterization postulate. We might call this the Suppositional Characterization Postulate (SCP). This is as follows:
An object has the one-place nuclear properties it is characterized as having and for every extranuclear predicate P it is characterized as having, it presents itself as having P.
The idea in its fleshed-out form is due to Routley, but it has roots going all the way back to Meinong's notion of "watered-down properties". Essentially, what we are doing here is systematically producing nuclear analogues of extranuclear properties. We can easily see how the above problem is then solved: the existent round square presents itself as existing, while the round square does not.
It seems that we might have pushed the characterization postulate as far as it will go. The SCP doesn't appear to run into the sorts of untoward consequences which the previous CPs ran into, and at first glance it appears that we cannot extend it any further without running into the triviality problem of the UCP. But that is actually not the case, for there is indeed a CP that is equal in scope to the UCP but which does not run into triviality. This is the Qualified Characterization Postulate (QCP). It runs as follows:
An item has all the properties it is characterized as having at some world or other.
The QCP really does all the work which the UCP tries to do, except that the work is made logically tractable through worlds semantics. It is important to note that the worlds in use here are not restricted to the possible worlds of modal semantics; rather, the QCP makes full use of ultramodal worlds, such as incomplete, inconsistent, and open worlds. (We could very well restrict it to possible worlds only, yielding a modalized version of the PCP. Jaakko Hintikka seems to have had just such an idea. But I would still say that this is far too restrictive.) Note also how it solves the triviality problem: we can indeed run an ontological argument to prove the existence of any item, but that does not mean we have proven that the item exists at an actual world. Indeed, it might very well exist only at impossible worlds, and thus would still be nonexistent at actual worlds.
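A minimal worlds model (the world names and helper functions are invented for illustration) shows how the QCP defuses the ontological argument: characterization locates the item's properties at *some* world, which need not be the actual one.

```python
# Toy QCP model: items have properties only relative to worlds.
# World names and the characterizing world assignment are illustrative.

facts = set()  # (item, property, world) triples

def qcp_characterize(item, props, world):
    """The QCP: the item has all its characterized properties at SOME
    world or other -- here, whichever world we pass in."""
    for p in props:
        facts.add((item, p, world))

def has(item, prop, world):
    return (item, prop, world) in facts

# The 'existent round square' is characterized at an impossible world:
qcp_characterize("existent round square",
                 ["existent", "round", "square"],
                 world="w_impossible")

print(has("existent round square", "existent", "w_impossible"))  # True
print(has("existent round square", "existent", "w_actual"))      # False
```

The ontological argument goes through, but only at the impossible world; nothing has been proven about existence at the actual world.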
So that is where our journey ends. To be sure, we have skipped over some CPs one can find in the literature; but these are generally quite technical and beyond the scope of this post. But now we face an important question: which CP should we use? Object-theorists have given different answers to this question throughout the centuries. Meinong held to something like the NCP. Neo-Lockean object theories like that of Parsons tend more towards the QNCP. Classical item theory employs the SCP. Priest and Berto's 'Modal Meinongianism' uses the QCP.
But there's no a priori reason why we should only use one CP; for we can indeed use a variety of them, as the circumstances dictate. Indeed, this is the idea behind the pluralized item theory in Routley's later work; i.e. different sorts of CPs apply to different sorts of worlds. For instance, the SCP might apply at actual worlds, the PCP can apply at possible worlds, and the UCP can apply at some impossible worlds (with triviality now not being a problem, since we should expect some impossible worlds to be trivial). In fact under this approach the QCP becomes redundant, seeing as our plurality of CPs can do everything the QCP can. Indeed, it can do even more, since now we can determine the properties nonexistent objects have at actual worlds, a question Modal Meinongianism leaves unanswered (this is why Priest and Berto have to appeal to existence-entailing properties, as we've discussed in a previous post).
So as we can see, the Characterization Postulate is a deep and fascinating aspect of object theories that is worth careful study. There is much more to be said about the topic, but now is as good a stopping point as any.
Monday, January 18, 2021
Sunday, August 30, 2020
In this post I should like to briefly present Alexius Meinong's critique of idealism. Since this critique is not well known among philosophers, I think that it will be most fruitful to present it in a succinct way. But in addition, I should also like to present some counter-arguments to Meinong's critique, with a view ultimately to steelman his critique and to place it on sturdy ground.
I should note at the outset that the idealism with which I will be concerned in this post is metaphysical idealism, which is the view that reality is at bottom mental. This thought is usually cashed out with the expression that reality is essentially composed of ideas. Let's get right to it and present the argument in deductive form, before unpacking it in greater detail:
P1: If idealism is true, then everything is an idea.
P2: All ideas are existing objects.
C1: Therefore, if idealism is true, everything is an existent object. (P1,P2)
P3: But some objects do not exist.
C2: Therefore, idealism is not true. (C1,P3)
P1 is just a description of idealism, so it needn't detain us. P2 should be fairly uncontroversial, at least to the philosophical mainstream. It has been standard doctrine throughout the history of the subject that ideas and minds are existing objects. So with both of these premises on board, C1 follows by modus ponens.
P3 is where the trouble lies. For it is a corollary of C1 that such objects as unicorns exist. Now, one might think that this can't be correct, because the idea of a unicorn is surely different from a unicorn itself. But this distinction is not available to the idealist, since he considers all of reality to be composed of ideas. So for the idealist, the idea of a unicorn just is a unicorn. And since the idea of a unicorn is an existent object, a unicorn must also be an existent object.
But surely unicorns don't exist? Maybe some idealists would be willing to bite the bullet here and say that unicorns actually do exist. But there is further trouble in store. For we also have an idea of the non self-identical round square, which is an existing object. But as we said earlier, the idea of the non self-identical round square just is the non self-identical round square. So, under idealism, the non self-identical round square exists. But it is a truth of reason that such an object cannot possibly exist. Therefore, C2 follows by modus tollens.
Now, of course, there is a way for the idealist to counter this argument: all he need affirm is that all ideas are nonexistent objects. Then he would be in no danger of affirming the existence of such preposterous objects as the non self-identical round square. It would now appear that the idealist is on safe ground.
But this is only an appearance; for the above argument need only be slightly tweaked to deal with this new variant of idealism. To wit:
P1: If idealism is true, then everything is an idea.
P2: All ideas are nonexistent objects.
C1: Therefore, if idealism is true, everything is a nonexistent object. (P1,P2)
P3: But some objects do exist.
C2: Therefore, idealism is not true. (C1,P3)
I don't think we need to spend too much time exploring how this new argument is supposed to work. For under the new variant of idealism, such commonplace objects as trees, dogs, and chairs would be counted as nonexistent objects. This might not seem objectionable to one with a proclivity towards mereological nihilism, but even the mereological nihilist affirms the existence of fundamental particles, while the idealist of this variety must deny the existence of even these. So it would seem that this new variant of idealism also runs into serious difficulties.
But what if the idealist wanted to allay the criticism by distinguishing between existent and nonexistent ideas? This sounds good at first glance, but it also runs into many problems. For one thing: how are we supposed to make the distinction in a reliable way? One possibility is to count those ideas that are actually thought of as the existent objects, and all those ideas that are not thought of as the nonexistent objects. But this runs into two difficulties. Firstly, all the problematic objects mentioned earlier (viz. unicorns, the non self-identical round square, etc.) have actually been thought of by many minds, so these ideas must be counted as existent objects. But quite apart from that, I'm not sure that the notion of an 'idea that no mind has thought of' is even coherent. For surely an idea just is something thought of by a mind; so what sense is there in supposing that some ideas are not thought of by any mind?
Perhaps we might want to effect this partition by holding that all perceptual ideas are existent objects, while all conceptual ideas are nonexistent objects. (Briefly, an idea is said to be perceptual if it is based in some way on the senses; while an idea is said to be conceptual if it is merely based upon the relations of concepts, with no attendant sense-data). But this doesn't work either; for many clearly nonexistent objects stand in perceptual relations. For instance, many people have dreams or hallucinations of such fantastic beasts as unicorns, and I would not be surprised at all if some people have had perceptual experiences of such absurd objects as the non self-identical round square.
So to conclude, it would appear that idealism falls into the same trap as many of the mainstream metaphysical theories on offer (including empiricism, materialism, phenomenalism, reism, process philosophy, etc.): the trap of reductionism. In other words, all of these philosophies try to reduce the panoply of items in reality down to a single type of item, call it 'X'. But this invariably leads to trouble, because X will inevitably have various properties that cannot possibly apply across the board. It is my studied opinion (and I wholeheartedly agree with Meinong on this) that any adequate metaphysical theory will not be reductionist, and will have to account for all the various types of objects in reality.
Wednesday, May 13, 2020
To begin, let us imagine that we have what appears to be a perfectly normal human; let's call him Todd. Todd was born and grew into adulthood under fairly unremarkable circumstances. But there is one remarkable fact about Todd: he appears to have an indefinite lifespan. More particularly, he remains in a continuous state of early adulthood for centuries or perhaps millennia on end. Yet current medical science can find no explanation for why he seems to possess eternal youth. Indeed, for all we know, Todd may well be immortal.
On the other hand, it may very well be that Todd will indeed die of 'old age' at some indeterminate point in the future, for he could just be undergoing the exact same aging process, just at a much slower rate than the average human. So, with all the best evidence we have at our disposal, it is indeterminate whether Todd is really immortal.
Now with all that being said, we must ask ourselves: does Todd's life have meaning? Perhaps it is not fair to pose the question in so broad a manner, so let us be more specific. For any age T, where T is far greater than the age of the longest-lived humans, does Todd's life have meaning at T? Suppose T is 200 years. Is Todd's life meaningful at T? Maybe the great majority of us would be willing to concede that it is.
So let us now increase T to something like 500 years old. Is Todd's life still meaningful? If the answer is still 'Yes', then let us increase T yet again. If we keep following this procedure, one of either two outcomes will take place: Either we will change our answer to 'No' at some sufficiently high T, or we will always answer 'Yes', no matter how high T is.
Let us consider the first possibility. If we do change our minds at some sufficiently high T, then we are suggesting that T is the limit age for a meaningful life. So if Todd is closely approaching T, he will soon be faced with living an indefinitely long life of complete and utter meaninglessness. What should he do with this information? Should he take the drastic measure of committing suicide just as T arrives, thereby ensuring that he has lived a meaningful life? But how can it be required of someone to commit suicide in order to ensure that they have lived a meaningful life?
If he doesn't go down this road, then should he just fall into despair and resign himself to a life of meaninglessness, even if there will surely be an endless variety of potential endeavors to which he could apply himself? But the most important question of all here is: how exactly can we non-arbitrarily determine the limit age for a meaningful life?
But let us suppose that we don't go down this road, and that we will always answer 'Yes' when asked if Todd's life is meaningful at any arbitrarily high T. If this is the path we take, then we have literally said nothing else than that an indefinitely long life can be meaningful, and thus we have rejected the notion that death is necessary for a meaningful life. So those who defend the idea that death is needed for a life to be meaningful seem to be stuck between a rock and a hard place.
We might also add that if an indefinitely long life is a meaningless life, then it follows that AI programs cannot have meaningful lives (if we use the term 'life' in this context to mean something like 'existence' or 'conscious experience', since AI programs clearly don't have biological lives). Since AI programs can be implemented on a variety of physical devices and since copies of them can be easily produced, they quite literally have indefinitely long lives. Should we then say that their lives are doomed to be meaningless? But surely AI programs, especially as physically implemented in robots, can engage in a multitude of meaningful endeavors. The simple fact that such meaningful endeavors do not have a foreseeable end doesn't seem at first glance to automatically write off the possibility that their lives can be meaningful.
Somewhat related to this point, if mind uploading becomes a viable possibility in the near future, then human minds will be in the exact same situation as AI programs. For once we have the ability to upload our minds onto computers and to make backup copies of them, then our lives will then be indefinitely long. Should we then say that our lives can only remain meaningful if our minds are encoded on biological brains? But why should biology be so closely connected to a meaningful life?
So to conclude, I don't view the notion that death is necessary for a meaningful life as being self-evident. In essence, what I would like to see in defense of this notion is at least something by way of decent argument. But quite apart from that, I would like to see more discussion on possible ways to live an indefinitely long, yet meaningful life.
Friday, May 1, 2020
The idea comes from the work of Immanuel Kant. To wit, he drew two distinctions: between a priori and a posteriori knowledge, on the one hand, and between analytic and synthetic judgments, on the other. Simply put, a priori knowledge is any knowledge derived independently of experience; prime examples are mathematics and logic. A posteriori knowledge is that which is gained through experience; obvious examples come from the natural sciences.
Analytic judgments are propositions in which the predicate is conceptually contained in the subject. One example of this would be "All 3-dimensional bodies occupy space". If we understand the terms used, then we can see at once that the predicate "occupies space" is part of the meaning of the term "3-dimensional body". It follows at once that in an analytic judgment the predicate does not add any new information. Synthetic judgments are propositions in which the predicate is not conceptually contained in the subject. An example of this would be "All tigers are located on earth." We can see that in a synthetic judgment the predicate does indeed add new information.
Having this in mind, the question of how these two dichotomies are related naturally suggests itself. It seems clear that there are analytic judgments that are known a priori: purely conceptual propositions about the meanings of terms provide an obvious example. It is equally apparent that there are synthetic judgments that are known a posteriori, the empirical propositions of the natural sciences being examples of these. But can there be synthetic judgments that are known a priori? That is to say, can there be judgments in which the predicate adds new information, but which can be known independently of experience? As Kant first adumbrated, this question is really the question of the possibility of metaphysics in general, since metaphysics proposes to be a purely a priori discipline that provides us with new information about ultimate reality. (Of course, by 'metaphysics' here I mean metaphysics as first philosophy, and not the new naturalistic metaphysics that is now in vogue.)
Now I am not interested for the moment in answering this particular question, so I will just take it as a given that there can be such propositions. What I am interested in exploring is, given that we do have such propositions, what are the various possible grounds for coming to acquire them? In what follows I will attempt to categorize the different possible ways of obtaining this knowledge.
To begin, we should note that empirical investigation does not provide a sufficient ground for synthetic a priori propositions, for it can only ever provide us with a posteriori knowledge. Nor does the characterization postulate work, for it only ever provides us with analytic a priori judgments about the nature of objects. But one obvious ground is that which Kant himself provided: namely, the transcendental intuitions of sensibility and the categories of the understanding. On this view, synthetic a priori judgments are grounded in the very structure of the human mind.
I think another possible ground is the Cartesian doctrine of clear and distinct ideas. To wit, we can gain access to synthetic a priori truths through an intuitive grasp of their content; the idea being that we can tell immediately, using nondiscursive methods, that certain synthetic propositions are apodeictic, thus delineating them as synthetic a priori truths. We can appeal to the example of intuitionism in ethics here.
Perhaps another way is the doctrine of anamnesis, or recollection, familiar from Plato's work. Under this doctrine, we first gain knowledge of synthetic a priori truths prior to our births by means of some form of direct experience. After our births we retain some faint memory of these experiences, and they can be uncovered through various means (whether mystical, rational, or otherwise).
Finally, there is divine revelation. Under this model, a deity or group of deities uses some means or other to directly inform us of synthetic a priori truths, and the very quality of such revelations provides epistemological assurance of their truth. Such revelations can come in a variety of forms, with scriptural inclusion and theophany being obvious examples. The divine revelation approach is also of interest because it provides some very intriguing connections between philosophy and theology.
That's all I have for this post. I just find this a perennially interesting topic, and I wanted to be sure to record my current thinking on the matter. Please be sure to let me know if I have missed out on any other possible methods.
Wednesday, March 25, 2020
Meditations on Dialectical Logic I: Should the Law of Non-Contradiction be a Theorem of Dialectical Logic?
The question to consider is whether the Law of Non-Contradiction (hereinafter the LNC) should be a theorem of a dialectical logic. Before we begin, let us have some preliminary understanding of what a dialectical logic is supposed to be. In what follows, we will understand a dialectical logic to be any logic that is paraconsistent, simply inconsistent, and contradictorial. Allow me to explain what these terms mean:
1. A paraconsistent logic is any logic which does not contain the Spread Rule, viz. A & ~A / B. I prefer the term "Spread Rule" here because, as we will see, there are some dialectical logics which include EFQ as a theorem, viz. (A & ~A) -> B. It is quite reasonable to expect a dialectical logic to be paraconsistent, since if it weren't, we would be led at once to trivialism.
2. A simply inconsistent logic is one which includes theorems of the form A & ~A. Thus, we might also say that a simply inconsistent logic is one wherein there are theorems which are both true and false at the same time and in the same respect.
3. A contradictorial logic is any inconsistent logic which has the Adjunction rule, viz. A, ~A / A & ~A. This precludes a number of paraconsistent logics, such as non-adjunctive systems and preservationist logic, from being dialectical logics; since these systems only allow for distributive contradictory statements, while simple contradictions on these systems immediately explode.
So, with that being said, which paraconsistent logics can count as dialectical logics? Well, that would be the Logics of Formal Inconsistency (LFIs), the many-valued paraconsistent systems, and the deep relevant logics. Clearly, all of these logics are both paraconsistent and contradictorial, and they can all perfectly well be simply inconsistent. The many-valued paraconsistent systems are simply inconsistent by design (due to the inclusion of a paradoxical truth-value), but we can ensure the simple inconsistency of the LFIs and the deep relevant systems by including a determinate contradictory thesis among the axioms, such as p & ~p.
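To see how a many-valued paraconsistent system can be simply inconsistent without spreading, here is a sketch of the three-valued tables of Priest's LP (the Logic of Paradox), where both True and Both are designated values. The numeric encoding is a standard textbook presentation, not tied to any particular formalization.

```python
# A sketch of LP (Priest's Logic of Paradox): values T, B, F, with T and
# B designated. B models a 'glut', i.e. both true and false.
T, B, F = 2, 1, 0          # ordered F < B < T
DESIGNATED = {T, B}

def neg(a):      return 2 - a   # ~T = F, ~B = B, ~F = T
def conj(a, b):  return min(a, b)
def disj(a, b):  return max(a, b)

# Paraconsistency: A & ~A can be designated while an arbitrary C is not,
# so the Spread Rule fails.
A, C = B, F                     # A takes the glut value; C is plain false
print(conj(A, neg(A)) in DESIGNATED)  # True: the contradiction holds...
print(C in DESIGNATED)                # False: ...but C does not follow.

# Note that the LNC is still a theorem of LP: ~(A & ~A) is designated
# under every valuation of A.
print(all(neg(conj(a, neg(a))) in DESIGNATED for a in (T, B, F)))  # True
```

The last line is worth flagging, since it bears directly on the question below: in LP we countenance gluts and yet retain ~(A & ~A) as a theorem.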
So we have our categorization of dialectical logics; now we need to get clear about what exactly our question is. What do we mean by the LNC? For the purposes of this essay, we will be considering the LNC in its syntactical formulation, i.e. we will be asking whether ~(A & ~A) should be a theorem of dialectical logic.
So to begin, let us consider the reasons why someone might think the LNC should not be a theorem. Newton da Costa, one of the pioneers of paraconsistent logic, included the absence of the LNC as one of his adequacy criteria for a dialectical logic. If we are ready to countenance some sentences of the form A & ~A, then it does at first glance seem reasonable to conclude that ~(A & ~A) should therefore not be part of our dialectical logic. When we dig into the motivation behind this worry, the operating assumption seems to be that negation must function radically differently under dialectical logic.
What is more, if we are particularly interested in providing formal analyses of, for example, Hegelian dialectics, meaning we want to adhere as closely as we can to what the man himself thought, then it might seem only natural that we should reject the LNC as a theorem. For Hegel himself explicitly rejects this principle in the Science of Logic, so shouldn't a formalized Hegelian dialectical logic also reject the LNC? It is a similar story when trying to formalize Buddhist logic. For, as we have discussed in previous posts, the Catuskoti explicitly rejects both the LNC and the Law of Excluded Middle. So it seems that a dialectical logic without the LNC would be the right tool to use in this scenario as well.
There is also a third argument we can give. Namely, as dialecticians we might be concerned with limiting the number of contradictions in our theory. For if we do have the LNC as a theorem, then for any contradictory thesis of the form A & ~A, we will have another contradictory thesis of the form (A & ~A) & ~(A & ~A). But since this is a new contradictory thesis, we will have yet another thesis of the form ((A & ~A) & ~(A & ~A)) & ~((A & ~A) & ~(A & ~A)), and so on, ad infinitum. One might find this result objectionable, and rejecting the LNC as a theorem would be a natural way to contain it.
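The regress is mechanical enough to generate explicitly; a toy sketch (the string-building here is purely my own illustration):

```python
# With the LNC as a theorem, each contradictory thesis C yields a new
# contradictory thesis (C) & ~(C), ad infinitum.
def next_thesis(c):
    return f"({c}) & ~({c})"

thesis = "A & ~A"
for _ in range(2):
    thesis = next_thesis(thesis)
    print(thesis)
# (A & ~A) & ~(A & ~A)
# ((A & ~A) & ~(A & ~A)) & ~((A & ~A) & ~(A & ~A))
```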
Now let us consider the reasons why a dialectician might want to include the LNC in his logic. The first and most obvious reason is that we want to ensure that the contradictions we are making true are actual contradictions; and the best way to do this is to ensure that the negation in our logic is a contradictory-forming operator. To make this more concrete, let us consider the familiar example of the square of opposition. Recall that in the traditional square, the diagonal corners form a contradictory relationship, typically explained as the impossibility of the opposite corners having the same truth value in the same way at the same time; or, in symbols, ~(A & ~A). This is as solid an understanding of contradiction as one is going to find. So if we want to include contradictory theses in our system, we had better ensure that such theses really are contradictory.
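It is worth noting that a dialectical logic can deliver this: in the three-valued Logic of Paradox, for instance, ~(A & ~A) comes out designated on every valuation, and hence is a theorem. A quick check (the encoding is my own sketch):

```python
# LP values: T (true), B (both), F (false); T and B are designated.
ORDER = {'F': 0, 'B': 1, 'T': 2}
neg = {'T': 'F', 'B': 'B', 'F': 'T'}

def conj(a, b):
    # Conjunction is the minimum under the order F < B < T.
    return min(a, b, key=ORDER.get)

# ~(A & ~A) is designated under all three values of A, so it is a theorem.
print(all(neg[conj(a, neg[a])] in {'T', 'B'} for a in 'TBF'))  # True
```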
But there is also a second reason why we should include the LNC as a theorem. Namely, the thought that the inclusion of theses of the form A & ~A in our logic necessitates a rejection of ~(A & ~A) is nothing more than the Consistency Assumption. More precisely, it embodies the belief that if we have accepted a certain thesis A, then we are thereby obliged to reject ~A. But this sort of reasoning is just what we have rejected in formulating a dialectical logic, so it would seem that we have a nice reductio argument on our hands.
Similarly, if we have fully rejected the Consistency Assumption, then we should have no qualms about the infinite number of contradictory theorems that a ground-level contradiction will produce. What we as dialecticians should really be focused on is limiting the spread of explosion, which is already taken care of by the paraconsistent nature of the logic.
This is a complex issue, and I don’t pretend to have resolved it here. But, it is my considered view that an adequate dialectical logic will include the LNC as a theorem. Make no mistake about it, those dialectical logics which do not include the LNC are most interesting indeed and certainly much more adequate than Classical Logic, but in my mind they don’t go far enough.
Monday, March 16, 2020
The theoretical structure of current mainstream philosophizing, namely classical logic combined with the Reference Theory, is seriously inadequate as a basis for animal rights theory. For the primary point of concern in animal rights theory is duly respecting all those items that are the bearers of certain highly intensional mental states, such as preference, sentience, perception, belief, etc. All such phenomena require worlds-analysis for their semantical evaluation.
Classical logic, with its characteristic one-world semantical basis in the Reference Theory (i.e. all semantical evaluation is grounded in reference to existent items in a unique actual world) does not have the tools for such worlds-analysis. Indeed, the extensional and existential basis of classical logic can at best only provide a foundation for animal welfare theory. This is because animal welfare theory, and the hedonistic utilitarian ethic which underlies it, is concerned solely with the reduction of suffering, which can be fully evaluated without worlds-analysis; specifically through verificationist means.
But true animal rights theory, whether it be driven by a deontic or an ideal utilitarian ethic, requires cross-world evaluation. We cannot be satisfied with the crude extensional methods of the animal welfare movement.
It might be thought that a worlds theory such as modal realism might do the trick here. (Modal realism, for those who are unaware, is the contemporary version of the atomistic, many-worlds doctrine of Democritus and Leucippus.) Now even though modal realism is much to be preferred to mainstream theorizing, it too is inadequate for our purposes. For modal realism allows only for a quite restricted class of worlds, namely consistent and complete possible worlds, with such worlds taken to be existent. But the characteristic mental states in animal rights theory are all highly intensional, meaning that the worlds required for semantic evaluation must extend far beyond the possible. Limiting ourselves to the resources of standard modal realism will erase crucial distinctions needed in semantical evaluations. Such distinctions can only be duly accounted for by appealing to inconsistent and incomplete worlds, in addition to radically anarchic open worlds.
So with all that being said, what are our options here? It would seem that there are three theories on offer which have the requisite structure to provide a sound foundation for animal rights theory. The first of these is extended modal realism (EMR). EMR is a worlds-theory which adds impossible worlds to the complete and consistent worlds of standard modal realism. And even though this is not really discussed by extended modal realists, one can add open worlds to the theory as well.
Like standard modal realism, EMR takes these impossible worlds to be existent, Democritean aggregates. But most important for our purposes, they have the requisite structure for semantical evaluation of the mental states at issue in animal rights theory. Thus we can indeed use these rather strange Democritean worlds to provide a metaphysical foundation for animal rights theory.
The second option is noneism. Readers of this blog will no doubt be quite aware of what noneism is, but to quickly recap, noneism in this context gives standing to all worlds (possible, impossible, open), but unlike EMR, worlds under noneism are not Democritean aggregates (rather, they are proper objects unto themselves) and they are not taken to be existing objects. Thus, for the noneist, all nonactual worlds are nonexistent.
The third option is trivialism. Trivialism is a theory recently propounded by Paul Kabay, but which has roots in some of the Pre-Socratics, such as Anaxagoras. Trivialism quite simply is the theory that all propositions are true. It works as a foundation for animal rights theory because the trivialist automatically has all the needed worlds machinery at his disposal. Note also that trivialism is more expressive than EMR and noneism; indeed, it includes these theories as proper parts (while also not including them at all, as expected).
So, when it comes to providing semantical evaluations for such mental states as preference, sentience, belief and the like, we can go with EMR, noneism, or trivialism. Any of these is certainly adequate for the job at hand. But, and this is the crucial point, which of the three is the best foundation will not be settled by ethical considerations. Rather, we will need to appeal to outside considerations (such as adequacy to the data, and the standard constraints on theory choice). It is no secret that I fall firmly within the camp of noneism. But those of us doing work in animal rights must make a choice either among these three theories, or something along the same lines. (Or indeed, if we are feeling particularly adventurous, we can try to formulate a completely new theory.) But as should be clear, clinging to mainstream classical theory or to insufficient worlds-theories like modal realism can only lead to failure in the end.
Wednesday, February 19, 2020
As I have adumbrated in previous posts, I am a firm Radical Noneist. Radical Noneism, for those who don't know, is a philosophical theory originally formulated by Richard Sylvan that combines Noneism with Dialetheism. Radical Noneism is the closest thing we have to a true theory of everything, and is certainly superior in many respects to all the other such theories on offer, but the version that Sylvan originally presented was incomplete. This is because Sylvan never managed to solve the Characterization Problem; which, to put it very briefly, is the problem of how to provide a Characterization Postulate for objects that is epistemologically adequate while at the same time avoiding any metaphysically untoward consequences (such as providing Ontological Arguments for the existence of round squares, etc.).
However, Graham Priest in his masterwork Towards Non-Being was finally able to put together the last pieces of the puzzle, for he was able to formulate an Unrestricted Characterization Postulate which solves the Characterization Problem. This was previously thought impossible, which is why the previous Characterization Postulates on offer were all restricted in various ways, appealing either to differences in property-types or to differences in the predication of properties. Priest, however, was able to provide us with a successful Unrestricted Characterization Postulate by relativizing all characterizing descriptions to worlds.
I think that this was a massive step in the right direction, but I still do not think that Priest has quite gone far enough. The main reason has to do with a central feature of his version of Radical Noneism: namely, existence-entailments. On Priest's Radical Noneist theory, existence-entailments function much like meaning-postulates in Carnap's semantics: namely, from the fact that a certain property or set of properties appears in an object's characterizing description, we can infer that said object must exist (at some world or other). The properties which ground such existence-entailments are what Priest calls "existence-entailing properties". Just which properties are existence-entailing is a question that Priest leaves open (as it is not central in the formal semantics), but he believes that all causal properties are existence-entailing.
My worry here is that the very notion of existence-entailing properties seems very likely to undercut the primary motivation one would have to be a Noneist. This is because it is the Ontological Assumption itself that underlies the idea of existence-entailments. So I worry that if we take this feature on board in our semantics, then we come dangerously close to falling back into the Reference Theory, which is precisely what Noneism is designed to overthrow.
Indeed, with his notion of existence-entailments, Priest has at least partially rejected the Independence Thesis. To recap, the Independence Thesis is the idea that objects can possess properties independently of their existential status. This gives rise to the key Meinongian claim that nonexistent objects can truly possess properties. Priest indeed thinks that nonexistent objects can possess properties, but he generally relegates these properties to a rather small class; prime examples of these being logical properties, status properties, and being the object of intentional properties.
So, in contradistinction to existence-entailments, I suggest that we follow a proposal made by Richard Sylvan in his late essay entitled "Re-Exploring Item Theory". In essence, Sylvan was gesturing towards a theory wherein we have two different Characterization Postulates: namely, a restricted CP for all the actual worlds, and an unrestricted CP for all the worlds in toto. Thus at the actual worlds we apply the CP restricted to Characterizing Properties, while we continue using the unrestricted CP when considering all the worlds as a whole. In effect, this gives us the best of both worlds; for we can have the power of the unrestricted CP at our disposal, while at the same time holding fast to the Independence Thesis.
So how might this look? I believe the answer is quite simple: if we have a description consisting only of characterizing properties, then we can conclude that some object exemplifies all of these properties in some actual world. But if a description contains some non-characterizing properties, then we conclude that some object possesses all of these properties in some non-actual world.
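As a toy illustration of the rule just stated (the property inventory and names here are entirely hypothetical, my own sketch rather than Sylvan's formalism):

```python
# Dual-CP rule: a description built only from characterizing properties
# places its object at some actual world; any non-characterizing
# property (e.g. 'existent') pushes the object to a non-actual world.
CHARACTERIZING = {'round', 'square', 'golden', 'mountain'}  # hypothetical inventory

def world_of(description):
    if set(description) <= CHARACTERIZING:
        return 'some actual world'
    return 'some non-actual world'

print(world_of(['round', 'square']))              # some actual world
print(world_of(['existent', 'round', 'square']))  # some non-actual world
```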
The idea is simple enough, but one might worry about a possible issue regarding identity. For imagine that we have the following description, viz. "The existent round square". It follows on this theory that some object possesses all of these properties in some non-actual world. But what properties does this object possess in all the actual worlds? For we also have another description to consider; namely "The round square". On our theory it follows that some object possesses these properties in some actual world. So what should we say about the existent round square in this circumstance?
Well, there are at least two moves we can make here. Firstly, we could just use some kind of variable-domain semantics, and say that the existent round square is not in the domains of any of the actual worlds. This would no doubt solve the problem, but one might legitimately wonder why nonexistent objects should not appear in the domains of actual worlds. Thus, we can also make another move; namely, we can hold on to a constant-domain semantics, but we can affirm that, if an object's concomitant description contains non-characterizing predicates, then the object possesses only the characterizing predicates in said description at actual worlds. More concretely expressed, this would mean that the object described by "The existent round square" possesses only the properties of roundness and squareness at actual worlds.
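On the constant-domain move, the object's actual-world endowment can be pictured as a simple filter on its description (again a hypothetical sketch of my own, not a piece of the formal semantics):

```python
# At actual worlds the object keeps only the characterizing predicates
# from its description; the rest are possessed only at non-actual worlds.
CHARACTERIZING = {'round', 'square'}  # hypothetical inventory

def properties_at_actual_worlds(description):
    return [p for p in description if p in CHARACTERIZING]

print(properties_at_actual_worlds(['existent', 'round', 'square']))
# ['round', 'square']
```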
Now, those who hold to a Neo-Lockean variant of Meinongianism will object to this and claim that this notion makes The Round Square and The Existent Round Square identical at all the actual worlds, but Radical Noneism does not define identity on Leibnizian grounds, so this is not a problem for us. Indeed, since The Round Square and The Existent Round Square do not possess the same properties at all the same worlds, they are not identical.
As a final criticism of this view, some might say that the theory propounded lacks the theoretical simplicity of Priest's Noneism, since we have two characterization postulates at work here. This is undoubtedly true, but I believe that the loss of simplicity is more than made up for by our adherence to the Independence Thesis, which I take to be an absolutely fundamental logico-metaphysical truth.
So all in all, I believe that the Routleyan Noneist theory we have advocated in this essay will prove more fruitful than Priest's Noneism. Indeed, by having two characterization postulates at our disposal, the theory is not only more richly expressive, but has the further benefit of holding onto the most distinctive Meinongian theses; viz. Unrestricted Characterization, Unrestricted Freedom of Assumption, and a truly substantive Independence Thesis.