Sunday, October 17, 2021

Meinong, Russell, and the Medievals

The Austrian philosopher Alexius Meinong formulated a wide-ranging metaphysical and semantic theory intended to address many issues of fundamental importance, particularly issues to do with existence, reference, and intentionality. His theory affirmed a number of theses, some of which had antecedents in the history of philosophy, and others which were truly radical. Some of the central theses are:

Every thought has an object which is the target of that thought.

Objects need not exist.

We can make true claims about nonexistent objects.

An object possesses its characterizing properties irrespective of whether it exists.


Meinong's theory contains a number of other distinctive theses as well, but these are the most important for our purposes here.

In his well-known criticism of Meinong's theory, Bertrand Russell claimed that Meinong lacked a "robust sense of reality". Russell here was drawing upon an old idea that goes all the way back to Parmenides. Put simply, Parmenides claimed that for any true statement we make, the objects which that statement is about must actually exist. Two corollaries of this claim are that an object can possess properties only if it actually exists, and that, therefore, all objects are actually existent objects. Parmenides recognized this and did not shrink from making these affirmations.

In their modern guise, these Parmenidean theses give rise to the logical inference known as Existential Generalization (EG). Put briefly, EG says the following:

If we have a true sentence about an object x, then we can conclude that x actually exists.
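In symbols (my own bare-bones regimentation; the existence predicate 'E!' is a standard device in the contemporary literature, not Russell's or Meinong's own notation):

     EG (classical): Fa / (∃x)Fx
     EG (Meinongian replacement): Fa, E!a / (∃x)Fx

On the Meinongian rule, a truth about the object a is not by itself enough; an explicit existence premise must be supplied before we may generalize to an existentially loaded conclusion.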

In Parmenides' theory and the classical logic which derives from it, EG is universally valid. But it is not so in Meinong's theory. Indeed, for Meinong we can only ever conclude that an object x exists if we have a premise which asserts the actual existence of x.

Russell, and the Parmenidean tradition he is drawing upon, is appealing to an understanding of philosophy which goes back to the very beginnings of the subject; namely, that philosophy must deal with what is really real. And indeed, it seems we can only do that by focusing our sights on actual existence.

Now with regard to Russell's charge against him, Meinong replied in an equally dismissive manner by accusing Russell of displaying a "prejudice in favor of the actual". For Meinong, it was just obvious that not every object is an actually existent object, and therefore that EG is not universally valid. 

Meinong, just like Russell, is drawing upon a time-honored conception of philosophy, namely the notion that philosophy must be absolutely unrestricted in its application. In effect, no stone must be left unturned. Or to put it differently, no object must be off-limits to philosophical analysis. Both of these ideas, viz. that philosophy must be fundamentally concerned with actual existence and that it must not limit the objects to which it can be applied, seem very sensible. But it would appear that they are fundamentally in tension with one another.

As far as the contemporary debate goes, we find ourselves in much the same scenario, with seemingly little hope of finding a way out. But in this post I would like to explore an alternative route we might take, one which is rooted in the work of the medievals. For I think medieval logic affords us a possible middle road between the Russellian and the Meinongian positions, and this might allow the two camps to find some common ground.

But before we begin, we need to briefly cover some important points from medieval logical theory. 

In medieval logic, every proposition (i.e. declarative sentence) is composed of two terms linked together by a copula. Oversimplifying a bit, a term is a word or phrase which refers to something outside of itself. Take the familiar proposition: Socrates is mortal. Here the two terms are 'Socrates' and 'mortal', while the copula is the word 'is'.

We mentioned how terms refer to something outside of themselves. For the medievals, it was not always strictly determined just what it was that a term referred to. This could change depending upon the proposition it was a part of. And here the medievals introduced a technical term: 'supposition'. Put briefly, supposition is a property of terms which determines the objects to which they refer within the context of a proposition.

Let's consider an example. Imagine we have the following propositions:


     A. Man is a rational animal.

     B. "Man" is three-lettered. 

These propositions both share a common term; namely, 'man'. But upon inspection, it is clear that 'man' in A refers to something very different from what it refers to in B. And thus we say that in A 'man' supposits for the species Homo sapiens, while in B 'man' supposits for an English word. The other terms in the proposition allow us to determine the object for which the term supposits; for only a species is the kind of object which can be a rational animal. Likewise, only a word is the kind of object which can have three letters. Indeed, the familiar notion from grammar school of context clues is helpful here. For if we are unsure which object a term refers to in a proposition, we need only use the clues provided by the rest of the proposition to answer that question.

Now as we can see, the terms in both A and B are linked together by an 'is' copula, and this is important. For in medieval logic, if we have a proposition in which both terms are linked together by a simple 'is' copula, then such a proposition can only be true if both terms supposit for actually existing objects. Or to put it more explicitly:


"For any proposition of the form 'X is Y', both X and Y must be actually existing objects if the proposition is to be true."

At first glance it seems as if the medieval logician agrees with Russell, and is merely using some novel terminology. But this is not so, for there is another important facet of medieval logical theory: the notion of ampliation. Put simply, ampliation is a process whereby the supposition of terms is expanded in some way or other. Consider the following proposition:


     C. Woolly mammoths were mammals.

Clearly C is true, but not because woolly mammoths are actually existing objects. For they are long extinct, and thus they no longer exist. So how can this proposition be true? It is precisely the notion of ampliation which explains how.

Let's analyze the proposition to see how it works. As before, we have two terms; this time they are 'woolly mammoths' and 'mammals'. But now we have a new copula, viz. 'were'. This is the crucial difference; for this new copula is telling us that the terms of the proposition can now supposit not only for what actually exists, but also for what did exist in the past. Thus, 'woolly mammoths', as it occurs in C, supposits not for actually existing woolly mammoths (of which there aren't any), but for woolly mammoths that existed in the past.

That is precisely what ampliation does: it expands the objects for which a term can supposit in a proposition. Can ampliation be pushed further? Yes indeed, for consider this proposition:


     D. Space elevators will be useful tools.

Suppose D is true. This clearly cannot be because of any present or past space elevators; such things do not currently exist and have never yet existed. Rather, it is true because of the space elevators that will exist in the future. As expected, it is the 'will be' copula which is doing the work here. For this new copula ampliates the supposition of the term 'space elevators' to include not only presently and past existing space elevators (of which there aren't any), but also future space elevators.

We've covered a lot of ground already, but let's push further. Consider now this proposition:


     E. Dragons can be larger than elephants.

E seems perfectly true. But dragons never have existed, do not currently exist, and plausibly never will exist. Nevertheless, dragons could possibly have existed, and that is precisely the key here. For the 'can be' copula ampliates the supposition of the term to include not only present, past, and future dragons (again, of which there aren't any), but also dragons which could possibly have existed.

Now for most of the medievals, this was as far as they were willing to go. In their eyes, terms in a proposition could only supposit for things that are, were, will be, or can be. But a few radicals went even further. For these logicians, terms could potentially supposit for objects which could not possibly exist. 

A favorite example used in this context is a chimera. Today we imagine a chimera to be a type of creature with a lion's head, a goat's body, and a snake's tail. But the medievals supposed chimeras to be a type of creature which is at one and the same time a lion, a goat, and a snake. Clearly, it is impossible for such a creature to exist. Nevertheless, consider the following proposition:


     F. Chimeras are imagined to be monstrous creatures.

That seems perfectly true, but chimeras are necessarily nonexistent. Thus, it would appear that the 'are imagined to be' copula allows terms to supposit for impossible objects. Indeed, the few medievals who went this far took such copulae as 'is imagined to be', 'is conceived to be', 'is understood to be', etc., similarly to allow terms to supposit for impossible objects.

So now that we understand all of that, how does it apply to the dispute between Russell and Meinong? Well, let's consider what each of them would have said about supposition.

To do this, let us construct a list of copulae:


     1. "is"

     2. "was"

     3. "will be"

     4. "can be", "could be", "may be", etc.

     5. "is imagined to be", "is conceived to be", "is understood to be", etc.

Let us call a proposition with an 'is'-copula a type-1 proposition, one with a 'was'-copula a type-2 proposition, and so on down the line. (As a reminder, most medievals would not have recognized type-5 propositions as their own category, and would instead have subsumed them under type-4 propositions. But we will include this more radical medieval view because it makes it much easier to accommodate Meinong's position.)

Now, assuming that Russell would be comfortable with speaking of supposition (a dubious assumption indeed, but let us make it anyway), what would he say about the supposition of the terms in any given type of proposition? The answer should be clear: he would affirm that the terms in propositions of any of the five types must supposit only for actually existing objects. Indeed, that is just a restatement of the rule EG, using medieval logical terminology.

But what would Meinong say? His view is just as radical as Russell's, but from the opposite extreme; for Meinong would say that the terms in any type of proposition can supposit for any type of object whatsoever. So no matter what type of proposition we are considering (even a type-1 proposition), its terms could potentially supposit for all sorts of nonexistent, and indeed even impossible, objects.

As we have seen already, the medievals erected a middle ground between these two extremes. For they believed that the terms in different types of propositions supposit for different types of objects. Let us lay their view out in tabular form like so:


     Type-1 --> what is

     Type-2 --> what is or what was

     Type-3 --> what is, what was, or what will be

     Type-4 --> what is, what was, what will be, or what can be

     Type-5 --> what is, what was, what will be, what can be, or what cannot be
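To make the pattern concrete, here is a toy computational model of this table (entirely my own illustration, not anything from the medieval texts): each copula type ampliates supposition to a wider class of objects.

     # A toy model of ampliation: each copula type widens the class of
     # objects a term may supposit for. The domain labels are illustrative.
     AMPLIATION = {
         1: {"present"},                                       # 'is'
         2: {"present", "past"},                               # 'was'
         3: {"present", "past", "future"},                     # 'will be'
         4: {"present", "past", "future", "possible"},         # 'can be'
         5: {"present", "past", "future", "possible",
             "impossible"},                                    # 'is imagined to be'
     }

     def can_supposit(object_status, copula_type):
         # A term may supposit for an object just in case the object's
         # modal status falls within the copula's ampliated range.
         return object_status in AMPLIATION[copula_type]

     print(can_supposit("past", 2))        # True: 'Woolly mammoths were mammals'
     print(can_supposit("impossible", 1))  # False: type-1 demands actual existence
     print(can_supposit("impossible", 5))  # True: 'Chimeras are imagined to be...'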

It should be clear now: where Russell and Meinong both treat every kind of proposition alike, the medievals make subtle distinctions between cases. And it would also appear that the medievals' approach to this question is far more commonsensical. For it would seem quite improper to arbitrarily restrict all five types of propositions to only actual objects; and yet, it might also seem too careless to allow any kind of proposition, even those of type-1, to apply to even impossible objects. The medievals' view can allay both of these worries.

And now we can see how the problem mentioned earlier (viz. how can we reconcile the view that philosophy must be concerned with actual existence with the view that philosophy must not limit the objects to which it can be applied) can be resolved. For when philosophers deal with the fundamental nature of the world they will spend most of their time dealing with type-1 propositions, and these, according to the medievals, deal only with actual existence (which satisfies what the Russellians are after). But when philosophers are dealing with questions regarding the limits of the world, the bounds of possibility, or the objects of human thought, they will need to employ propositions of types 2-5. And in these cases they will be dealing with broader classes of objects, up to and including those which could never exist (and thus satisfying the desire of the Meinongians).

So in closing, I should like to say two things. Firstly, if we limit ourselves to the machinery of contemporary symbolic logics (whether classical or non-classical), the Russellian and the Meinongian will always be at loggerheads. But if we appeal to the technical machinery provided by medieval logic, then they can truly find some common ground. Secondly, I hope to have provided a glimpse of how useful the work of the medievals can be in illuminating contemporary problems in logical theory. I sincerely hope this inspires more people to discover for themselves the marvelous riches of medieval logic.

Sunday, May 23, 2021

Environmentalist Speciesism

In our time of ecological crisis, we see two movements trying to tackle the problem head-on: the environmentalist movement and the animal rights movement. Though these movements are certainly not equivalent, both have seen that something is seriously wrong with the current world order, and both seek to attack the current problems at their source. This being the case, it might seem eminently reasonable to propose that the two movements should work together for a common cause. However, this is mistaken; for as I will attempt to demonstrate, the modern environmentalist movement is speciesist at its core. As vegans, we have a worldview which is entirely incompatible with the environmentalist movement, making any prospect of a shared common cause very grim indeed.

This will surely appear to be a bold claim, so let's start off with a brief lay of the land. By 'animal rights activists', I mean vegans who are pushing for the end of all animal exploitation. Central pillars of this are opposition to hunting, animal agriculture, the use of animal skins and body parts for human benefit, and animal experimentation. By 'the environmentalist movement', I mean the mainstream activists who are attempting to curtail the causes and effects of climate change and who seek to promote environmentally sustainable ways of living.

Stated quite plainly, it is my thesis that the animal rights movement and the environmentalist movement are fundamentally irreconcilable because the environmentalist movement supports just those exploitative practices that the animal rights activists are seeking to eliminate.

Let's consider the question of hunting. From the animal rights perspective, all individual sentient beings have a fundamental right to life, and since hunting is a violation of the right to life, hunting is an immoral act. This is standard fare in the animal rights movement, but when we turn to the environmentalist movement, we find another approach entirely. Granted, we are all aware of the work that such notable groups as the Sierra Club and the World Wildlife Fund have done to combat illegal poaching, but we should not make the mistake of concluding that the environmentalist movement opposes hunting full-stop.

For instance, The Conservation Fund works toward saving open lands for recreational hunting. Furthermore, such notable groups as the Sierra Club and the World Wildlife Fund (WWF) openly promote hunting. The Sierra Club claims "hunting and fishing is defensible only when it is managed in a way that benefits wildlife and ecosystems," while the World Wildlife Fund "accepts or supports hunting in a very limited number of contexts where it is culturally appropriate, legal and effectively regulated, and has demonstrated environmental and community benefits".

We can see, therefore, that these environmentalist organizations do not believe nonhuman animals have an inalienable right to life, for they are perfectly willing to allow humans to hunt them for food or recreation just so long as this doesn't damage the environment. This is quite plainly an example of speciesism; for if human interests are permitted to override the interests of nonhuman animals, then humans and nonhuman animals are not on equal footing.

The troubles don't stop there though. Quite apart from supporting hunting, such organizations as the WWF also support animal agriculture, just so long as it is performed 'sustainably' and 'responsibly'. It goes without saying that the animal rights movement is fundamentally opposed to the animal agriculture industry, because it violates numerous inalienable rights, not the least of which are the rights to life and liberty. So for us as vegans and animal rights activists, to suggest that there can exist a form of animal agriculture which is either sustainable or responsible is anathema. 

Furthermore, to suggest that the interests of nonhuman animals can be supplanted by human culinary interests is obviously speciesist. And need it also be said that if we are permitted to eat nonhuman animals, this can only be because we have ownership of them?

On a related point, the vegan movement is also wholly opposed to the various industries which use animal skins and body parts for practical purposes. Now it is only natural to think the environmentalist movement would likewise be opposed to this, seeing as there doesn't seem to be any sensible environmental benefit that could arise out of such use. This is quite mistaken though. For a rather shocking example, take a look at this recent initiative by Nova Scotia's Kejimkujik National Park Seaside. Here we have, in no uncertain terms, an environmentalist organization which is quite willing to exploit the body parts of an invasive crab species for the purpose of making an environmentally friendly alternative to plastic. Even supposing that following this plan of action would create better outcomes for the environment, it is wholly objectionable from the animal rights perspective, since the interests of these crabs are being violated.

But these are not at all the worst things the environmentalist movement has done against the interests of animals. For the most damning action the major environmental organizations have taken is to directly support animal experimentation. The major culprits in these atrocities are the Environmental Protection Agency (EPA) and the WWF, but many other such organizations have also participated. Some experiments that such organizations are involved in include chemical tests, neurotoxicity tests, and endocrinological tests (to name a few). These experiments are often directly funded and sometimes even conducted by the organizations themselves. (To research this in further detail, please visit the Animal Ethics webpage on this subject.)

It goes without saying that the animal rights movement is fundamentally opposed to animal experimentation. So the fact that the major environmentalist organizations are in support of this practice demonstrates that the two movements are directly opposed. To treat animals as means to an end in experiments is a profound affront to their interests. 

So to conclude, it is my considered view that any collaboration between the animal rights movement and mainstream environmentalism has very grim prospects, and is therefore ultimately undesirable. But clearly vegans must have a plan for dealing with the environmental crisis, since merely avoiding the use of animal products is insufficient. In my view, the best option here is the use of advanced technologies both to reduce wild animal suffering and to heal the environment. In this way, we can directly address the ecological crisis while respecting the rights of individual animals.

Monday, January 18, 2021

On the Characterization Postulate

I often mention the characterization postulate on this blog, and since it is such an important tool in object theories of all sorts, I figure it is time to provide a brief overview of just what it is supposed to be.

So why do object theorists need a characterization postulate (CP)? The answer is that such a postulate is required for the epistemological adequacy of object theory. Existent objects do not pose a problem, because we can discover their properties by extensional means, i.e. through the use of empirical evidence. But such a procedure is for the most part unavailable to us when it comes to nonexistent objects. I say 'for the most part' because we can indeed discover some properties of nonexistent objects through such means as dreams or hallucinations, but these procedures are not at all exhaustive.

So what we require is a logical, i.e. a priori, means of discerning the properties of nonexistent objects, and this is where the CP comes in. In essence, the CP is a logical tool which allows us to do just that. To express how important it is to object theory: the CP appears quite early in the process of logical construction. Indeed, once we have added descriptors to zero-order logic, we can already bring the CP into play (but we needn't go into the technical details of that here).

But I should note that I have been writing as if the CP is one unique thing. This, however, is untrue; for we have many different CPs. The most natural one is the Unrestricted Characterization Postulate (UCP). This runs as follows:

UCP: An object has exactly those properties it is characterized as having.

This is quite natural and does a lot of work. Indeed, it is surely the first CP that comes to mind for the object theorist, and it is no doubt used in much argument and informal reasoning. But unfortunately, the UCP cannot be true tout court. This is for one very simple but devastating reason: namely, it allows us to prove the existence of any object whatsoever.

Consider the following object: the existent non-self-identical spider-eyed lamb. Let's call this object L. By the UCP, it follows that L is existent. But it is obvious that L is nonexistent, since it violates the law of identity. Therefore, the UCP is false.
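To regiment the argument (my own formalization; the descriptor notation here is merely illustrative):

     Let L = the x such that: E!x & ~(x = x) & x is spider-eyed & x is a lamb
     1. E!L & ~(L = L) & ...    (by the UCP, from L's characterization)
     2. So E!L.                 (from 1)
     3. But everything is self-identical, so nothing satisfies ~(x = x).
     4. So L is nonexistent.    (from 3)

Lines 2 and 4 contradict one another; and by varying the characterization, the same maneuver 'proves' the existence of any object we please.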

Now one might think we can get around this problem by somehow weakening our logic, by analogy to how we can avoid the paradoxes of naive set theory by weakening the underlying logic. But this option is not available to us, for the problematic consequences of the UCP do not depend upon any axioms or inference rules; rather, they depend only upon the presence of descriptors (i.e. term-forming operators like 'a', 'an', 'the', etc.). And since eliminating descriptors from our logic is completely out of the question, we must look elsewhere for solutions.

So it is clear the UCP doesn't work. What is the object theorist to do? Well, he could very well just persevere with the UCP. In effect, he would have to use heuristic rules in order to avoid the untoward consequences. No doubt this can be done (and indeed there is an analogy here with how many textbooks for classical logic make use of naive set theory, even though naive set theory paired with classical logic leads to triviality), but such a route puts the object theorist on unsure logical footing.

Thus, it would seem that a better option would be to suitably restrict the CP. A radical restriction is what we might call the Existential Characterization Postulate (ECP). This is as follows:

ECP: If an object exists, then it has exactly those properties it is characterized as having.

The ECP is surely true and quite unobjectionable. Indeed, it is true under mainstream philosophical theories such as empiricism, idealism, and materialism. But for a full object theory the ECP will not do. For it is both far too restrictive (in that it tells us nothing about nonexistent objects) and technically redundant (since we already have empirical means at our disposal for discerning the properties of existent objects). So we will need to look elsewhere to find a CP that does some real work.

One way to do so is by expanding the ECP to what we might call the Possibilist Characterization Postulate (PCP). This runs as follows:

PCP: If an object is possible, then it has exactly those properties it is characterized as having.

This will no doubt appear quite attractive to philosophers of a rationalist persuasion. But while it might seem to be a real advance upon the ECP (since now we are able to do real work in discerning the properties of nonexistent objects), this is merely illusory. For in one sense the PCP is too restrictive, while in another sense it is far too permissive.

Let us first consider how it is too restrictive, by referring back to our old friend L. What does the PCP tell us about this object? Nothing at all, because L is an impossible object, and the PCP tells us only about possible objects. Now of course, the rationalist object theorist won't actually consider this a weakness, since for him no object is impossible. But the advantages that consideration of impossible objects brings (which are too numerous to go into fully here, but which include such benefits as a resolution of the semantic paradoxes) make this, in my opinion, an unacceptable stance to take.

Secondly, the PCP is too permissive because it still allows for the unacceptable ontological arguments mentioned earlier, although now only as restricted to possible objects. For the existent golden mountain (call it M) is certainly a possible object. So by the PCP, M exists. Indeed, it seems that something like the PCP is at work both in Descartes' ontological argument and in the principle of plenitude (viz. the notion that every possible object exists).

Of course, we can duly restrict the PCP, leading to what we might call the Qualified Possibilist Characterization Postulate (QPCP), which runs as follows:

QPCP: If an object is possible and does not exist, then it has exactly those properties it is characterized as having.

The QPCP certainly gets rid of the untoward ontological consequences of the PCP, but it is still too restrictive. The classical rationalist who wants to avoid the ontological argument and the principle of plenitude will no doubt rest easy with it. But I think we can do better.

Now, instead of restricting the CP by applying it only to certain types of items (as the previous postulates do), we can restrict it in other ways too. One quite natural way is by applying it only to certain types of properties. A familiar distinction among object theories is that between nuclear and extranuclear properties. In brief, nuclear properties are the ordinary properties of individuals: just those features which delineate what we might call the 'nature' or 'essence' of an object. Extranuclear properties do not do this; alternatively, we might say that nuclear properties apply directly to the object, whilst extranuclear properties in some sense depend upon the object's nuclear properties.
Such a distinction may appear ad hoc to some, but it actually has a clear pedigree within the philosophical tradition; compare Kant's distinction between determining and non-determining predicates, or the Frege-Russell distinction between first-level and second-level functions.

Perhaps the simplest way to lay out this distinction is to list some examples. Standard nuclear properties include such garden-variety properties as 'red', 'tall', 'kicked', 'walked', etc. Extranuclear properties include such things as ontological properties ('existent', 'nonexistent'), logical properties ('is consistent', 'is inconsistent'), status properties ('is contingent', 'is impossible'), and converse intentional properties ('is thought about by Larry', 'is dreamed of by Ron').

With this distinction in mind we can now formulate a Nuclear Characterization Postulate (NCP), delineated as:

NCP: An object has only the nuclear properties it is characterized as having.

It is clear that the NCP allows us to completely avoid the problem of being able to simply define objects into existence (since existence is an extranuclear property), and it is also expansive enough to account for impossible objects. So as a theoretical device the NCP is quite attractive, but it does have its own problems. The first is that it leads to untoward consequences concerning relations between existent and nonexistent objects. Consider the fact that Sherlock Holmes lives at 221B Baker Street. By the NCP, Holmes inhabits 221B Baker Street. But 221B Baker Street is an existent object, and it was never inhabited by Holmes; it is verifiable through empirical means that it never had Sherlock Holmes as a resident.

For a natural way around this difficulty, we can formulate a Qualified Nuclear Characterization Postulate (QNCP), as follows:

QNCP: An object has only the one-place nuclear properties it is characterized as having.

Naturally, the QNCP requires that we have some means at our disposal for reducing multi-place predicates to one-place properties. There are several ways to do this, and we needn't go into the technicalities here. But suffice it to say, even if Holmes has the one-place property 'inhabits-221B-Baker-Street', it does not follow that 221B Baker Street has the one-place property 'is-inhabited-by-Sherlock-Holmes', since one-place predicates generally do not imply other one-place predicates unless we have suitable axioms or meaning postulates in place.
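For instance (a standard device, not anything specific to any one object theory), we can generate two distinct one-place properties from a single relational claim by plugging an argument place:

     Inhabits(h, b)  -->  [inhabits-b], a one-place property predicated of h
                     -->  [is-inhabited-by-h], a one-place property predicated of b

The QNCP characterizes Holmes with the first property only; nothing in the postulate forces us to attribute the second to the actually existing 221B Baker Street.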

But, as should be no surprise by now, there is yet a further problem lurking in the background, and indeed it is a problem facing all the previous postulates: namely, how are we to distinguish between such objects as 'the round square' and 'the existent round square'? The QNCP does not tell us whether these characterizing descriptions denote separate objects or one and the same object. One route we can take is to simply delete the extranuclear property 'existent' from the second characterization, and conclude that both descriptions denote one and the same object.

But we can also avoid the problem by means of a new and expanded characterization postulate, which we might call the Suppositional Characterization Postulate (SCP). This is as follows:

SCP: An object has the one-place nuclear properties it is characterized as having, and for every extranuclear predicate P it is characterized as having, it presents itself as having P.

The idea in its fleshed-out form is due to Routley, but it has roots going all the way back to Meinong's notion of 'watered-down' properties. Essentially, what we are doing here is systematically producing nuclear analogues of extranuclear properties. We can easily see how the above problem is then solved: the existent round square presents itself as existing, while the round square does not.

It seems that we might have pushed the characterization postulate as far as it will go. The SCP doesn't appear to run into the kinds of untoward consequences which the previous CPs ran into, and at first glance it appears that we cannot extend it any further without running into the triviality problem of the UCP. But that is actually not the case, for there is indeed a CP that is equal in scope to the UCP but which does not run into triviality. This is the Qualified Characterization Postulate (QCP). It runs as follows:

QCP: An item has all the properties it is characterized as having at some world or other.

The QCP really does all the work which the UCP tries to do, except that work is made logically tractable through worlds semantics. It is important to note that the worlds in use here are not restricted to the possible worlds of modal semantics; rather, the QCP makes full use of ultramodal worlds, such as incomplete, inconsistent, and open worlds. (We could very well restrict it to possible worlds only, and thus we would have a modalized version of the PCP. Jaakko Hintikka seems to have had just such an idea. But I would still say that this is far too restrictive.) Note also how it solves the triviality problem: we can indeed run an ontological argument to prove the existence of any item, but that does not mean we have proven that the item exists at an actual world. Indeed, the item might very well exist only at impossible worlds, and thus it would still be nonexistent at actual worlds.
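Here is a toy model of that resolution (my own sketch; the world labels and helper names are illustrative, not Priest's or Routley's formal semantics):

     # Each world maps objects to the set of properties they have there.
     worlds = {"actual": {}, "impossible-1": {}}

     def characterize(obj, properties, world):
         # The QCP: a characterized object gets its characterizing
         # properties at SOME world -- possibly an impossible one.
         worlds[world][obj] = set(properties)

     def has_property(obj, prop, world):
         return prop in worlds[world].get(obj, set())

     # 'The existent round square' gets all its properties, 'existent'
     # included -- but only at an impossible world.
     characterize("existent round square", {"round", "square", "existent"}, "impossible-1")

     print(has_property("existent round square", "existent", "impossible-1"))  # True
     print(has_property("existent round square", "existent", "actual"))        # False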

So that is where our journey ends. To be sure, we have skipped over some CPs one can find in the literature, but these are generally quite technical and beyond the scope of this post. Now we face an important question: which CP should we use? Object theorists have given different answers to this question throughout the centuries. Meinong held to something like the NCP. Neo-Lockean object theories like that of Parsons tend more towards the QNCP. Classical item theory employs the SCP. Priest and Berto's 'Modal Meinongianism' uses the QCP.

But there's no a priori reason why we should use only one CP; we can indeed use a variety of them, as the circumstances dictate. Indeed, this is the idea behind the pluralized item theory in Routley's later work, i.e. different sorts of CPs apply to different sorts of worlds. For instance, the SCP might apply at actual worlds, the PCP at possible worlds, and the UCP at some impossible worlds (with triviality no longer being a problem, since we should expect some impossible worlds to be trivial). In fact, under this approach the QCP becomes redundant, seeing as our plurality of CPs can do everything the QCP can. Indeed, it can do even more, since now we can determine which properties nonexistent objects have at actual worlds, a question Modal Meinongianism leaves unanswered (this is why Priest and Berto have to appeal to existence-entailing properties, as we've discussed in a previous post).

So as we can see, the Characterization Postulate is a deep and fascinating aspect of object theories that is worth careful study. There is much more to be said about the topic, but now is as good a stopping point as any.

Sunday, August 30, 2020

Meinong's Critique of Idealism


In this post I should like to briefly present Alexius Meinong's critique of idealism. Since this critique is not well known among philosophers, I think it will be most fruitful to present it in a succinct way. In addition, I should also like to present some counter-arguments to Meinong's critique, with a view ultimately to steelmanning it and placing it on sturdy ground.

I should note at the outset that the idealism with which I will be concerned in this post is metaphysical idealism, which is the view that reality is at bottom mental. This thought is usually cashed out with the expression that reality is essentially composed of ideas. Let's get right to it and present the argument in deductive form, before unpacking it in greater detail:

P1: If idealism is true, then everything is an idea.
P2: All ideas are existing objects.
C1: Therefore, if idealism is true, everything is an existent object. (P1,P2)
P3: But some objects do not exist.
C2: Therefore, idealism is not true. (C1,P3)
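In symbols (my own regimentation, with 'Id' for 'idealism is true', 'I(x)' for 'x is an idea', and 'E!x' for 'x exists'):

     P1: Id -> (∀x)I(x)
     P2: (∀x)(I(x) -> E!x)
     C1: Id -> (∀x)E!x     (P1, P2)
     P3: ~(∀x)E!x          (some objects do not exist)
     C2: ~Id               (C1, P3, modus tollens)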

P1 is just a description of idealism, so it needn't detain us. P2 should be fairly uncontroversial, at least to the philosophical mainstream; it has been standard doctrine throughout the history of the subject that ideas and minds are existing objects. With both of these premises on board, C1 follows by chaining them together.

P3 is where the trouble lies. For it is a corollary of C1 that such objects as unicorns exist. Now, one might think that this can't be correct, because the idea of a unicorn is surely different from a unicorn itself. But this distinction is not available to the idealist, since he considers all of reality to be composed of ideas. So for the idealist, the idea of a unicorn just is a unicorn. And since the idea of a unicorn is an existent object, a unicorn must also be an existent object.

But surely unicorns don't exist? Maybe some idealists would be willing to bite the bullet here and say that unicorns actually do exist. But there is further trouble in store. For we also have an idea of the non-self-identical round square, and this idea is an existing object. But as we said earlier, the idea of the non-self-identical round square just is the non-self-identical round square. So, under idealism, the non-self-identical round square exists. But it is a truth of reason that such an object cannot possibly exist. Therefore, C2 follows by modus tollens.

Now, of course, there is a way for the idealist to counter this argument: all he need affirm is that all ideas are nonexistent objects. Then he would be in no danger of affirming the existence of such preposterous objects as the non-self-identical round square. It would now appear that the idealist is on safe ground.

But this is only an appearance; for the above argument need only be slightly tweaked to deal with this new variant of idealism. To wit:

P1: If idealism is true, then everything is an idea.
P2: All ideas are nonexistent objects.
C1: Therefore, if idealism is true, everything is a nonexistent object. (P1,P2)
P3: But some objects do exist.
C2: Therefore, idealism is not true. (C1,P3)

I don't think we need to spend too much time exploring how this new argument is supposed to work. For under the new variant of idealism, such commonplace objects as trees, dogs, and chairs would be counted as nonexistent objects. This might not seem objectionable to one with a proclivity towards mereological nihilism, but even the mereological nihilist affirms the existence of fundamental particles, and the idealist of this variety must deny the existence of even these. So it would seem that this new variant of idealism runs into serious difficulties as well.

But what if the idealist wanted to allay the criticism by distinguishing between existent and nonexistent ideas? This sounds good at first glance, but it also runs into many problems. For one thing: how are we supposed to make this distinction in a reliable way? One possibility is to count those ideas that are actually thought of as the existent objects, and all those ideas that are not thought of as the nonexistent objects. But this runs into two difficulties. Firstly, all the problematic objects mentioned earlier (viz. unicorns, the non-self-identical round square, etc.) have actually been thought of by many minds, so these ideas must be counted as existent objects. Secondly, I'm not sure that the notion of an 'idea that no mind has thought of' is even coherent. For surely an idea just is something thought of by a mind; so what sense is there in supposing that some ideas are not thought of by any mind?

Perhaps we might want to effect this partition by holding that all perceptual ideas are existent objects, while all conceptual ideas are nonexistent objects. (Briefly, an idea is said to be perceptual if it is based in some way on the senses, and conceptual if it is based merely upon the relations of concepts, with no attendant sense-data.) But this doesn't work either, for many clearly nonexistent objects stand in perceptual relations. For instance, many people have dreams or hallucinations of such fantastic beasts as unicorns, and I would not be surprised at all if some people have had perceptual experiences of such absurd objects as the non-self-identical round square.

So to conclude, it would appear that idealism falls into the same sorts of problems as many of the mainstream metaphysical theories on offer (including empiricism, materialism, phenomenalism, reism, process philosophy, etc.). And that is the trap of reductionism. In other words, all of these philosophies try to reduce the panoply of items in reality down to a single type of item, call it X. But this invariably leads to trouble, because X will inevitably have various properties that cannot possibly apply across the board. It is my studied opinion (and I wholeheartedly agree with Meinong on this) that an adequate metaphysical theory will not be reductionist, and will have to account for all the various types of objects in reality.

Wednesday, May 13, 2020

Reflections on Death, Immortality, and the Meaning of Life

We often hear it said that death plays a central role in providing our lives with meaning. The idea here is either that immortality is a curse of some kind, or that a life without death is a life with no sense of urgency to complete our personal projects. But quite apart from the fact that there is rarely anything by way of solid argument given in support of this assertion, it leads to some quite unusual consequences. I have thought about a couple of these recently, and I would like to explore them in the following reflections.

To begin, let us imagine that we have what appears to be a perfectly normal human; let's call him Todd. Todd was born and grew up into adulthood under fairly unremarkable circumstances. But there is one remarkable fact about Todd: he appears to have an indefinite lifespan. More particularly, he remains in a continuous state of early adulthood for centuries or perhaps millennia on end. Yet current medical science can find no explanation for why he seems to possess eternal youth. Indeed, for all we know, Todd may well be immortal.

On the other hand, it may very well be that Todd will indeed die of 'old age' at some indeterminate point in the future, for he could just be undergoing the exact same aging process, just at a much slower rate than the average human. So, with all the best evidence we have at our disposal, it is indeterminate whether Todd is really immortal.

Now with all that being said, we must ask ourselves: does Todd's life have meaning? Perhaps it is not fair to pose the question in such a broad manner, so let us be more specific. For any age T, where T is an age far greater than that of the longest-lived humans to date, does Todd's life have meaning at T? Suppose T is 200 years. Is Todd's life meaningful at T? Maybe the great majority of us would be willing to concede that it is.

So let us now increase T to something like 500 years. Is Todd's life still meaningful? If the answer is still 'Yes', then let us increase T yet again. If we keep following this procedure, one of two outcomes will take place: either we will change our answer to 'No' at some sufficiently high T, or we will always answer 'Yes', no matter how high T is.

Let us consider the first possibility. If we do change our minds at some sufficiently high T, then we would be suggesting that T is the limit age for a meaningful life. So if Todd is closely approaching T, then he will soon be faced with living an indefinitely long life of complete and utter meaninglessness. What should he do when faced with this information? Should he perhaps take the drastic measure of committing suicide just when T arrives, thereby assuring that he has lived a meaningful life? But how can it be required of someone to commit suicide to ensure that they have lived a meaningful life?

If he doesn't go down this road, then should he just fall into despair and resign himself to a life of meaninglessness, even if there will surely be an endless variety of potential endeavors to which he could apply himself? But the most important question of all here is: how exactly can we non-arbitrarily determine the limit age for a meaningful life?

But let us suppose that we don't go down this road, and that we will always answer 'Yes' when asked if Todd's life is meaningful at any arbitrarily high T. If this is the path we take, then we have literally said nothing else than that an indefinitely long life can be meaningful, and thus we have rejected the notion that death is necessary for a meaningful life. So those who defend the idea that death is needed for a life to be meaningful seem to be stuck between a rock and a hard place.

We might also add that if an indefinitely long life is a meaningless life, then it follows that AI programs cannot have meaningful lives (taking the term 'life' in this context to mean something like 'existence' or 'conscious experience', since AI programs clearly don't have biological lives). Since AI programs can be implemented on a variety of physical devices, and since copies of them can easily be produced, they quite literally have indefinitely long lives. Should we then say that their lives are doomed to be meaningless? Surely AI programs, especially as physically implemented in robots, can engage in a multitude of meaningful endeavors. The simple fact that such meaningful endeavors do not have a foreseeable end doesn't seem, at first glance, to automatically write off the possibility that their lives can be meaningful.

Somewhat related to this point: if mind uploading becomes a viable possibility in the near future, then human minds will be in exactly the same situation as AI programs. For once we have the ability to upload our minds onto computers and to make backup copies of them, our lives will be indefinitely long. Should we then say that our lives can only remain meaningful if our minds are encoded in biological brains? But why should biology be so closely connected to a meaningful life?

So to conclude, I don't view the notion that death is necessary for a meaningful life as being self-evident. In essence, what I would like to see in defense of this notion is at least something by way of decent argument. But quite apart from that, I would like to see more discussion on possible ways to live an indefinitely long, yet meaningful life. 

Friday, May 1, 2020

Notes on the Synthetic A Priori

I would like to explore some thoughts on the synthetic a priori that I have been having as of late. But before we dive in, let us first make sure that we understand exactly what synthetic a priori propositions are supposed to be. This is important not only for the purposes of this article, but also because such propositions play an important role in metaphysics more generally.

The idea comes from the work of Immanuel Kant. To wit, he drew two distinctions: between a priori and a posteriori knowledge, on the one hand, and between analytic and synthetic judgments, on the other. Simply put, a priori knowledge is any knowledge derived independently of experience; prime examples are mathematics and logic. A posteriori knowledge is that which is gained through experience; obvious examples come from the natural sciences.

Analytic judgments are propositions in which the predicate is conceptually contained in the subject. One example of this would be "All 3-dimensional bodies occupy space". If we understand the terms used, then we can see at once that the predicate "occupies space" is part of the meaning of the term "3-dimensional body". It follows at once that in an analytic judgment the predicate does not add any new information. Synthetic judgments are propositions in which the predicate is not conceptually contained in the subject. An example of this would be "All tigers are located on earth." We can see that in a synthetic judgment the predicate does indeed add new information.

Having this in mind, the question of how these two dichotomies are related naturally suggests itself. It seems clear that there are analytic judgments that are known a priori; purely conceptual propositions about the meanings of terms provide an obvious example. It is equally apparent that there are synthetic judgments that are known a posteriori, with the empirical propositions of the natural sciences being examples of these. But can there be synthetic judgments that are known a priori? That is to say, can there be judgments in which the predicate adds new information, but which can be known independently of experience? As Kant first adumbrated, this question is really the question concerning the possibility of metaphysics in general, since metaphysics proposes to be a purely a priori discipline that provides us with new information about ultimate reality. (Of course, by 'metaphysics' here I mean metaphysics as first philosophy, and not the new naturalistic metaphysics that is now in vogue.)

Now I am not interested for the moment in answering this particular question, so I will just take it as a given that there can be such propositions. What I am interested in exploring is, given that we do have such propositions, what are the various possible grounds for coming to acquire them? In what follows I will attempt to categorize the different possible ways of obtaining this knowledge.

To begin, we should note that empirical investigation does not provide a sufficient ground for synthetic a priori propositions, for empirical investigation can only ever provide us with a posteriori knowledge. Nor does the characterization postulate work, for it only ever provides us with analytic a priori judgments about the nature of objects. But one obvious ground is that which Kant himself provided, namely the transcendental intuitions of sensibility and the categories of understanding. In this way, synthetic a priori judgments are grounded in the very structure of the human mind.

I think another possible ground is the Cartesian doctrine of clear and distinct ideas. To wit, we can gain access to synthetic a priori truths through an intuitive grasp of their content; the idea being that we can tell immediately, using nondiscursive methods, that certain synthetic propositions are apodeictic, thus delineating them as synthetic a priori truths. We can appeal to the example of intuitionism in ethics here.

Perhaps another way is the doctrine of anamnesis, or recollection, familiar from Plato's work. Under this doctrine, we first gain knowledge of synthetic a priori truths prior to our births by means of some form of sensuous experience. After our births we retain some faint memory of these experiences, and they can be uncovered through various means (whether mystical, rational, or otherwise).

Finally, there is divine revelation. Under this model, a deity or group of deities uses some means or other to directly inform us of synthetic a priori truths, and the very quality of such revelations provides epistemological assurance of their truth. Such revelations can come in a variety of forms, with scriptural inclusion and theophany being obvious examples. The divine revelation approach is also of interest because it provides some very intriguing connections between philosophy and theology.

That's all I have for this post. I just find this a perennially interesting topic, and I wanted to be sure to record my current thinking on the matter. Please be sure to let me know if I have missed out on any other possible methods.

Wednesday, March 25, 2020

Meditations on Dialectical Logic I: Should the Law of Non-Contradiction be a Theorem of Dialectical Logic?

(This will be the first in a series of posts which will deal with various aspects of Dialectical Logic).

The question to consider is whether the Law of Non-Contradiction (hereinafter the LNC) should be a theorem of a dialectical logic. Before we begin, let us have some preliminary understanding of what a dialectical logic is supposed to be. In what follows, we will understand a dialectical logic to be any logic that is paraconsistent, simply inconsistent, and contradictorial. Allow me to explain what these terms mean:

1. A paraconsistent logic is any logic which does not contain the Spread Rule, viz. A & ~A / B. I prefer the term 'Spread Rule' here because, as we will see, there are some dialectical logics which include EFQ as a theorem, viz. (A & ~A) -> B. It is quite reasonable to expect a dialectical logic to be paraconsistent, since if it weren't, we would be led at once to Trivialism.

2. A simply inconsistent logic is one which includes theorems of the form A & ~A. Thus, we might also say that a simply inconsistent logic is one wherein there are theorems which are both true and false at the same time and in the same respect.

3. A contradictorial logic is any inconsistent logic which has the Adjunction rule, viz. A, ~A / A & ~A. This precludes a number of paraconsistent logics, such as the non-adjunctive systems and preservationist logics, from being dialectical logics, since these systems only allow for contradictions distributed across separate theses; conjoined contradictions on these systems immediately explode.

So, with that being said, which paraconsistent logics can count as dialectical logics? Well, those would be the Logics of Formal Inconsistency (LFIs), the many-valued paraconsistent systems, and the deep relevant logics. Clearly, all of these logics are both paraconsistent and contradictorial, and they all can perfectly well be simply inconsistent. The many-valued paraconsistent systems are simply inconsistent by design (due to the inclusion of a paradoxical truth-value), but we can ensure the simple inconsistency of the LFIs and the deep relevant systems by including a determinate contradictory thesis, such as p & ~p, among the axioms.

So we have our categorization of dialectical logics; now we need to get clear about what exactly our question is. What do we mean by the LNC? For the purposes of this essay, we will be considering the LNC in its syntactical formulation, i.e. we will be asking whether ~(A & ~A) should be a theorem of dialectical logic.

So to begin, let us consider the reasons why someone might think the LNC should not be a theorem. Newton da Costa, one of the pioneers of paraconsistent logic, included the absence of the LNC as one of the adequacy criteria for a dialectical logic. If we're ready to countenance some sentences of the form A & ~A, then it does at first glance seem reasonable to conclude that ~(A & ~A) should not be part of our dialectical logic. When we dig into the motivation behind this worry, the operating assumption seems to be that negation must function radically differently under dialectical logic.

What is more, if we are particularly interested in providing formal analyses of, for example, Hegelian dialectics, meaning we want to adhere as closely as we can to what the man himself thought, then it might seem only natural that we should reject the LNC as a theorem. For Hegel himself explicitly rejects this principle in the Science of Logic; so shouldn't a formalized Hegelian dialectical logic also reject the LNC? It is a similar story for formalizing Buddhist logic. For, as we have discussed in previous posts, the Catuskoti explicitly rejects both the LNC and the Law of Excluded Middle. So it seems that a dialectical logic without the LNC would be the right tool to use in this scenario as well.

There is also a third argument we can give. Namely, as dialecticians we might be concerned with limiting the number of contradictions in our theory. For if we do have the LNC as a theorem, then for any contradictory thesis of the form A & ~A, we will have another contradictory thesis of the form (A & ~A) & ~(A & ~A). But since this is a new contradictory thesis, we will have yet another thesis of the form ((A & ~A) & ~(A & ~A)) & ~((A & ~A) & ~(A & ~A)), and so on, ad infinitum. One might find this result objectionable, and thus rejecting the LNC as a theorem would be a natural way to contain it.

Now let us consider the reasons why a dialectician might want to include the LNC in his logic. The first and most obvious reason is that we want to ensure that the contradictions we are making true are actual contradictions, and the best way to do this is to ensure that the negation in our logic is a contradictory-forming operator. To make this more concrete, consider the familiar square of opposition. Recall that in the traditional square, the diagonal corners form a contradictory relationship, typically explained as the impossibility of the opposite corners having the same truth value in the same way at the same time; or, in symbols, ~(A & ~A). This is as solid an understanding of contradiction as one is going to find. So if we want to include contradictory theses in our system, we had better ensure that such theses really are contradictory.

But there is also a second reason why we should include the LNC as a theorem. Namely, the thought that including theses of the form A & ~A in our logic necessitates a rejection of ~(A & ~A) is nothing more than the Consistency Assumption. More precisely, it embodies the belief that if we have accepted a certain thesis A, then we are thereby obliged to reject ~A. But this sort of reasoning is just what we rejected in formulating a dialectical logic, so it would seem that we have a nice reductio argument on our hands.

Similarly, if we have fully rejected the Consistency Assumption, then we should have no qualms with the infinite number of contradictory theorems that a ground-level contradiction will produce. What we as dialecticians should really be focused on is limiting the spread of explosion, which is already taken care of by the paraconsistent nature of the logic.

So with all that being said, which logics among the LFIs, the deep relevant systems, and the many-valued paraconsistent systems fit our criteria for an adequate dialectical logic? Well, the LFIs can be immediately ruled out, since one of the central features of these logics is that the LNC is not a theorem. On the other hand, the many-valued paraconsistent systems (LP, RM3, A3) all count as adequate dialectical logics under our criteria. What about the deep relevant systems? There are many such logics, and not all of them meet our standards. The crucial feature of the ones that do is the inclusion of the LEM (A v ~A) as an axiom, from which the LNC can then be derived.
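As a quick sanity check on the claim about LP, here is a small brute-force verifier (my own sketch, using the standard LP truth tables with values T, B, F, where T and B are designated; the function names are illustrative):

     # LP's three truth values, with T and B designated ('B' = both true and false).
     VALUES = ("T", "B", "F")
     DESIGNATED = {"T", "B"}
     RANK = {"T": 2, "B": 1, "F": 0}   # ordering used by & (min) and v (max)

     def neg(a):
         return {"T": "F", "B": "B", "F": "T"}[a]

     def conj(a, b):
         return min(a, b, key=RANK.get)

     def disj(a, b):
         return max(a, b, key=RANK.get)

     # The LNC ~(A & ~A) is a theorem of LP: designated on every valuation.
     print(all(neg(conj(a, neg(a))) in DESIGNATED for a in VALUES))  # True

     # So is the LEM (A v ~A), from which deep relevant systems derive the LNC.
     print(all(disj(a, neg(a)) in DESIGNATED for a in VALUES))       # True

     # But the Spread Rule A & ~A / B fails: take A = 'B' and B = 'F'.
     # The premise is then designated while the conclusion is not.
     print(conj("B", neg("B")) in DESIGNATED, "F" in DESIGNATED)     # True False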

This is a complex issue, and I don't pretend to have resolved it here. But it is my considered view that an adequate dialectical logic will include the LNC as a theorem. Make no mistake: those dialectical logics which do not include the LNC are most interesting indeed, and certainly much more adequate than Classical Logic, but in my mind they don't go far enough.
