Sunday, August 30, 2020

A Critique of Idealism

I would like to present a brief argument against idealism. The idealism with which I will be concerned in this post is metaphysical idealism, which is the view that reality is at bottom mental. This thought is usually cashed out with the expression that reality is essentially composed of ideas. Let's get right to it and present the argument in deductive form, before unpacking it in greater detail:

P1: If idealism is true, then everything is an idea.
P2: All ideas are existing objects.
C1: Therefore, if idealism is true, everything is an existent object. (P1,P2)
P3: But some objects do not exist.
C2: Therefore, idealism is not true. (C1,P3)
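For those who like to see such arguments mechanically verified, here is a small brute-force model check, purely as an illustration (the encoding and function name are my own): a "model" assigns a truth value to idealism, together with the properties of being an idea and of existing to each member of a small finite domain, and we search for a countermodel in which P1, P2, and P3 all hold while idealism remains true.

```python
from itertools import product

def valid(n=3):
    """Search for a countermodel over domains of size n; True if none exists."""
    for idealism in (True, False):
        for objs in product(product((True, False), repeat=2), repeat=n):
            # objs[i] = (is_idea, exists) for the i-th object
            p1 = (not idealism) or all(idea for idea, _ in objs)
            p2 = all(ex for idea, ex in objs if idea)
            p3 = any(not ex for _, ex in objs)
            if p1 and p2 and p3 and idealism:
                return False  # premises true, conclusion false: invalid
    return True

print(valid())  # True: no countermodel, so the argument is valid
```

The variant argument given later in the post can be checked the same way by swapping the roles of existence and nonexistence in P2 and P3.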

P1 is just a description of idealism, so it needn't detain us. P2 should be fairly uncontroversial, at least to the philosophical mainstream. It has been standard doctrine throughout the history of the subject that ideas and minds are existing objects. So with both of these premises on board, C1 follows by modus ponens.

P3 is where the trouble lies. For it is a corollary of C1 that such objects as unicorns exist. Now, one might think that this can't be correct, because the idea of a unicorn is surely different from a unicorn itself. But this distinction is not available to the idealist, since he considers all of reality to be composed of ideas. So for the idealist, the idea of a unicorn just is a unicorn. And since the idea of a unicorn is an existent object, a unicorn must also be an existent object.

But surely unicorns don't exist? Maybe some idealists would be willing to bite the bullet on this and say that unicorns actually do exist. But there is a further trouble in store. For we also have an idea of the non self-identical round square, which is an existing object. But as we said earlier, the idea of the non self-identical round square just is the non self-identical round square. So, under idealism, the non self-identical round square exists. But it is a truth of reason that such an object cannot possibly exist. Therefore, C2 follows by modus tollens.

Now, of course, there is a way for the idealist to counter this argument: all he need affirm is that all ideas are nonexistent objects. Then he would be in no danger of affirming the existence of such preposterous objects as the non self-identical round square. It would now appear that the idealist is on safe ground.

But this is only an appearance; for the above argument need only be slightly tweaked to deal with this new variant of idealism. To wit:

P1: If idealism is true, then everything is an idea.
P2: All ideas are nonexistent objects.
C1: Therefore, if idealism is true, everything is a nonexistent object. (P1,P2)
P3: But some objects do exist.
C2: Therefore, idealism is not true. (C1,P3)

I don't think we need to spend too much time exploring how this new argument is supposed to work. For under the new variant of idealism, such commonplace objects as trees, dogs, and chairs would be considered as nonexistent objects. This might not seem objectionable to one with a proclivity towards mereological nihilism, but even the mereological nihilist affirms the existence of fundamental particles. But the idealist of this variety must deny the existence of even these. So it would seem that this new variant of idealism runs into serious difficulties as well.

But what if the idealist wanted to allay the criticism by distinguishing between existent and nonexistent ideas? This sounds good at first glance, but this idea also runs into many problems. For one thing: how are we supposed to make this distinction in a reliable way? One possibility is to define those ideas that are actually thought of as the existent objects, and all those ideas that are not thought of as the nonexistent objects. But this runs into two difficulties. Firstly, all the problematic objects mentioned earlier (viz. unicorns, the non self-identical round square, etc.) have actually been thought of by many minds. So these ideas must be counted as existent objects. But quite apart from that, I'm not sure that the notion of an 'idea that no mind has thought of' is even coherent. For surely an idea just is something thought of by a mind; so what sense is there in supposing that some ideas are not thought of by any mind?

Perhaps we might want to effect this partition by holding that all perceptual ideas are existent objects, while all conceptual ideas are nonexistent objects. (Briefly, an idea is said to be perceptual if it is based in some way on the senses, while an idea is said to be conceptual if it is based merely upon the relations of concepts, with no attendant sense-data.) But this doesn't work either; for many clearly nonexistent objects stand in perceptual relations. For instance, many people have dreams or hallucinations of such fantastic beasts as unicorns, and I would not be surprised at all if some people have had perceptual experiences of such absurd objects as the non self-identical round square.

So to conclude, it would appear that idealism falls into the same sorts of problems as many of the mainstream metaphysical theories on offer (including empiricism, materialism, phenomenalism, reism, process philosophy, etc.) And that is the trap of reductionism. In other words, all of these philosophies try to reduce the panoply of items in reality down to a single type of item, call it 'X'. But this invariably leads to trouble because X will inevitably have various properties that cannot possibly apply across the board. It is my studied opinion that any adequate metaphysical theory will not be reductionist, and will have to account for all the various types of objects in reality.

Wednesday, May 13, 2020

Reflections on Death, Immortality, and the Meaning of Life

We often hear it said that death plays a central role in providing our lives with meaning. The idea here is either that immortality is a curse of some kind or that a life without death is a life with no sense of urgency to complete our personal projects. But quite apart from the fact that there is rarely anything by way of solid argument given in support of this assertion, it leads to some quite unusual consequences. I have thought about a couple of these recently, and I would like to explore them in the following reflections.

To begin, let us imagine that we have what appears to be a perfectly normal human, let's call him Todd. Todd was born and grew up into adulthood under fairly unremarkable circumstances. But there is one remarkable fact about Todd: he appears to have an indefinite lifespan. More particularly, he remains in a continuous state of early adulthood for centuries or perhaps millennia on end. But current medical science can find no explanation for his apparent eternal youth. Indeed, for all we know, Todd may well be immortal.

On the other hand, it may very well be that Todd will indeed die of 'old age' at some indeterminate point in the future, for he could simply be undergoing the same aging process as the rest of us, only at a much slower rate. So, with all the best evidence we have at our disposal, it is indeterminate whether Todd is really immortal.

Now with all that being said, we must ask ourselves: does Todd's life have meaning? Perhaps it is not fair to pose the question in such a broad manner, so let us be more specific. For any age T, where T is an age far greater than that of the longest-lived humans on record, does Todd's life have meaning at T? Suppose T is 200 years old. Is Todd's life meaningful at T? Maybe the great majority of us would be willing to concede that it is.

So let us now increase T to something like 500 years old. Is Todd's life still meaningful? If the answer is still 'Yes', then let us increase T yet again. If we keep following this procedure, one of two outcomes will take place: either we will change our answer to 'No' at some sufficiently high T, or we will always answer 'Yes', no matter how high T is.

Let us consider the first possibility. If we do change our minds at some sufficiently high T, then we would be suggesting that T is the limit age for a meaningful life. So if Todd is closely approaching T, then he will soon be faced with living an indefinitely long life of complete and utter meaninglessness. What should he do when faced with this information? Should he perhaps take the drastic measure of committing suicide just when T arrives, thereby ensuring that he has lived a meaningful life? But how can it be required of someone to commit suicide to ensure that they have lived a meaningful life?

If he doesn't go down this road, then should he just fall into despair and resign himself to a life of meaninglessness, even if there will surely be an endless variety of potential endeavors to which he could apply himself? But the most important question of all here is: how exactly can we non-arbitrarily determine the limit age for a meaningful life?

But let us suppose that we don't go down this road, and that we will always answer 'Yes' when asked if Todd's life is meaningful at any arbitrarily high T. If this is the path we take, then we have said nothing other than that an indefinitely long life can be meaningful, and thus we have rejected the notion that death is necessary for a meaningful life. So those who defend the idea that death is needed for a life to be meaningful seem to be stuck between a rock and a hard place.

We might also add that if an indefinitely long life is a meaningless life, then it follows that AI programs cannot have meaningful lives (if we use the term 'life' in this context to mean something like 'existence' or 'conscious experience', since AI programs clearly don't have biological lives). Since AI programs can be implemented on a variety of physical devices and since copies of them can be easily produced, they quite literally have indefinitely long lives. Should we then say that their lives are doomed to be meaningless? But surely AI programs, especially as physically implemented in robots, can engage in a multitude of meaningful endeavors. The simple fact that such meaningful endeavors do not have a foreseeable end doesn't seem at first glance to automatically write off the possibility that their lives can be meaningful.

Somewhat related to this point, if mind uploading becomes a viable possibility in the near future, then human minds will be in the exact same situation as AI programs. For once we have the ability to upload our minds onto computers and to make backup copies of them, then our lives will then be indefinitely long. Should we then say that our lives can only remain meaningful if our minds are encoded on biological brains? But why should biology be so closely connected to a meaningful life?

So to conclude, I don't view the notion that death is necessary for a meaningful life as being self-evident. In essence, what I would like to see in defense of this notion is at least something by way of decent argument. But quite apart from that, I would like to see more discussion on possible ways to live an indefinitely long, yet meaningful life. 

Friday, May 1, 2020

Notes on the Synthetic A Priori

I would like to explore some thoughts on the synthetic a priori that I have been having as of late. But before we dive into this, let us first make sure that we understand exactly what synthetic a priori propositions are supposed to be. This is important not only for the purposes of this article, but also because they play such an important role in metaphysics more generally.

The idea comes from the work of Immanuel Kant. To wit, he drew two distinctions: between a priori and a posteriori knowledge, on the one hand, and between analytic and synthetic judgments, on the other. Simply put, a priori knowledge is any knowledge derived independently of experience. Prime examples of this kind of knowledge would be mathematics and logic. A posteriori knowledge is that which is gained through experience. Obvious examples of this come from the natural sciences.

Analytic judgments are propositions in which the predicate is conceptually contained in the subject. One example of this would be "All 3-dimensional bodies occupy space". If we understand the terms used, then we can see at once that the predicate "occupies space" is part of the meaning of the term "3-dimensional body". It follows at once that in an analytic judgment the predicate does not add any new information. Synthetic judgments are propositions in which the predicate is not conceptually contained in the subject. An example of this would be "All tigers are located on earth." We can see that in a synthetic judgment the predicate does indeed add new information.

Having this in mind, the question of how these two dichotomies are related naturally suggests itself. It seems clear that there are analytic judgments that are known a priori; purely conceptual propositions about the meanings of terms provide an obvious example. It is equally apparent that there are synthetic judgments that are known a posteriori, with the empirical propositions of the natural sciences being examples of these. But can there be synthetic judgments that are known a priori? That is to say, can there be judgments in which the predicate adds new information, but which can be known independently of experience? As Kant first adumbrated, this question is really the question concerning the possibility of metaphysics in general, since metaphysics proposes to be a purely a priori discipline that provides us with new information about ultimate reality. (Of course, by 'metaphysics' here I mean metaphysics as first philosophy, and not the new naturalistic metaphysics that is now in vogue).

Now I am not interested for the moment in answering this particular question, so I will just take it as a given that there can be such propositions. What I am interested in exploring is, given that we do have such propositions, what are the various possible grounds for coming to acquire them? In what follows I will attempt to categorize the different possible ways of obtaining this knowledge.

To begin we should note that empirical investigation does not provide a sufficient ground for synthetic a priori propositions, for empirical investigation can only ever provide us with a posteriori knowledge. So too, the characterization postulate does not work either, for this only ever provides us with analytic a priori judgments about the nature of objects. But one obvious way is that which Kant himself provided; namely the transcendental intuitions of sensibility and the categories of understanding. In this way, synthetic a priori judgments are grounded in the very structure of the human mind.

I think another possible ground is the Cartesian doctrine of clear and distinct ideas. To wit, we can gain access to synthetic a priori truths through an intuitive grasp of their content; the idea being that we can tell immediately, using nondiscursive methods, that certain synthetic propositions are apodeictic, thus delineating them as synthetic a priori truths. We can appeal to the example of intuitionism in ethics here.

Perhaps another way is the doctrine of anamnesis, familiar from Plato's work. Under this doctrine, we first gain knowledge of synthetic a priori truths prior to our births by means of some form of sensuous experience. After our births we retain some faint memory of these experiences, and these can be uncovered through various means (whether that be mystical, rational, or otherwise).

Finally, there is divine revelation. Under this model, a deity or group of deities uses some means or other to directly inform us of synthetic a priori truths, and the very quality of such revelations provides epistemological assurance of their truth. Such revelations can come in a variety of forms, with scriptural inclusion and theophany being obvious examples. The divine revelation approach is also of interest because it provides some very intriguing connections between philosophy and theology.

That's all I have for this post. I just find this a perennially interesting topic, and I wanted to be sure to record my current thinking on the matter. Please be sure to let me know if I have missed out on any other possible methods.

Wednesday, March 25, 2020

Meditations on Dialectical Logic I: Should the Law of Non-Contradiction be a Theorem of Dialectical Logic?

(This will be the first in a series of posts which will deal with various aspects of Dialectical Logic).

The question to consider is whether the Law of Non-Contradiction, hereinafter the LNC, should be a theorem of a dialectical logic. Before we begin, let us have some preliminary understanding of what a dialectical logic is supposed to be. In what follows, we will understand a dialectical logic to be any logic that is paraconsistent, simply inconsistent, and contradictorial. Allow me to explain what these terms mean:

1. A paraconsistent logic is any logic which does not contain the Spread Rule, viz. A & ~A / B. I prefer to use the term "Spread Rule" here because, as we will see, there are some dialectical logics which include EFQ as a theorem, viz. (A & ~A) -> B. It is quite reasonable to expect a dialectical logic to be paraconsistent, since if it weren't, we would be led at once to Trivialism.

2. A simply inconsistent logic is one which includes theorems of the form A & ~A. Thus, we might also say that a simply inconsistent logic is one wherein there are theorems which are both true and false at the same time and in the same respect.

3. A contradictorial logic is any inconsistent logic which has the Adjunction rule, viz. A, ~A / A & ~A. This precludes a number of paraconsistent logics, such as non-adjunctive systems and preservationist logic, from being dialectical logics, since these systems only allow for distributive contradictory statements, while simple contradictions on these systems immediately explode.

So, with that being said, which paraconsistent logics can count as dialectical logics? Well, that would be the Logics of Formal Inconsistency (LFI), the many-valued paraconsistent systems, and the Deep Relevant Logics.

So we have our categorization of dialectical logics; now we need to get clear about what exactly our question is. What do we mean by the LNC? For the purposes of this essay, we will be considering the LNC in its syntactical formulation, i.e. we will be asking the question whether ~(A & ~A) should be a theorem of dialectical logic.

So to begin, let us consider the reasons why someone might think the LNC should not be a theorem. Newton Da Costa, one of the pioneers of paraconsistent logics, included this as one of the adequacy criteria for a dialectical logic. If we're ready to countenance some sentences of the form A & ~A, then it does at first glance seem reasonable to conclude that ~(A & ~A) should therefore not be a part of our dialectical logic. When we begin to dig into the motivation behind this worry, it seems that the operating assumption here is that negation must function radically differently under dialectical logic.

What is more, if we are particularly interested in providing formal analyses of, for example, Hegelian dialectics, meaning we want to adhere as closely as we can to what the man himself thought, then it might only seem natural that we should reject the LNC as a theorem. For Hegel himself explicitly rejects this principle in the Science of Logic, so shouldn’t a formalized Hegelian dialectical logic also reject the LNC? It is a similar story for trying to formalize Buddhist logic. For, as we have discussed in previous posts, the Catuskoti explicitly rejects both the LNC and the Law of Excluded Middle. So it seems that a dialectical logic without LNC would also be the right tool to use in this scenario as well.

There is also a third argument we can give. Namely, as dialecticians we might be concerned with limiting the number of contradictions in our theory. For if we do have the LNC as a theorem, then for any contradictory thesis of the form A & ~A, we will have another contradictory thesis of the form (A & ~A) & ~(A & ~A). But since this is a new contradictory thesis, we will have yet another thesis of the form ((A & ~A) & ~(A & ~A)) & ~((A & ~A) & ~(A & ~A)), and so on, ad infinitum. One might find this result objectionable, and thus rejecting the LNC as a theorem would be a natural way to contain it.
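Just to make the shape of this regress vivid, the sequence of theses can be generated mechanically. The following is a purely illustrative throwaway sketch (the function name is my own):

```python
def regress(formula="A & ~A", n=3):
    """Return the first n theses in the contradiction regress."""
    out = []
    for _ in range(n):
        out.append(formula)
        # each thesis spawns its own conjunction with its negation
        formula = f"({formula}) & ~({formula})"
    return out

for thesis in regress():
    print(thesis)
```

Each step doubles the formula (and then some), so the theses grow without bound.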

Now let us consider the reasons why a dialectician might want to include the LNC in his logic. The first and most obvious reason is that we want to ensure that the contradictions we are making true are actual contradictions; and the best way to do this is to ensure that the negation in our logic is a contradictory forming operator. To make this more concrete, let us consider the familiar example of the square of opposition. Recall that in the traditional square, the diagonal corners form a contradictory relationship, typically explained as the impossibility of the opposite corners having the same truth values in the same way at the same time; or, in symbols, ~(A & ~A). This seems to be as solid an understanding of contradiction as one is going to find. So if we want to include contradictory theses in our system, we had better ensure that such theses really are contradictory.
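Both points can be checked concretely in Priest's Logic of Paradox (LP), one of the many-valued paraconsistent systems counted above as dialectical. Encoding the three values as 0 (false only), 0.5 (both), and 1 (true only), with 0.5 and 1 designated, a brute-force check shows that ~(A & ~A) is a theorem of LP even though explosion fails:

```python
# A sketch of Priest's three-valued Logic of Paradox (LP).
VALUES = (0, 0.5, 1)
DESIGNATED = {0.5, 1}  # the truth-preserving values

def neg(a):
    return 1 - a

def conj(a, b):
    return min(a, b)

# The LNC, ~(A & ~A), is a theorem: designated under every valuation of A.
lnc_theorem = all(neg(conj(a, neg(a))) in DESIGNATED for a in VALUES)
print(lnc_theorem)  # True

# Yet explosion fails: at A = 0.5 and B = 0 the premise A & ~A is
# designated while the conclusion B is not, so A & ~A / B is invalid.
print(conj(0.5, neg(0.5)) in DESIGNATED, 0 in DESIGNATED)  # True False
```

So retaining the LNC as a theorem and rejecting the Spread Rule are perfectly compatible, which is just what the dialectician needs.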

But there is also a second reason why we should include the LNC as a theorem. Namely, the thought that the inclusion of theses of the form A & ~A in our logic necessitates a rejection of ~(A & ~A) is nothing more than the Consistency Assumption. More precisely, it embodies the belief that if we have accepted a certain thesis A, then we are thereby obliged to reject ~A. But this sort of reasoning is just what we have rejected in formulating a dialectical logic, so it would seem that we have a nice reductio argument on our hands.

Similarly, if we have fully rejected the Consistency Assumption, then we should have no qualms with the infinite number of contradictory theorems that a ground-level contradiction will produce. What we as dialecticians should really be focused on is limiting the spread of explosion, which is already taken care of by the paraconsistent nature of the logic.

This is a complex issue, and I don’t pretend to have resolved it here. But, it is my considered view that an adequate dialectical logic will include the LNC as a theorem. Make no mistake about it, those dialectical logics which do not include the LNC are most interesting indeed and certainly much more adequate than Classical Logic, but in my mind they don’t go far enough.

Monday, March 16, 2020

The Logico-Metaphysical Foundations of Animal Rights

I think it is fairly clear that animal rights theory is in severe need of logico-metaphysical foundations. Too often, we see discussion of this or that ethical argument, but with very little in the way of a decent basis for such arguments. If, as vegans, all we can do is advocate for certain ethical positions, but without being able to provide a sound basis for these positions, then it would appear that our efforts entirely lack a point. It is with a view to this matter that I direct the following reflections.

The theoretical structure of current mainstream philosophizing, namely classical logic combined with the Reference Theory, is seriously inadequate as a basis for animal rights theory. For the primary point of concern in animal rights theory is duly respecting all those items that are the bearers of certain highly intensional mental states, such as preference, sentience, perception, belief, etc. All such phenomena will require worlds-analysis for their semantical evaluation.

Classical logic, with its characteristic one-world semantical basis in the Reference Theory (i.e. all semantical evaluation is grounded in reference to existent items in a unique actual world), does not have the tools for such worlds-analysis. Indeed, the extensional and existential basis of classical logic can at best only provide a foundation for animal welfare theory. This is because animal welfare theory, and the hedonistic utilitarian ethic which underlies it, is concerned solely with the reduction of suffering, which can be fully evaluated without worlds-analysis, specifically through verificationist means.

But true animal rights theory, whether it be driven by a deontic or an ideal utilitarian ethic, requires cross-world evaluation. We cannot be satisfied with the crude extensional methods of the animal welfare movement.

It might be thought that a worlds theory such as modal realism might do the trick here. (Modal realism, for those who are unaware, is the contemporary version of the atomistic, many-worlds doctrine of Democritus and Leucippus). Now even though modal realism is to be much preferred to mainstream theorizing, it too is inadequate for our purposes. For modal realism allows only for a quite restricted class of worlds, namely consistent and complete possible worlds, with such worlds taken to be existent. But the characteristic mental states in animal rights theory are all highly intensional, meaning that the worlds required for semantic evaluation must extend far beyond the possible. Limiting ourselves to the resources of standard modal realism will erase crucial distinctions needed in semantical evaluations. Such distinctions can only be duly accounted for by appealing to inconsistent and incomplete worlds, in addition to radically anarchic open worlds.

So with all that being said, what are our options here? It would seem that there are three theories on offer which have the requisite structure to provide a sound foundation for animal rights theory. The first of these is extended modal realism (EMR). EMR is a worlds-theory which adds impossible worlds to the complete and consistent worlds of standard modal realism. And even though this is not really discussed by extended modal realists, one can also add open worlds to the theory as well.

Like standard modal realism, EMR takes these impossible worlds to be existent, Democritean aggregates. But most important for our purposes, they have the requisite structure for semantical evaluation of the mental states at issue in animal rights theory. Thus we can indeed use these rather strange Democritean worlds to provide a metaphysical foundation for animal rights theory.

The second option is noneism. Readers of this blog will no doubt be quite aware of what noneism is, but to quickly recap, noneism in this context gives standing to all worlds (possible, impossible, open), but unlike EMR, worlds under noneism are not Democritean aggregates (rather, they are proper objects unto themselves) and they are not taken to be existing objects. Thus, for the noneist, all nonactual worlds are nonexistent.

The third option is trivialism, a theory recently propounded by Paul Kabay, but which has roots in some of the Presocratics, such as Anaxagoras. Trivialism quite simply is the theory that all propositions are true. This works as a foundation for animal rights theory because the trivialist automatically has all the needed worlds machinery at his disposal. Note also that trivialism is more expressive than EMR and noneism; indeed, it includes these theories as proper parts (while also not including them at all, as expected).

So, when it comes to providing semantical evaluations for such mental states as preference, sentience, belief and the like, we can go with EMR, noneism, or trivialism. Any of these is certainly adequate for the job at hand. But, and this is the crucial point, which of the three is the best foundation will not be determined by ethical considerations. Rather, we will need to appeal to outside considerations (such as adequacy to the data, and the standard constraints on theory choice). It is no secret that I fall firmly within the camp of noneism. But those of us doing work in animal rights must make a choice either among these three theories, or something along the same lines. (Or indeed, if we are feeling particularly adventurous, we can try to formulate a completely new theory). But as should be clear, clinging on to mainstream classical theory or to insufficient worlds-theories like modal realism can only lead to failure in the end.

Wednesday, February 19, 2020

On Further Radicalizing Radical Noneism

As I have adumbrated in previous posts, I am a firm Radical Noneist. Radical Noneism, for those who don't know, is a philosophical theory originally formulated by Richard Sylvan that combines Noneism with Dialetheism. Radical Noneism is the closest thing we have to a true theory of everything, and is certainly superior in many respects to all the other such theories on offer, but the version that Sylvan originally presented was incomplete. This is because Sylvan had never managed to solve the Characterization Problem; which, to put it very briefly, is the problem of how to provide a Characterization Postulate for objects that is epistemologically adequate while at the same time avoiding any metaphysically untoward consequences (such as providing Ontological Arguments for the existence of round squares, etc.).

However, Graham Priest in his masterwork Towards Non-Being was finally able to put together the last pieces of the puzzle, for he was able to formulate an Unrestricted Characterization Postulate which solves the Characterization Problem. This was previously thought impossible, which is why the earlier Characterization Postulates on offer were all restricted in various ways and appealed to either differences in property-types or differences in predication of properties. Priest, however, was able to provide us with a successful Unrestricted Characterization Postulate by relativizing all characterizing descriptions to worlds.

I think that this was a massive step in the right direction, but I still do not think that Priest has quite gone far enough. The main reason has to do with a central feature of his version of Radical Noneism: namely, existence-entailments. On Priest's Radical Noneist theory, existence-entailments function much like meaning-postulates in Carnap's semantics; namely, from the fact that a certain property or set of properties appears in an object's characterizing description, we can infer that said object must exist (at some world or other). The properties which ground such existence-entailments are what Priest calls "existence-entailing properties". Just which properties are existence-entailing is a question that Priest leaves open (as it is not central in the formal semantics), but he believes that all causal properties are existence-entailing.

My worry here is that the very notion of existence-entailing properties seems very likely to undercut the primary motivation one would have to be a Noneist. This is because it is the Ontological Assumption itself that underlies the idea of existence-entailments. So I worry that if we take this feature on board in our semantics, then we come dangerously close to falling back again into the Reference Theory, which is precisely what Noneism is designed to overthrow.

Indeed, with his notion of existence-entailments, Priest has at least partially rejected the Independence Thesis. To recap, the Independence Thesis is the idea that objects can possess properties independently of their existential status. This gives rise to the key Meinongian claim that nonexistent objects can truly possess properties. Priest indeed thinks that nonexistent objects can possess properties, but he generally relegates these properties to a rather small class; prime examples of these being logical properties, status properties, and being the object of intentional properties.

So, in contradistinction to existence-entailments, I suggest that we follow a proposal by Richard Sylvan in his late essay entitled "Re-Exploring Item Theory". In essence, Sylvan was gesturing towards a theory wherein we have two different Characterization Postulates; namely, a restricted CP for all the actual worlds, and an unrestricted CP for all the worlds in toto. Thus at all the actual worlds, we apply essentially the CP which is restricted to Characterizing Properties, while we continue using the unrestricted CP when considering all the worlds as a whole. In effect, this gives us the best of both worlds; for we can have the power of the unrestricted CP at our disposal, while at the same time holding fast to the Independence Thesis.

So how might this look? I believe the answer is quite simple: if we have a description consisting only of characterizing properties, then we can conclude that some object exemplifies all of these properties in some actual world. But if a description contains some non-characterizing properties, then we conclude that some object possesses all of these properties in some non-actual world.
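The dual-CP rule just stated can be sketched in miniature as follows. The division of properties into characterizing and non-characterizing is a hypothetical illustration (the theory itself does not fix the lists), and the return values are merely labels for the kind of world at which the description is satisfied.

```python
# Illustrative property lists; the theory leaves their exact membership open.
CHARACTERIZING = {"round", "square", "golden", "mountain"}
NON_CHARACTERIZING = {"existent", "possible", "fictional"}

def world_of_satisfaction(description: set[str]) -> str:
    """Restricted CP: a purely characterizing description is satisfied by
    some object at some actual world. Unrestricted CP: any other
    description is satisfied at some non-actual world."""
    if description <= CHARACTERIZING:
        return "some actual world"
    return "some non-actual world"

# The round square is characterized purely by characterizing properties.
assert world_of_satisfaction({"round", "square"}) == "some actual world"
# The existent round square contains a non-characterizing property.
assert world_of_satisfaction({"existent", "round", "square"}) == "some non-actual world"
```

Note that, in keeping with Radical Noneism's tolerance of inconsistent objects, the sketch happily sends even the contradictory round square to an actual world, just as the text proposes.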

The idea is simple enough, but one might worry about a possible issue regarding identity. For imagine that we have the following description, viz. "The existent round square". It follows on this theory that some object possesses all of these properties in some non-actual world. But what properties does this object possess in all the actual worlds? For we also have another description to consider; namely "The round square". On our theory it follows that some object possesses these properties in some actual world. So what should we say about the existent round square in this circumstance?

Well, there are at least two moves we can make here. Firstly, we could just use some kind of variable-domain semantics, and say that the existent round square is not in the domains of any of the actual worlds. This would no doubt solve the problem, but one might legitimately wonder why nonexistent objects should not appear in the domains of actual worlds. Thus, we can also make another move; namely, we can hold on to a constant-domain semantics, but we can affirm that, if an object's concomitant description contains non-characterizing predicates, then the object possesses only the characterizing predicates in said description at actual worlds. More concretely expressed, this would mean that the object described by "The existent round square" possesses only the properties of roundness and squareness at actual worlds.

Now, those who hold to a Neo-Lockean variant of Meinongianism will object that this move makes The Round Square and The Existent Round Square identical at all the actual worlds. But Radical Noneism does not define identity on Leibnizian grounds, so this is not a problem for us. Indeed, since The Round Square and The Existent Round Square do not possess the same properties at all the same worlds, they are not identical.
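The identity point can be made concrete with a toy model of the constant-domain move described above. Everything here is an illustrative assumption: properties are strings, worlds are collapsed into a single actual/non-actual flag, and identity is checked by comparing world-indexed property profiles rather than Leibnizian indiscernibility at a single world.

```python
def properties_at(description: set[str], characterizing: set[str],
                  actual: bool) -> set[str]:
    """At actual worlds an object possesses only the characterizing
    properties in its description; at non-actual worlds it possesses
    every property in its description."""
    return description & characterizing if actual else description

CHARACTERIZING = {"round", "square"}

round_square = {"round", "square"}
existent_round_square = {"existent", "round", "square"}

# The two objects share all their properties at actual worlds...
assert properties_at(round_square, CHARACTERIZING, actual=True) == \
       properties_at(existent_round_square, CHARACTERIZING, actual=True)

# ...but differ at non-actual worlds, so on the world-indexed criterion
# of identity they are distinct objects.
assert properties_at(round_square, CHARACTERIZING, actual=False) != \
       properties_at(existent_round_square, CHARACTERIZING, actual=False)
```

This is just the reply to the Neo-Lockean in executable form: agreement at the actual worlds does not suffice for identity once identity is evaluated across all worlds.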

As a final criticism of this view, some might say that the theory propounded here lacks the theoretical simplicity of Priest's Noneism, since we have two Characterization Postulates at work. This is undoubtedly true, but I believe that the loss of simplicity is more than made up for by our adherence to the Independence Thesis, which I take to be an absolutely fundamental logico-metaphysical truth.

So, all in all, I believe that the Routleyan Noneist theory we have advocated in this essay will prove more fruitful than Priest's Noneism. Indeed, by having two Characterization Postulates at our disposal, the theory is not only more richly expressive, but it has the further benefit of holding onto the most distinctive Meinongian theses; viz. Unrestricted Characterization, Unrestricted Freedom of Assumption, and a truly substantive Independence Thesis.

Sunday, December 8, 2019

Noneism and Philosophy of Mind

Those who are familiar with the contemporary landscape of Analytic philosophy of mind are all too aware that nearly every school of thought (with one notable exception) holds steadfastly to a central dogma: that the mind, in whatever sense that term is to be understood, exists. To be sure, many of them disagree about the nature of the mind, with the Dualists claiming that it is an immaterial substance and the Identity Theorists claiming that it is identical to the physical brain, but they do not disagree over the central dogma.

However, one option that has very rarely been canvassed is the notion that the mind does not exist. This view fits quite nicely within a Noneist metaphysics, and as we have already indicated in previous posts, Noneism is a powerful theory that provides neat solutions to age-old philosophical problems. This being the case, it is only natural to suppose that Noneism would have something beneficial to say in the Philosophy of Mind. This is what I would like to explore in this post.

More specifically, I am interested in examining the 'Type-D Materialism' alluded to in the excellent paper by Paul Douglas Kabay titled "What's it like to be a Zombie? A New Critique of the Conceivability Argument for Dualism". To put it simply, Type-D Materialism is the view that the mind is a nonexistent object, but that, despite this, the mind can still do things like think, will, and have qualitative experience.

I mentioned earlier that one notable school of thought does not accept the central dogma of the Philosophy of Mind: the Eliminative Materialists. Eliminative Materialists hold both that the mind does not exist and that all mentalistic terms are about nothing at all. On the basis of terminology alone, it might appear that Type-D Materialism is just Eliminative Materialism by a different name, but the similarities are superficial. While both schools agree that the mind does not exist, the Eliminative Materialist understands this to mean that the mind is an illusion, and thus a mere Nothing. Consequently, the Eliminative Materialist proposes that we eliminate all terms having to do with the mind and with mental processes from our scientific and philosophical lexicons.

The Type-D Materialist, on the other hand, holds that even though the mind does not exist, it is still an object with a certain nature and with well-defined properties. Therefore, the Type-D Materialist is quite comfortable with utilizing folk psychological terminology. Thus, the mind for the Type-D Materialist is a definite Something, even though it does not exist.

Now, the thing that made me find Type-D Materialism particularly captivating is a notion adumbrated in that Kabay essay. Kabay points out that despite the consistent use of the term 'object' within Noneist philosophy, we must not fall into the trap of thinking that this "is to be understood in contrast to 'subject'." Incredibly, this is not something that I had ever considered before, yet the result follows quite naturally from a simple consideration of the Characterization Postulate. Say that we are considering some nonexistent object such as the character Abdul Alhazred from the Lovecraft stories. We know from the stories that Alhazred has the property of being the author of the Necronomicon. Since writing a book is an existence-entailing property, it follows that Abdul Alhazred exists in those worlds that realize the Lovecraft Mythos. But quite clearly, Abdul Alhazred is a human, and humans are certainly one type of sentient item. It follows thereby that there is something it is like to be Abdul Alhazred, in just the same way that there is something it is like to be you or me. Thus, even though Abdul Alhazred does not exist, he still has conscious experiences.

But things get really interesting when we consider the fact that in the Lovecraft worlds Abdul Alhazred almost certainly believes that he actually exists. But he would be mistaken in thinking this because the only things that actually exist are the things that exist in our world. And as Kabay aptly points out, if Alhazred can be mistaken about his existential status, how can we be so sure that we actually exist?

What is particularly interesting about Type-D Materialism is how it accounts for the law-like correlation between brain states and qualia. Kabay appeals quite directly to the Characterization Postulate to solve this problem. I can do no better than to quote him at length:

"On the version of physicalism that I am advocating, there is a purely physical world that consists, among other things, of our human bodies and brains. But included among the many non-existent items in which physicality is immersed is every possible state of consciousness, every instance of qualia. Some of these are completely disordered and in no way correlate with the brain states of human bodies. Some correlate with physical states other than brain states of humans.  There are, for example, qualia that correlate with neutrino states and others that correlate with the inputs and outputs of bacteria, and still others that do so with chunks of mountain. And there are qualia that correlate with nothing real at all. But among this plethora of non-existent qualia are those that correlate in a very law-like manner with the brain states of humans, and among those is a very small subset that correlate in exactly the manner that ours do: we are they."

In other words, qualia states, since they are properties of minds, must be included in any suitable Characterization Postulate. This being so, it is bound to be the case that some among these will match up perfectly with the goings-on of our physical world. To my mind, this is an astonishing piece of philosophical reasoning, for in effect what Kabay has done is to provide a sound resolution to the Interaction Problem from a purely Meinongian perspective.

Another thing that is extremely exciting about Type-D Materialism is that it can give a purely rational basis for an Animistic belief system. For if every possible stream of qualia states is represented in some object, then some of those objects will correlate precisely with nonhuman animals and the environment. With this in mind, Type-D Materialism can potentially make some very interesting connections to Pagan and Indigenous traditions.

But even more importantly, it can provide a key part of the comprehensive philosophical foundation that veganism desperately needs. For, if animals and the environment are the proper bearers of experiential states, then it seems perfectly sensible to conclude that we have direct duties toward them.

I would strongly encourage you all to read and consider the ideas presented in this article. I truly believe that they might contain the first inklings of a philosophical revolution.