Requiem for A Tarman

When thinking of ethics, there's always a problem that comes up, namely: given all the theories out there, how are we to decide what to do? Do we think of intentions or consequences, or what God wants us to do, or duties, or ourselves? There's this book out there called Don't Sweat the Small Stuff. The point of the book is that we shouldn't get so wrapped up in the trivial stuff that ultimately means nothing that we miss the really important things in life. Unfortunately for many of us, our lives don't get beyond the small stuff. This is why, I think, philosophers think up so many strange thought experiments. We get to think up big stuff and bounce it around. But thinking up thought experiments and setting up all the parameters can be quite time consuming. Besides, when you do, there's always some joker who wants to dispute the circumstances of your thought experiment. Fortunately for most of us, we don't have to think up anything. That's what movies are for.

There are plenty of philosophers who poo-poo the idea of using popular entertainment as a philosophical tool. The thought is that nothing of any use comes out of popular culture. This is simply not true. Whether we watch Disney movies, a buddy road-trip flick, or a romantic comedy, or we spend an evening with Truffaut, movies give us an ample glimpse of how philosophical theories work. Take, for instance, the movie The Terminator. At first glance, you've got an action/sci-fi flick. But if you look a little deeper, there are questions of artificial intelligence, determinism, time and time travel, and existential questions concerning the nature of Sarah Connor's personality (she goes from wimp to tough chick, or is that who she was all along, since Kyle Reese tells her that she's a fighter?). There's really a lot there — plus, it's fun to watch. Which can't be said about watching most philosophy professors give their lectures.
The late Majel Barrett-Roddenberry, wife of Star Trek creator, the late Gene Roddenberry, said that, back in the 1960s, network censorship was so tight that, in order to get his ideas across, her husband had to be subversive. Barrett-Roddenberry said, "censorship was so bad in those days, that if he could take things and switch them around a little bit, and maybe paint somebody green… he could get some of his ideas across". I'm not saying or even suggesting that philosophers be or are being subversive (lord knows what that would be like), but by looking at one situation/question/moral dilemma in one context or medium, we can see how it would work in another. That is to say, by watching a movie in which a certain situation takes place, we may be able to apply that fictional situation to real life.

Although they are easily dismissed as cinematic schlock, zombie flicks are especially useful in the area of ethics. For instance, we may consider life and death: what does it mean to be "alive"? What is death? By seeing how the undead are treated, we might be able to see how we treat those in our own world who are not quite dead or not quite living (people on life-sustaining machines, for instance). We can see how we treat those who are afflicted with certain brain disorders (ones that may produce mania or violent behavior) by looking at zombies. I was watching this movie called Automaton Transfusion a few days ago. While I was watching, I thought about how people treat the undead and how we would have to treat them if there were a real zombie plague in our world. After spending some time thinking about the question, I came to the answer that how we treat them depends on what kind of zombie we are dealing with. I thought that I would, for the sake of making the whole experiment worth considering in the first place, consider the zombies of George Romero and Dan O'Bannon. First, I agreed (with myself) that I would consider zombies people.
This is important, because it may determine whether zombies are morally considerable at all. If a zombie ceases to be a person and is simply nothing more than a rotting trash heap, that ends the experiment pretty much right there. I don't think that there are too many philosophers who would argue that we have a moral obligation to a pile of garbage. When I started thinking about it, I almost immediately thought of Peter Singer (I'm not sure exactly why). Singer takes Jeremy Bentham's view that the capacity to suffer makes one morally considerable. Bentham writes, "the question is not, Can they reason? nor, Can they talk? but, Can they suffer?" We know that the dead cannot be reasoned with (there are also many living humans who cannot be reasoned with), and that, with a few cinematic exceptions, none talk. But we haven't made it our habit to determine whether they suffer. According to Singer, this capacity is a prerequisite for having interests at all. If, Singer states, an object lacks the capacity for suffering, we need not include it in our consideration. Singer writes, "it would be nonsense to say that it was not in the interests of a stone to be kicked along the road by a schoolboy. A stone does not have interests because it cannot suffer. Nothing that we can do to it could possibly make any difference to its welfare". If we look at George Romero's pentalogy of "Dead" films, we see that his zombies are no more than moving meat. They do not feel, physically or otherwise. They are nothing more than self-propelled rocks. Using Bentham and Singer's criteria, we need not consider their welfare. This is in line with Singer's approach to individuals who are brain-dead or in a persistent vegetative state. People with those conditions are individuals who have no conscious quality of life (oops! let's clarify things before we go slumming towards Hitlerville).
Like a person who has no higher brain function (without any hope of recovery), a zombie does not experience life. If we "kill" either, what "life" are we depriving either of? In this case it may be argued that to kill either would be a greater good. We would speak of ending their "suffering", but the suffering we're referring to is primarily metaphorical, or our own. But what if a zombie could suffer? What do we do then? Do our obligations to them change? Perhaps they might. Dan O'Bannon's zombies in Return of the Living Dead are not the shambling moving meat of Romero's films, but zombies who exhibit Joseph Fletcher's "indicators of humanhood" (which include self-awareness, self-control, a sense of the future, a sense of the past, the capacity to relate to others [if only to eat them], concern for others, communication, and curiosity). O'Bannon's zombies speak, plot, and retain their personalities; one retains enough of himself to remind his girlfriend that if she loved him, she'd let him eat her brain. O'Bannon's zombies, unlike Romero's, experience pain. In one sequence, a female zombie reveals that there is pain in death. She explains that eating the brains of the living is the only way to end the suffering of death. The scene takes place in a funeral home between Ernie, the mortician, and the female zombie, who is strapped to an embalming table:

Ernie: You eat people.
Zombie: Not people. Brains.
Ernie: Brains only?
Zombie: Yes.
Ernie: Why?
Zombie: The pain.
Ernie: What about the pain?
Zombie: The pain of being dead.
Ernie: It hurts to be dead.
Zombie: I can feel myself rot.
Ernie: Eating brains, how does that make you feel?
Zombie: It makes the pain go away.

For starters, that scene is just plain creepy. Second, it really makes me nervous about dying, because what if that zombie is right and it is painful being dead? But more importantly, does the fact that they suffer now demand that we include their needs among our own?
If feeling one's own flesh rotting is painful (as one may well imagine), then we may be obligated to end that suffering. But wait: the only way to do that is to feed brains to zombies (this is the only way to end their suffering). That spells trouble for us. If we were good utilitarians, and we have as a prerequisite for moral inclusion the capacity to suffer, how do we deal with the needs of a brain-eating zombie? Is this a case where our utilitarian ethics runs amok? If logic dictates that the needs of the many outweigh the needs of the few, and we may assume that (at least in some locales) the dead outnumber the living, then we might find ourselves, for the sake of consistency, handing our brains over to the undead. But this already doesn't sound right. I think back to my ethics class. We had an assignment to find a True Moral Theory. We had as a guide several "desired" features, which included: the theory must not have implausible implications, it must place realistic motivational demands on the agent, and it can't be self-defeating. When we consider the matter from a utilitarian standpoint, we find prima facie that we may have to indulge the needs of the zombie. But if we apply our desired features, we find that giving our brains to flesheaters is not only implausible, there is also absolutely no reason for giving up our brains that motivates us to do so. Lastly, by giving the brains of the living to the dead, eventually the dead (because the dead are an ever-increasing number) will outstrip their food supply. Therefore, doing so is eventually self-defeating. Next, feeding our brains to zombies butts up against something called the sadistic pleasures objection. It goes something like this: a group cannot achieve its excellence at the expense of another group (especially if that group is smaller). So, let's say that the main purpose of a zombie is to eat flesh. This, according to Aristotle, is its (a zombie's) excellence (characteristic function).
If we give living brains to the dead so they can flourish, and since the net pain of the dead outweighs the net pain of the living (remember, the dead outnumber the living), we would be achieving one group's excellence at the expense of the smaller group. A utilitarian does not ignore the needs of the smaller group; they figure into the greater good as well. This is especially relevant given that a zombie does not need to eat brains to survive. Eating brains merely relieves a bothersome condition. A zombie can "exist" with pain. A living human, however, cannot live without his brain. Of course, it's easy to see that living people shouldn't give up their brains so that zombies can feel better. But in the case of organ transplants or biotechnology, the lines may not be so clear. If a good friend needs a kidney to survive and I am a match, am I obligated to give my kidney? At what point am I obligated to give up a part of myself to help or save others? Am I obligated at all? Food for thought.
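Just for fun, the "needs of the many" arithmetic can be put in toy-model form. Everything here is my own invention for illustration (the numbers, the "suffering units", and the `net_utility` helper are all made up, not anything from Singer or Bentham); it's a sketch of what naive utilitarian summing does when the dead vastly outnumber the living.

```python
# Toy utilitarian calculus for the zombie thought experiment.
# All quantities are hypothetical "suffering units" picked for illustration.

def net_utility(policy, living, dead, pain_of_death=10, cost_of_brain=100):
    """Naively sum welfare across both groups under a policy.

    policy: "feed" (hand brains over to the undead) or "withhold".
    living, dead: population sizes.
    """
    if policy == "feed":
        # Each fed zombie has its rot-pain relieved, but each feeding
        # costs one living person a brain; the supply runs out eventually.
        relieved = min(dead, living)
        zombie_welfare = relieved * pain_of_death
        living_welfare = -relieved * cost_of_brain
    else:
        # Zombies keep feeling themselves rot; the living are untouched.
        zombie_welfare = -dead * pain_of_death
        living_welfare = 0
    return zombie_welfare + living_welfare

# With the dead outnumbering the living a thousand to one, the naive sum
# favors feeding: the aggregate misery of a million rotting zombies swamps
# the (much larger per-person) loss to a mere thousand living brains.
print(net_utility("feed", living=1_000, dead=1_000_000))      # -90000
print(net_utility("withhold", living=1_000, dead=1_000_000))  # -10000000
```

Which is exactly the implausible implication the "desired features" test flags: the raw aggregation tells us to hand over our brains, and that result alone is a decent reason to think something has gone wrong with the bare calculus.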

Monsieur Descartes, My Clocks Usually Don’t Bleed: On the Moral Status of Animals

I was listening to a radio show some time ago. The topic was anything in general, but somehow drifted specifically to the subject of animal rights. The host talked to a caller who is dedicated to preventing the mistreatment of dogs in New York (in New York because that's where she lived). The host spoke for some time about PETA (People for the Ethical Treatment of Animals). Now, I like animals; I own one. He's cute and I like him. And I generally try to treat (most of) my fellow living beings, human and not-so-human, with some degree of respect, but I CANNOT STAND PETA! I'm not saying that there's anything wrong with any organization that seems to like animals more than they like people. I agree with that sentiment wholeheartedly. There are some animals that are totally more likable than some people. But whenever I hear one of their non-celebrity spokespeople speaking (speaking would make one a spokesperson, wouldn't it?), I get the creeps. These people are hiding something. Some really unpleasant agenda that entails breeding more jackrabbits and breeding fewer people. Which brings me to why I'm writing this in the first place. Despite the fact that I have this blog, I don't do much in the way of web surfing. Only recently was I made aware that the frontman for my favorite band did some bit for my least favorite animal-crazy organization (that organization being the aforementioned People for the Ethical Treatment of Animals). It seems that there is some sort of market for the wearing of animal furs out there. Since wearing the hides of chinchillas or mink or Siberian white tigers is not so PC — as this activity tends to cause animals to go extinct — the fur marketeers, instead of not selling fur, simply moved on to a more available source of fur — namely, cats and dogs. I was totally not aware of this.
So, I was trolling the web, and I came upon this very topic (using cats and dogs for fur) when I was looking for info about the latest nine inch nails cd, The Slip (an outstanding cd, I might add). It seems that Trent Reznor did this thing for PETA about the use of cats and dogs for fur. The first thing I thought wasn't, 'wow, this guy is really cool and super compassionate. he cares about our furry friends'. No, it wasn't at all like that. My immediate reaction was something like, 'et tu, Brute?'. I mean really, don't you just hate it when someone you dig does something so not diggable? I like his band, and I think he's hot (I am such a girl), but I SIMPLY CANNOT STAND PETA!!! And really, think about it: if there's any group of animals whose fur we should wear, it's cats and dogs. I can say right now that there are approximately four dogs and an infinite number of stray cats currently running around my neighborhood, and all would make fine coats. Especially that mottled cat that has taken to shitting all over my front lawn. Now, let me get one thing straight. I don't think that people, unless you live somewhere near the North or South Pole, should be in the habit of wearing fur. And I definitely think that animals shouldn't have to endure what could only be described as "torture" to get their fur off of their bodies so I can look good (as if that were possible). But, I confess, I eat meat, I think cows are pretty much useless, and I have at least one pair of shoes with genuine leather uppers, so I realize that I'm not entirely off the hook. And I'm not so dumb as to not realize that there's a slight hypocrisy in saying that wearing animal fur is bad, yet enjoying the taste of said animal's flesh with cheese and thousand island dressing.
Worse yet, I'm not immune from "cuddly animals syndrome" — the tendency of humans to not care one lick about animals who aren't cute and adorable (but then, we treat people the same way, which explains a lot about why I am here writing this post, and not out doing something with other people). The way that I feel — about fur, about my dog, about animals in general — has to do with whether I think they fit into our moral sphere. The question of fur, simply put, is a question of moral status. When we consider moral status, for humans and animals, we are asking, among many questions: who counts morally, and why? Who should be included, and what justifies the inclusion? So, looking at something close to home: when I think of my dog, I think that he is a being who is worthy of my concern and care. I try to make sure that he is well cared for and that he lives his life as pain-free as possible. Why? Why do I feel morally obligated to care for him? First off, he's not human. Second, he is unable to care for me (at least in a way beyond barking whenever he feels that there is a "danger" — which includes barking at the wind, at birds chirping too loudly outside, at the neighbors getting in and out of their cars…). But I still care for him. I more than care; I feel that if I did not, I would be in the wrong. Somehow, my dog fits in with all of those whom I feel morally obligated to care for. He's in my moral sphere despite the fact that he is not human. The fact that he is a mere dog is not a difference significant enough to count against him. I see what I'm busy doing here: when I think of why my dog is morally includable, I automatically shoot through a list of criteria for inclusion. At the top of my list is the question 'is it human?'. For some, this has been and is a deciding factor for inclusion in, or exclusion from, the moral sphere. So, when I consider my dog, I have in my mind a set of criteria that he must meet for moral inclusion.
These criteria include his similarity to humans, whether he can speak or reason, etc. These, when considered, make up a list of morally relevant characteristics. These are the characteristics I will consider when I decide whether another being will or will not be included in my moral sphere. Wait, I'm beginning to jumble my words as well as my thoughts. I'm trying to think up too much at one time. Let me try to write this a little more scholarly.

When we consider any being, be it animal or human, we look for any morally relevant characteristics that we'll consider when we determine whether that being will be included in our moral sphere. These characteristics are morally significant, in that they make or break a being's inclusion. Historically, the fact that other animals were not human was more than enough for moral exclusion. The Bible (well, actually it was God) gives man dominion over all the animals. The book of Genesis states: "Let us make man in our image, after our likeness; and let them have dominion over the fish of the sea, and over the birds of the air, and over the cattle, and over all the earth, and over every creeping thing that creeps upon the earth" (Genesis 1:26). The ancient Greeks pretty much held the same view. In his Politics, Aristotle stated that man's rationality (as opposed to the biblical view that man's superiority over animals rested in the fact that he was created in God's image) placed him above all other creatures. Aristotle states, "all tame animals are better off when they are ruled by man". Aristotle reasoned that animals were passionate and governed by their urges. An animal that is subject to the whim of his passions cannot, by nature, rule himself and therefore must be ruled. Since man (and he did mean MAN) was governed by his intellect, he was naturally fit to rule. Aristotle wrote, "the other animals exist for the sake of man, the tame for use and food, the wild… for food and the provision of clothing and various instruments".
Man was supreme, and all other animals (and a fair number of other people) existed merely to serve the needs of Man. The birth of modern philosophy, starting with Descartes, didn't do much to change the classic attitude. Descartes claimed that animals are (get this) machines. Although animals communicate with people, Descartes wrote, the fact that they lack the capacity for "real speech" indicated that they lacked the intellect that qualified them for inclusion among sentient beings. According to Descartes, the fact that animals lacked the capacity for speech indicated that they lacked the capacity for "pure thought", and speech was the only indicator by which we could be certain that a creature possesses the capacity for rational thought. Like the ancients, Descartes also believed that animals lack the ability to control their "natural impulses". So, for Descartes, the mere machines we call animals were fit for whatever use we saw fit — including nailing them to walls and dissecting them. The sound of their wailing in pain, Descartes claimed, was no different from the twangs and pings made by a clock as one dismantles it. So ignore the shrieking and blood, René says. It just makes that noise when you unscrew its parts. I'm not kidding.

Darwin's theory of natural selection challenged the old ideas of man on top (I could insert a dirty joke here — I just did) and apart from nature, in favor of a view of man as a part of nature. According to Darwin, man's intellect made him better at some things (better than animals), but in other capacities, say the capacity for flight, non-human beings had the advantage. For Darwin, "better" did not automatically imply superior (at least in the sense that one animal is superior to all others in all capabilities). Better, according to Darwin, was a matter of successful adaptation to one's environment, rather than an innate superiority to other beings.
In The Descent of Man (1871), Darwin intends to show that "there is no fundamental difference between man and the higher animals in their mental faculties". As for the "lower" animals, Darwin states that they, "like man, manifestly feel pleasure and pain, happiness and misery". Darwin writes, "Terror acts in the same manner on them as on us, causing the muscles to tremble, the heart to palpitate, the sphincters to be relaxed, and the hair to stand on end". Darwin, breaking with the traditional view, further states, "the difference in mind between man and the higher animals, great as it is, certainly is one of degree and not one of kind… the various emotions and faculties… of which man boasts, may be found in an incipient, or even sometimes in a well-developed, condition in the lower animals". I suppose this is where someone would say Descartes can shove it. According to the Darwinian view, the fact that man possesses speech or even reason gives us no ground to assume, based on those qualities, that man is naturally superior to all other animals. Our ability to speak may be, in the evolutionary sense, a mere result of Homo sapiens' adaptation to its environment. The fact that other animals lack speech isn't a sign of inherent inferiority so much as it reflects the possibility that other species did not need speech to successfully adapt to their environments. Our differences are manifestations of adaptations, from which we cannot infer biological or moral superiority. This, the Darwinian view, is how we look at many animals — that is, for most of us, mere biological differences bear little or no moral significance when we decide who counts for us morally. Wait, that's not exactly true. We, as humans, tend to be impressed by the visual. We tend to cast our moral nets over those who seem most like us.
We would be less likely to exclude a gorilla who "speaks" using American Sign Language from our moral sphere than we would a honey badger or a totally un-cute animal like a shrew. We still count what can tell us "ow!". If I step on my dog's foot, he yelps in pain. If I slam a six-foot python around, it doesn't make a sound. And besides, my dog is cuter than a python. My dog can learn to do tricks and has a bit of a personality. From his body language, I can tell when he is hungry (which is all the time), when he needs to go outside and do his business, when he's upset or scared… but the python just lies there. I can't tell anything about him, not even whether he's living or not. So, I find myself applying a bit of Cartesian morality when I think of snakes. The snake, from my point of view, cannot articulate anything to me. It lacks any capacity to communicate with me at all. This is a problem. How do we include something that doesn't seem to interact with us at all? I think that someone would have to be an absolute weirdo to deny that many humans see themselves as something different or apart from other animals. Even though many of us care about animals, we still hold on to the notion that humans (generally) hold a place apart from other beings, and the fact that other animals are not human plays (whether we like it or not) a part in our moral attitudes and practices. Humans are special, and that's that. Enter moral consideration. Peter Singer suggests that we consider the needs of other animals whenever we behave in a manner that will affect other, non-human species. The only catch is that we consider their needs equally. Singer's approach isn't a claim for equal rights, but a claim for equal moral consideration. In Animal Liberation, Singer writes, "Equal consideration for different beings may lead to different treatment and different rights".
Even if we consider how other animals will be affected by our actions (say, for instance, we're planning to cut down a section of Brazilian rainforest to raise beef for McDonald's), we might decide that our needs outweigh those of the animals. If we planned to slaughter millions of potentially life-saving tropical plants and animals in order to feed overweight Americans juicy Big Macs, we may, after considering the needs of the rainforest's flora and fauna, decide that we should get to slashing and burning right away. The point is that we considered the needs of the animals before we killed them and destroyed their home. Big Macs aren't really all that juicy, come to think of it. Singer says the fact that we (humans) are physically different from other animals gives us no moral justification for simply dismissing the needs and interests of other species. Singer parallels our moral attitudes towards animals with the way that we once thought about race and gender. At one time, one's gender or race dictated one's opportunities and treatment. If, for instance, a person was born a woman, she was considered the property of her family and then, if she married, the property of her husband. Her needs and interests were not up for discussion. But, as we've become more enlightened, we've seen that the lack of a penis does not make one's status less than that of an individual who just so happens to have been born with one. And more importantly, it shouldn't count for unequal moral consideration. Women, like men, feel anger, happiness, fear, pain, and suffering equally. And as Darwin (and Bentham) noted, animals also possess the capacity to feel these emotions. Bentham says that the capacity to suffer is the vital characteristic for moral consideration. The ability to feel happiness or pain transcends the capacity for language or the ability to perform complex operations like mathematics (thank god for that!). I step on my dog's foot, and he cries out in pain.
Likewise, if I drop Clorox into the eyes of a rabbit, it will probably make some sort of "that hurts" noise. If someone belts me a good one, I'm probably going to tell them that it hurt a little. If someone decides to wrap a guy's head in towels and pour water on his face to simulate drowning, he'll probably describe the experience as painful or unpleasant. In all of these situations, each animal, human and non-human, experienced pain and indicated it. If an animal can suffer, Bentham says, we cannot ignore the fact that it does. Singer also uses the capacity to suffer as a morally relevant characteristic. Singer states, "if a being suffers there can be no moral justification for refusing to take that into consideration". But wait, someone says. If I have to consider whether some animal is going to suffer every time I do anything, I won't do anything, because I'll be too worried about inflicting pain on some gnat or something. This is a concern especially for those who care about the suffering of animals but still want to eat the occasional chicken sandwich. The fear is that we'll be so worried about doing anything harmful that everyone will become a vegan and no one will ever find the cure for cancer. But Singer himself says that causing suffering isn't the same as killing. We should remember that equal consideration doesn't mean equal treatment. For instance, we understand that children lack the same capacity for reason as adults. Let's say that a six-year-old child has killed his best friend while playing with his mother's Ginsu knife set. We know that a child does not operate on the same intellectual level as an adult. We may think that he did not fully understand the possibility that plunging a butcher's knife into the heart of his BFF would result in his friend's immediate and irreversible death. The fact that the child lacks the capacity for reason does not mean that we throw the kid out to the wolves.
In fact, we feel an even greater moral obligation to him because he does not know what he did. We feel that the child deserves the same fair trial and protection under the law as we would extend to any adult. When we decided to charge the little tyke as an adult, we considered the fact that he's just six years old. But after we considered that fact, we decided that the demonseed needed to fry anyway. Which is where, I think, Singer was going when he said that inflicting suffering on a being isn't the same as killing it. If there were no other way to cure a certain disease than performing animal research, then, in the interest of saving human lives, we might go ahead and do the research. We know that there are speedy and virtually painless ways to slaughter animals (that is, unless you consider death itself a harm, but that's a different issue for another philosopher). There are some situations, Singer admits, in which we cannot treat animals equally. I appreciate the fact that this is the way that I'm supposed to feel about animals. And if I were a real philosopher who was concerned about appearing to be enlightened, I would probably feel more inclined to travel along the same lines as Singer. But I'm not. I admit that, no matter how hard I try, I cannot divorce myself from the feeling that being human carries more moral weight than some of us believe it does. I reject the idea that humanness is not a morally relevant characteristic, and that different (in the Darwinian sense) does not mean better. The fact that I cannot say what makes humans different does not mean that there is nothing that does. And, because I totally reject the notion of my being a philosopher, I will leave my opinion at that. I got some music I gotta go listen to.

Rage With the Machines

First off, even though I'm a fan of the band (Rage Against the Machine), I still don't quite get what the name means. I know that there's "THE MAN" out there, and that I'm supposed to fight against him and his endeavor to bring on the New World Order, but… wait, is that it? Anyway… I, being a fan of truly hideous entertainment, purchased season 1 of Knight Rider some weeks ago. It sat on my shelf looking at me looking at it, until I finally had some time to take it down and watch. Now, I used to really dig that show when I was a kid, but somehow all of my childhood memories paint things as being so much better than they really were. This show sucks ass! It's not the fact that the whole show is built around some dude who drives a talking car. It's not the fact that watching David Hasselhoff's acting is something like getting kneed in the groin or having to eat your dog's feces. It's the wardrobe! Really, so far as the '80s went, clothes-wise, well… you know how when you're talking to someone and they have this totally huge zit on their forehead, and no matter how hard you try to pay attention to what they're saying, you know it just won't happen? Well, the clothes are like that. Big pimples. Especially "The Hoff's" eighties poodle-doo. I was so distracted, unfortunately, that I almost missed the point — that is, a major philosophical point — in the show. Which brings me to the whole idea of machines. Former cop Michael Knight, oops, Michael Long, is transformed into Michael Knight after being shot and left for dead by some shady underworld types. Because Long took a shot to the kisser, he is given a new face (it seems that they didn't make sure that the original owner of that face was dead, but then, if they had, it wouldn't have allowed Hasselhoff to play his evil doppelganger, Garth Knight), and is persuaded by the Knight Foundation to serve their purposes, which is, good for Long, fighting injustice.
And more importantly, he's given the keys to the sweetest, kick-ass black Trans Am this side of My Mother the Car: the Knight Industries 2000, better known as KITT. The bitchin' thing about KITT is the fact that he speaks! Yes, Michael Knight's new "partner" is a talking car. But KITT is no ordinary car, mind you. He's super fast, he's a supercomputer, and he can jump without even so much as making the slightest skid mark upon landing. He's a super car. KITT (with the voice of William "Mr. Feeny" Daniels) spares no opportunity to remind Michael that, despite his human-like qualities (his ability to converse rationally, his accidental sense of humor, the fact that he's a smart-ass), he's not human. KITT is a car and he knows it. We know that he is a car. Yet Michael, and we, the audience, care for KITT. We worry when he is in danger, and we enjoy his company. To us, and to Michael as well, KITT is more than a mere machine. He's a trusted friend, a part of the family. But this attitude seems a little odd. Sure, we have no problem including non-humans in our moral sphere — we do it with our pets and our co-workers and classmates — but we often draw a line when it comes to things that are really not human, like machines. Still, we feel a sense of moral attachment to things like KITT. KITT is no ordinary machine. He interacts with us, he seems to care about us, and in turn, we care about him. On more than one occasion, Michael endangers his own life to save his car. Let me repeat that: he endangers his own life to save his car. To some viewers, Michael's act seems a little, well, strange. After all, KITT is a car. If he is destroyed, he can be rebuilt. Michael Knight, however, is not rebuildable. If he dies, he stays dead forever.
Which begs the question (in the colloquial, not the philosophic, sense): is treating a machine like KITT this way, that is, including a machine in our moral sphere (to the extent that we are willing to lay our lives on the line for it), extending the inclusion of non-humans into our moral sphere too far? Would we be including machines to our own peril? For most of the history of Western philosophy, humans sat squarely at the top of all creation. From the Bible to Aristotle’s hierarchy, mankind’s (and more specifically, males’) natural place was that of ruler. According to Aristotle, animals were mere tools to serve man’s needs. A donkey was there to pull carts, a dog to keep watch over property, and so on. Man was to administer over the lower animals, and animals were to serve man’s will. According to Aristotle, this was because man (and he did mean man) was endowed with a rational mind. Man’s rational mind enabled him to control his lower, animal impulses and desires (this is also why Aristotle thought that women were not naturally superior to men. Apparently, women have some sort of wandering-uterus problem that makes them all crazy-like and unable in any way to control themselves or anything else. Anyone else get the feeling that Aristotle didn’t date much?). Anyway, where was I? Was I anywhere? Oh, yes: modern philosophy (and I mean modern as in 20th century) challenged the old notions of human superiority and brought in the animals. Starting with Darwin and his theory of natural selection, modern thinkers rethought the idea that the natural order placed mankind at its top. Philosophers such as Peter Singer suggested that we expand our moral sphere to include animals, due to the fact that animals, like humans, possess the capacity for suffering. So, my dog may be a mere machine in the eyes of Descartes, but he is a machine that has the capacity to suffer. So, according to Singer and all the other animal rights loonies, he’s in. But, that’s it.
Descartes saw animals such as dogs and cats as mere machines. They serve us just as any artifact that a man creates with his own hands serves his needs. According to Aristotle, artifacts such as art and technology fall outside of nature and are mere tools meant to serve man’s needs. We can see why modern philosophers include animals — they fear, they feel pain, they bleed and suffer as any human does. But KITT is a machine. He knows that he is a machine. Yet we see that KITT has something that other machines do not — an ability to interact with humans on a human level. I was watching an episode of Star Trek: TNG not too long ago. The episode had to do with whether Data was a person. In the episode, Captain Picard argued that the claim that Data is not a person because he is a mere machine falls apart in light of the fact that humans, if you look at us on the biological level, are also mere machines. We are machines, just of a different type, Picard says. Instead of wires, we have veins and arteries. Instead of oil and lubricant, we have blood. But if you lay out the systems side by side, they are more alike than different. What made Data human, Picard argued, was not his physicality, but his psychology. Data was self-aware, intelligent, and (perhaps) possessed consciousness. According to Picard, Data was a sentient being who, despite being a machine, had what we would, in any other circumstance, call humanity. I think that, if we look at KITT, we might say the same. KITT, like Data, is self-aware. He refers to himself as “I”, and is aware of his place in the environment. He is intelligent (more so than any human). As the show progressed, KITT’s character became even more human-like, expressing complex emotions such as anger, sorrow, and compassion. KITT, in some instances, is more human than some humans. It is worth noting here that this is also a point made by Singer and others.
The idea that animals should not be included in our moral sphere because they lack human qualities such as complex emotions runs into a problem: it includes some animals, such as chimpanzees (and some dogs and cats), and excludes some people, such as the severely mentally handicapped. This gets us to a very nagging and really hard question: what makes humans unique? The essence of the question is: what makes us human? Back in the day, computers were mere computing machines. They did exactly as their programmers wanted them to do. Unfortunately for us, many modern computers already perform functions that human brains can do. Computers can solve complex problems and learn, as when, in May 1997, the computer Deep Blue formulated a winning chess strategy to beat champion chess player Garry Kasparov. Anyone who has played a video game knows that the computer can be a formidable enemy — especially when you are aware of the fact that, as you play, the computer is learning from your playing style (and figuring out how to defeat you). The fear that underlies this fact is what Captain Picard asked Dr. Maddox to consider in that episode of Star Trek: the possibility that a computer may become conscious of its own existence. Never mind mere Datas or even a C-3PO; our fear is WarGames‘ “Joshua” or The Terminator’s Skynet — the computer that becomes self-aware, figures out what the problem is (read: us), and immediately sets out a plan for total and permanent human destruction. OK, let’s not go quite that far, but let’s consider the computer that becomes conscious. If it does, then what is it? I think that we can no longer call it a “mere” machine. We often think of ourselves as evolving, namely because we gain knowledge and wisdom as we grow. This idea is often connected to the idea that our conscious minds possess more than our mere physicality. Our minds, being self-aware, possess something transcendent — a soul. Is it possible for a computer to gain a soul?
Still, let’s not go there. We can’t even figure out if humans, let alone any car or android, have souls. Think of this as food for thought. Or, at worst, a warning of a grim future. But let’s get back to KITT. KITT, although he insists that he does not, has emotions. And for many, the ability to possess and exhibit emotions is enough for inclusion in our moral sphere (this is why we often give human qualities to our pets). The ability to express even rudimentary emotions is enough. But still, for many people, even those who support animal rights, animals are just that — animals. Not human. Although we don’t want animals to suffer, they aren’t us. That is, they aren’t humans. Many humans feel that there is something unique about humans that (still) places us apart (if even only slightly) from other animals. Perhaps what lurks behind our need to separate ourselves from animals (and, I assume, talking cars) is some sort of Nietzschean fear of losing ourselves morally. It goes, I think, like this: Nietzsche spoke of the degradation of Man by the overtaking of master morality by slave morality. This takeover upsets the natural order and plunges humanity into the abyss (culturally, morally, etc.). Naturally, man lords over animals and machines (master morality). Our machines are our slaves. The fear is that, if we elevate machines to our level, we lose our place — our place becomes meaningless. If all is equal, then all loses its value! Men are no more than machines. Lowly, crappy machines to be discarded when they lose their usefulness. Worse to come is the venerated machine that gets to dictate the fates of men. It’s like being bossed around by your dog or wife. Totally unnatural! Which reminds me of a scene in Star Wars Episode II: Attack of the Clones. There’s the scene where C-3PO is in the droid factory. He sees the machines making other machines and is disgusted by what he sees. He calls the whole scene “perverse”.
3PO’s attitude reflects the classic philosophical view of the roles of man and machines. Machines are mere tools to be used by men to create, not devices of creation themselves. A machine that creates, according to this view, would be considered unnatural. If we reach a day when machines have evolved to the point that they are indistinguishable from people, we would have something to worry about — the usurpation of man’s role. I’m not exactly talking about armies of T-1000s walking around, but there is a fear that, if a machine can do the work faster and more efficiently than any human, then it may be the human who becomes obsolete. As of now, a machine needs a human for its creation. Cars may be built by robots in a factory, but if you trace back the line of creation, you’ll run into a human in there somewhere. Many people say that our ability to create is what sets us apart from other animals. A monkey can draw on a canvas, and a robot can put a Toyota together at the factory, but real creativity is born in the mind of a human. That’s true right now, but if a machine gains consciousness, we may very well see feeling, creative machines, who create art not because they downloaded a pattern or calculated what would be aesthetically pleasing, but because they were truly inspired, as any human artist would be. If machines can do everything people do, the real question is: what purpose will we serve? This makes me think of the movie Maximum Overdrive. If you haven’t seen this movie, consider yourself lucky. But, long story short, the story involves a machine takeover. The machines literally rise up and turn on their masters. There’s a scene in the movie where a woman is screaming at the big rigs parked outside of a diner, “We made you!” She, I think, expresses the sentiment that many humans would feel towards even the smartest, most human machine. A machine is a product of man. Anything beyond the world of man-as-creator disturbs the natural order of things.
It may “feel” or “create”, it may speak in the very erudite voice of William Daniels and look like Cherry 2000, but all in all, it’s a machine. It is a tool and should remain so. And this may be fine so long as the most advanced computer still needs a human creator or operator, or so long as Data and KITT remain anomalies. Throughout the run of Knight Rider, the only other talking car was KARR, the prototype that Knight Industries built before KITT. And throughout the run of Star Trek: TNG, there was only Data, his “brother” Lore, his “daughter” Lal (who malfunctioned and had to be dismantled), and Dr. Soong’s ladyfriend, who had no idea that she was an android. I forgot her name. Anyway, as Captain Picard said to Maddox, our real problem comes when the smart machine stops being the anomaly. Picard asks Maddox: what if there are a thousand Datas? A million? A million KITTs could as easily be a curse as a blessing. If machines can create, humans may not only end up fighting obsolescence, but also searching for justification. So, in the future, the big question may be answering why we are here. And this may not be too far off. But perhaps what we should fear isn’t replacement by machines, or even the search for a justification for the human race. The future may be one not of moral inclusion but of merging with machines. Futurist Ray Kurzweil says that there may come a day when computers will surpass us in intelligence and “irrevocably alter what it means to be human” (Rolling Stone issue 1072, “When Man and Machine Merge” by David Kushner). Kurzweil says that superintelligent nanotechnology will eventually merge with people, getting rid of diseases, making us smarter, and storing our memories as well. Kurzweil says it’s entirely possible that humanity will become obsolete, as people interact with and become more and more dependent on machines.
If you don’t think we’ve already started, some futurists say, just look at your cell phone, your iPod, pacemakers, and your dog’s implant meant to identify him if he gets lost. According to some, the merging of man and machine — what they call “The Singularity” — has already begun. However, Kurzweil says that we need not worry about becoming obsolete (although he says it’s not beyond the realm of possibility). Kurzweil says that the future will be “a human-machine civilization… we’re not obsolescing ourselves — we’re extending ourselves”. There are detractors, however, who believe that “The Singularity” is more science fiction than scientific probability. Biologist Thomas Ray says that it is unlikely that computers will advance enough to reach the point of “The Singularity”. Philosopher John Searle says (and I love this), “I think the Singularity is demonstrably bullshit… but that doesn’t alter the fact that it’s very thrilling”. So what do we think? What does thinking about KITT or Data or my cell phone’s ringtones mean for us morally? What it means is that we humans will face a future that will call on us to alter the way we think about life, and about what it means for something to count as a fellow living being, entitled to all the rights that we feel we and other beings are entitled to. As the human race evolves, so too does our moral sphere. To many of us, KITT is, at the very least, a candidate for inclusion. For a few more, he is not only a candidate, but fully included, as we would include any being that we care about or toward whom we feel morally obligated. The Singularity may be a festering pile of bullshit, but then, when has that ever stopped a thought experiment? Possible worlds, anyone?

Peter Singer wants me to eat people (One small man’s moral obligation to end the suffering of others may indeed be the tastiest solution yet)

I have nothing but bad experiences in Claremont. Case in point: I was doing this whole philosophy thing. I had somehow been swindled into it, and had been regretting it ever since. But, as I am as stupid as I am stubborn, I hadn’t chucked it in and gone on my merry way. Which, as it (now) seems, would have been the better way to go. In addition to being stupid and stubborn, I have this tendency, from time to time, to slide into a severe case of assholism. I tend to do and say things that, in fact, belie my true intentions — which was how I had gotten myself into the business of philosophy in the first place. I, for some reason, had decided to let other people think that I actually cared, or care, about what’s going on in the world outside of my neighborhood. Don’t get me wrong, I’m actually a very compassionate person — but in a very abstract kind of way. I care in the same way that a greeting card wishes that you recover from your cancer. Sure, the sentiment is there, but it lacks substance. Anyway, I was knee-deep in an assholic fit, meaning that I was really intent on convincing people that I was one of those types who care so compassionately about those who are less fortunate than I. And, it so happened that Peter Singer was to give a speech about moral obligations (or something like that) to the underdeveloped world. And it was in Claremont. Great. So… I went, along with a couple of friends, to see Singer speak. Truth be told, I was looking more for a reason to be out of the house than to see some philosopher drone on about why I should feel guilty about living in the best damn country in the whole world (if you don’t get the sarcasm here, I feel sorry for you). So we went, and settled down to hear Mr. Singer explain how we can (and really, we are morally obligated to do so) help others who are living in desperate poverty around the globe. Now, I’m no stranger to the occasional conspiracy theory, and Mr.
Singer plays a role in them, particularly those dealing with concepts like population control and eugenics. And frankly, it’s not difficult, when you read Singer’s work, to see where someone might interpret his views that way. I’ll say that, after watching a few clips of Singer speaking on YouTube, it’s pretty easy to make a connection between Singer’s philosophy and the New World Order’s ultimate plan for humanity. During his talk, Singer said that our problems (hunger and severe poverty in the underdeveloped world) are not matters of politics or ethics, but technical matters. It’s a matter of figuring out the right solution. For Singer, it’s removing the incentive for greed. Greed, according to Singer, leads to hoarding, and hoarding leads to dependence on others to meet one’s needs. So, for example, if the guy who lives on the coast puts up a big fence around his property, which includes the shoreline, and because of that he is the only one who catches fish, and he chooses not to share them, and the only food for miles is fish, then I will inevitably depend on him for my food. Singer’s solution is to get rid of the monetary system and religion. That, according to Singer, will be the trick to feeding the world. All we have to do is do it. (It’s so simple, right?) But, as I was listening, I began to hear the words of Jonathan Swift inside my noggin. What Singer was offering us was a proposal, correct? And if just removing religion and money will start the ethics ball rolling, why end there? I mean, if the solution isn’t moral or political but technical, wouldn’t getting rid of the have-nots be an even better solution? Technically speaking, of course. During the summer, my house is flooded with ants. Those nasty black ones that stink when you smash ’em. So, after having box after delicious box of Froot Loops succumb to the black menace, we decided to nip the problem at the source.
I can’t remember exactly what it was that we used (it was hazardous to humans and domestic animals, that much I remember), but the problem soon abated. Now I can eat my Froot Loops. We solved the problem by getting rid of what was taking something away from us — and more specifically, from me (something, I might add, that I had earned and bought with my own money). But the ants served no purpose to me other than being a pest. What if they weren’t so useless? What if, even in their demise, the ants could serve a purpose? And that is exactly what third-world people can do! The way to get rid of greed (because getting rid of God is so much more difficult) is to lop it off at the source. But greed is only manifested at the expense of someone else. That is, it’s difficult to be really greedy if everyone is getting the exact same amount of stuff. If I got five lap dances and the guy next to me got five lap dances, then the other guy would be just as happy as I (assuming that the quality of the talent was equal). And if we both got six, well, you could say that we were equally greedy, but neither of us would feel deprived of anything except maybe a few bills. It’s only after I’m invited to the champagne room that the situation becomes unequal and charges of unfairness are leveled against me. Why? Because I got more than I deserved to get. So, what about food? What about water, or money, or whatever natural or unnatural resource we have or can get? Well, technically speaking, there wouldn’t be a problem over lap dances if I were the only gentleman in the club. If the other people who would look at what I’ve got and get jealous were not there, there would be no greed. And if there is no greed, there is no problem. So… when it comes to the underdeveloped world, I say that we stop reading Swift as satire and seriously take the Swift-Singer proposal as real, workable policy. We should get rid of the underdeveloped world. But you see, people are not ants.
They actually can serve a purpose after you’ve ridded yourself of them. I was watching the miniseries “V” awhile back, and Marc Singer (hey! coincidence?) was in the mothership with “Martin” amid the bodies of thousands of people wrapped up and ready to ship to the Visitors’ homeworld. The reason they were doing so, Martin explains, is that their planet was in need of several precious resources. In addition to needing warriors for their great leader’s wars, they also needed water and food — which was where the people came in. They didn’t consider the political or moral implications of their problem; they simply went for the easiest technical solution, which was to head for the planet with six billion food packets ready to be crated up and shipped home. The real bonus is, if we replace our food with people, we can save all that corn that we won’t be eating for biofuels! I had heard that Singer is a utilitarian, and that a problem with utilitarianism is the fact that sometimes the theory bears out wacky outcomes, like proposing that we eat those who are less fortunate. I say that’s a problem only if we are looking at the problem morally. It’s the ethics that bind us to outcomes that we may not want. But Singer himself said that the solution isn’t ethical, but technical. And technics need not tie itself to morality. If we want to actually solve our problems, maybe we need to grow beyond our primitive need to check whether everything we do is alright with some father in the sky. After all, Singer himself said that a way to solve the world’s problems is to get rid of religion. So, no sky father, no moral cop looking over our shoulders. (I know there’s a quote that goes something along the lines of “without God, all things are permitted”, or something like that.)
So, following Singer’s guidelines, it seems that the way to really solve the problem, technically, is to 1) get rid of the problem, and 2) eat the problem, thus solving our own problems with factory farming and E. coli outbreaks caused by animal-waste runoff. I have a feeling that our animal friends at PETA will not object to this solution, as no animals will be abused in the eating of people. So there. I can say, with absolutely no reservations or sense of irony, that Peter Singer wants me to eat the underdeveloped world. I’ll take mine with ranch dressing, if you don’t mind.

On Being Sci-bi: Star Wars, Star Trek and Why Being an Unfeeling Android is Better Than Being a Neurotic ‘Droid

I WAS LISTENING to NPR today. There was this philosopher guy who wrote a book about dolphins and why we should be concerned about them. It reminded me of a class that I had awhile back about moral status.


The class read some Peter Singer.

My opinions about Singer aside, I appreciated his efforts to get us to realize that we humans aren’t the only life on this planet. But, as a human, I still have that problem with taking any life that doesn’t look human seriously.

I mean, I don’t go around smashing hamsters or nailing dogs to walls, but I am willing to admit that I suffer from the all-too-common humans first syndrome.


As well as having a fondness for philosophy, I also have a fondness for science fiction.

Some people are squarely in a particular camp, meaning that they prefer to watch Star Trek or that they prefer the lightsaber-wielding Jedi of George Lucas’ Star Wars saga, or even, god forbid, they like Farscape.*


YES,  FARSCAPE COSPLAY IS A THING, PEOPLE

I haven’t been one to squeeze myself so narrowly into one category of fan over another, even though I have realized that there is some amount of tension between the fans of any given science fiction program. I hadn’t really had a name for what a person like me is (except for maybe uberdork), until I listened one morning to a radio show where another multiple-program watcher called himself “sci-bi”.


Being sci-bi meant that he was not going to be labeled. He was not going to be cornered into claiming allegiance to Roddenberry or to Lucas. He was free to appreciate both or either if he so chose.

Which brings me back to the point of all of this. I had this class on moral status. We were discussing the question, “what is a person?”.

The question, it seems, isn’t as easily answered as we may have thought.

If I was to give the standard response, I would say that a person is a human, short and simple.


But, it seems, the philosopher is supposed to say that that speciesist response is no longer viable in light of the data that “proves” that we humans aren’t the only intelligent life on the planet.

There’s all this stuff about what constitutes “morally relevant” characteristics that determine whether a being is morally considerable. We can suppose that these same criteria can also be used to determine whether a being is a “person”.

Since I am a fan of science fiction, and I also know that Gene Roddenberry was a nut about putting philosophy into his Star Trek plotlines, I immediately recognized that my favorite incarnation of Star Trek (TNG, or The Next Generation for those who aren’t in the know) had a plot dealing with that very issue: what is a person?


Giddy with pop culture-infused zeal, I emailed my professor, explaining an episode of a TV show that would perfectly depict the conflict over personhood — especially when we consider the personhood of non-humans. I volunteered the episode “The Measure of a Man”.


Now, I could get into all sorts of really geekified backstory here, but suffice to say that the episode did fit into the class discussion. But then, that got me thinking… As far as science fiction goes, Roddenberry’s characters are more “real”, meaning they are future people who reflect who we are now.

We are supposed to be philosophically challenged or enlightened by them and their actions. We see the people that we are to become when we look into Star Trek and the near-utopian future it presents to us.

I guess, in the long run, that’s the difference between a philosopher and a storyteller.

Being that I am sci-bi, I also appreciate George Lucas’ Star Wars.

Truth be told, I know more about these characters than I know about my own family. I can trace the genealogy of the Solo twins (Now non-canon. Thanks a lot, The Force Awakens!) more readily than I can trace my own heritage.

Seriously.


Whereas Roddenberry shows us what we might become, Lucas shows us what we want to be.

Lucas shows us heroes, and villains who find redemption through the love of their sons.


Lucas’ focus is on the mythology of mankind. The story where the people come first.

Which is what I’m writing about right now.

I’ll admit I have a sort-of sci-crush on the character “Data” on Star Trek.

Who doesn’t, right?


OK, SO DATA ISN’T EVERYBODY’S CUP OF EARL GREY TEA

Now, my thing is not for Brent Spiner, the actor who plays “Data”, but the character himself (itself?).

I think that the point of the android Data is to mirror our own humanity.


Because Data is emotionless and always trying to be more human, he reflects our constant struggle to improve ourselves.

In “The Measure of a Man”, Captain Picard advocates on Data’s side when Starfleet wants to dismantle him and use him for research, arguing that Data is not a mere machine but a person.

Picard argues, successfully, that Data possesses the same qualities that we humans say define us as human: namely, we are self-aware, we are sentient, and we are conscious of ourselves and our surroundings.

I guess the philosopher in me would agree wholeheartedly that, despite the fact that Data was not human, he was a person entitled to all the rights and privileges that all humans are entitled to.

So far, so good.

But then I was hit by a sudden apprehension — what about a poor droid like C-3PO? If he were the subject of a question of personhood and if he were in Roddenberry’s universe, he would certainly meet Data’s personhood test.

Anyone familiar with 3PO knows that he doesn’t want to die (he is horrified at being shot in The Empire Strikes Back, and later in the film tells Chewbacca that he doesn’t want to die), he is aware of himself (we see him constantly bemoaning his “lot in life”), and C-3PO, unlike Data, is emotional.


C-3PO can be best described as a “nervous wreck”.

And yet, in George Lucas’ universe, a mere resemblance to humans will do you absolutely no good if you’re a droid.


A droid can be used, abused, dismantled, sold without consent, or given away to Jabba the Hutt without any consideration of how it might feel about the deal. And if it knows too much, its memory will be wiped.

It doesn’t seem to matter to either George Lucas or his characters whether a droid expresses fear or dread at being condemned to the spice mines of Kessel — they ain’t human.


And that, to the “Star Wars” universe, seems to make all the difference. So what does this mean?

I guess I could say that George Lucas’ movies reflect the old-world mentality that told us that animals didn’t have souls and that they only made noises when you cut them open because that noise was like the pings and pangs of the springs of a clock when it is taken apart.

Lord knows I don’t want to think about the moral status of my Siri.

But then, I could say that we should be more like Captain Picard — ready to defend our non-human friends when their lives are on the line — if only for the fact that they are persons too.

Or, I guess the better way to see all of this is to stop thinking about it all so seriously, and just enjoy the flashing lights and the soaring music.

…and the last two minutes of Rogue One.


NERDGASM!!!!!!!!!!

My god, that was incredible.