What specific Precepts, coupled with compassion, should be coded into our programs?
Basic research is advancing on built-in ethics for machines. For example, Prof. Joseph Halpern of Cornell University is developing mathematical systems that allow AI agents, such as driverless vehicles, to behave in “moral” ways, applying legal and philosophical notions via algorithms that weigh possible actions against their risks of harm to humans. When faced with a dilemma, such as the choice between running over a person who has walked onto the roadway and hitting two people on a nearby sidewalk, what will the driverless car do? It is a choice that must be resolved in an instant, with no time for reflection.
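As a purely illustrative sketch of the kind of balancing described above, consider a toy "expected harm" calculation. Everything here is invented for illustration; it does not represent Prof. Halpern's actual models, and real systems would weigh far more factors than a probability and a headcount:

```python
# Toy expected-harm minimizer: a crude illustration of how a driverless
# car might rank emergency maneuvers. All names and figures are invented.

def expected_harm(prob_of_harm: float, people_at_risk: int) -> float:
    """Expected number of people harmed if this action is taken."""
    return prob_of_harm * people_at_risk

def choose_action(options: dict[str, tuple[float, int]]) -> str:
    """Pick the action with the lowest expected harm."""
    return min(options, key=lambda name: expected_harm(*options[name]))

options = {
    "brake_hard":         (0.9, 1),  # likely hits the pedestrian ahead
    "swerve_to_sidewalk": (0.8, 2),  # endangers two people on the sidewalk
    "swerve_to_ditch":    (0.3, 1),  # risks only the car's occupant
}

print(choose_action(options))  # prints "swerve_to_ditch"
```

Even this toy version exposes the philosophical problem: by multiplying probabilities and headcounts, the algorithm has already taken a utilitarian stance, and whether the occupant's risk should count the same as a pedestrian's is precisely the kind of question the programmers must decide in advance.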
Besides not killing, other precepts to be built in might include the injunction not to steal by taking what is not given (although here too, our robot caretakers will need to decide among conflicting options: like Jean Valjean of Les Misérables, who stole a loaf of bread to feed his starving family, might a machine-Valjean steal from the automated bake shop to feed its starving owner?). Ethicists and religious teachers today debate the breaking of one precept or tenet to preserve another, but perhaps most would agree that breaking a lesser rule, such as stealing, lying, or destroying property, is sometimes necessary to uphold a higher precept, such as the preservation of sentient life.
In general, we do not want our AI taking our stuff without asking, then telling us fibs about it. Thus, I want my machines to be honest with me, and the Precept to “avoid telling lies” would cover that. While governments and businesses will need to keep secrets, as always, I do not want to be lied to absent urgent need; AI should not participate in harmful lying. Today, deepfake programs are creating false pictures and videos in which living people appear to say and do things that they never have, sometimes quite embarrassing or scandalous things. World leaders could be made to appear, entirely by computer generation, to take actions or speak words sufficient to start wars. When the fakes are pushed out onto online media, false rumors and news stories spread like wildfire. AI can help us detect fakes, but only if the fakes designed by other AI are not so detailed and flawless that they become impossible to detect. At the same time, we need to preserve the ability of animators and artists to imaginatively create fake people saying fake things simply for art, satire, or entertainment's sake. For this reason, fakery software that is “too good” should be made illegal: prohibited by laws and treaties, seized if found, and smashed like barrels of bootleg whiskey. Yet bad actors will build it anyway. It is quite a dilemma!
Honesty is the best policy. On the other hand, I would want my AI to fib about my location if hiding me from Imperial Stormtroopers at the space station door.
AI should be instructed not to gossip, in line with the Buddhist concept of “Right Speech.” Perhaps certain topics, such as one's sexuality (short of child abuse and other violent, non-consensual behavior), personal hobbies, and religious views, can be banned as areas falling outside data collection and public discussion, unless the subject voluntarily chooses to go public. The rule would be something like: “If you don't want your sex life discussed on social media, then don't discuss your sex life on social media.” AI would not collect data, and others could not comment on your private habits, until you chose to raise the topic online or engaged in clearly criminal behavior, and that would require algorithms that know how to distinguish private from public, right from wrong. Even so, if our toilets or beds collect daily data on our bodily functions, sexual activities, and sleeping habits, reporting the same to the health-monitoring computer at our physician's office, the highest levels of confidentiality and data protection must be assured.
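The rule described above could be sketched, in its simplest possible form, as a topic filter that records nothing on protected subjects unless the individual has voluntarily gone public. This is a minimal sketch under invented names and categories, not a real privacy framework:

```python
# Toy "Right Speech" data filter: refuse to record observations on
# protected topics unless the person has voluntarily gone public.
# Topic names and categories are illustrative only.

PROTECTED_TOPICS = {"sexuality", "religion", "hobbies"}

def may_record(topic: str, made_public_by_user: set[str]) -> bool:
    """Record freely on ordinary topics; record protected ones only
    if the individual has voluntarily raised them in public."""
    return topic not in PROTECTED_TOPICS or topic in made_public_by_user

public = {"hobbies"}                   # user blogs about model trains
print(may_record("hobbies", public))   # True: the user went public
print(may_record("religion", public))  # False: stays private
print(may_record("commute", public))   # True: not a protected topic
```

Of course, the hard part the text identifies, distinguishing private from public and right from wrong in unstructured real-world data, is exactly what a static lookup like this cannot do.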
And what about the Buddhist Precept not to abuse sexuality? Would a spouse engaged in a sexual affair with a totally life-like (but non-sentient) surrogate sex doll be breaking marriage vows, let alone Buddhist precepts? What of sex with virtual anime characters or animals? With virtual or animated minors? Precepts aside, should that be criminal behavior if nobody is actually harmed? We will have to decide (but at the very least, I hope the person gets treatment! Please see Chapter VI of this book). On the other hand, would a fully sentient, self-aware, consenting cyborg partner break a precept (or the law) by agreeing to sex that harms its human partner? Might humanoid “snuff” bots act as homicide victims in someone's sick storylines? Will there be sado-sexbots that whip their owners: S&M from IBM? Alas, future AI is bound to be as dark and sordid in its corners as is the internet today.
On the other hand, future technology may make it easier for us fallible human beings to abide by the Precepts, and will develop many new varieties of pleasing drinks, drugs, and foods not yet dreamed of, but without all the harmful aspects. We will eat only healthy, AI-designed foods, dished out in AI-measured perfect portions: zero-calorie ice cream, spinach that tastes like chocolate, celery that looks like toffee… future weight loss may be a piece of simulated cake!
AI will help us keep to the straight and narrow: Human beings may continue in the future to ingest intoxicants and mind-altering substances of various kinds, such as wine, whiskey, and cannabis. However, AI will monitor and regulate consumption for those of us lacking self-control (our smart liquor and pot cabinets will just refuse to open). Our automated factories will turn out inebriants with milder or well-controlled effects, providing pleasures without extreme or dangerous levels of intoxication, and free of any potential for addiction. The “synthehol” of Star Trek, with “highs” that can be cancelled at will, never leaving a hangover, will become real someday soon. The automated wine racks and pot dispensaries in our homes will be programmed not to stock any other kinds, and our internal bodily monitors will watch for signs of addiction or drunken anger that violate either a judge's or doctor's orders. Our driverless cars will get us home safely. Coupled with the fact that future human brains may no longer be capable of extreme violence, we will eliminate most of the alcohol-related car deaths, fist fights and killings that we see today.
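The smart cabinet that "just refuses to open" is simple enough to sketch. Here is a toy version, with invented class and parameter names and an arbitrary daily limit, meant only to show how trivially such a gate can be programmed (the genuinely hard parts, such as detecting addiction or honoring a judge's orders, are left out):

```python
# Toy smart liquor cabinet: refuses to open once today's count reaches
# a configured limit. All names and thresholds here are invented.

from datetime import date

class SmartCabinet:
    def __init__(self, daily_limit_drinks: int = 2):
        self.daily_limit = daily_limit_drinks
        self.today = date.today()
        self.drinks_today = 0

    def request_drink(self) -> bool:
        """Open only if the daily limit has not yet been reached."""
        if date.today() != self.today:   # a new day resets the count
            self.today, self.drinks_today = date.today(), 0
        if self.drinks_today >= self.daily_limit:
            return False                 # cabinet stays locked
        self.drinks_today += 1
        return True

cabinet = SmartCabinet(daily_limit_drinks=2)
print([cabinet.request_drink() for _ in range(3)])  # [True, True, False]
```

The code is the easy part; deciding who sets `daily_limit_drinks` (the owner, a doctor, a court, or the manufacturer) is the ethical question the chapter is really about.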
Other precepts and general moral values will be important in our ideal AI programming. AI will be set to speak to us respectfully, except perhaps in emergencies or at other times of “intervention” when we need to be told off stridently and directly. Our smart smoke alarms and health-monitoring implants will let us know in no uncertain terms that we need to get out of a burning building or head to the hospital due to an impending heart attack (via automatic ambulance, already called and on its way).
As to meat eating, AI and robots are already being used to run our factory farms where chickens, pigs, and cattle are kept in often horrible conditions, shitting and farting in vast quantities to pollute our waterways and change the climate. Yet human beings need proteins, and many of us seem hesitant about getting these just from beans. While some Buddhists are strict vegetarians, others (including the historical Buddha himself, if reports are true) have been traditionally tolerant of some meat eating. Nonetheless, simply from the standpoint of reducing the burden on our environment, a turn away from meat eating seems a very good thing. In my stays at Chinese and Japanese Buddhist monasteries, I often have been pleasantly surprised by the use in many dishes of delicious meat substitutes, made from tofu or mushrooms, that for all intents and purposes have the taste and texture of chicken, pork, or beef. The problem of meat consumption will be solved when our food industry perfects meat substitutes so delicious, nutritious and economical that carnivores cannot tell the difference.
The switch will be made when a faux filet mignon (made in a laboratory producing only the cow meat, not the whole cow that goes with it) tastes as good or better than the real thing, is twice as nutritious and especially healthy, yet can be produced and purchased for a fraction of the cost, pleasing sellers and shoppers alike. Or, perhaps, future generations will evolve/design bodies which are repulsed at the mere thought of the taste of meat, much preferring soy beans and quinoa instead, homo carnivorous now homo herbivorous. The future cyborg and AI chefs in our kitchens, who will do the actual ordering and cooking in our homes and restaurant galleys (except on those days when we might wish to cook for fun), will be expert in preparation, colorful presentation, and polite service. While some may argue that eating even pretend meat brings bad karma to the blood lusting Buddhist, I would argue that it is a much lesser offence: harmless faux-karma arising from a harmless faux-burger.
If we handle things wisely (another big “if”), I am optimistic about the future and the roles that AI will play:
Automated farms, driverless solar powered trucks, AI grocery stores and unmanned warehouses all over the world will bring a cornucopia to every door, nutritious and arriving on time. We will be free to enjoy generous (but low calorie) portions of tasty ‘beef-less roasts’ knowing that, around the world, there is nary a hungry child anywhere. AI architects and automated builders will build homes, AI pharmacies will dispense drugs cheaply, and so much more. Yes, there will be some bumps, but there are ways to soften the blow.
* * *
But all this leaves a question:
Driverless cars and meatless burgers are fine and good, but is there some ultimate height toward which our technicians may aim? Of course, building amazing cities filled with vibrant culture is a goal, and space travel will be another grand goal, the greatest of human tasks, heading to other planets and the stars (a topic we will look at more closely in a coming chapter). However, I am speaking now of an ultimate spiritual height, a summit of inner space to which we may aspire:
Specifically, can we build a Buddha?
( ... more on that next time ... )