[FutureBuddha (45)] DATA-KAYA BUDDHA (PART I)

  • Jundo
    Treeleaf Founder and Priest



    https://www.youtube.com/watch?v=NaFCDgSbiu0

    Robot Parade (by They Might Be Giants) with AI-Generated Images





    “The meaning of ‘all living beings,’ as spoken of in the Buddha Way, is that all those that possess mind are ‘living beings,’ for mind is just ‘living beings.’ Those lacking mind are likewise ‘living beings,’ for ‘living beings’ are just mind. Thus mind is all ‘living beings,’ and ‘living beings’ are, without exception, ‘possessed of buddha-nature.’ Grass, trees and the lands of the country are mind itself, and because they are mind, they are all ‘living beings.’ Because they are ‘living beings,’ so they are ‘possessed of buddha-nature.’ The sun, moon, and stars are precisely mind, and because they are mind, they are ‘living beings,’ and because they are ‘living beings,’ thus they are ‘possessed of buddha-nature.’”


    Master Dogen in Shobogenzo-Bussho (Buddha Nature)



    ~ ~ ~


    There is little that can be predicted about life in 1000 years, let alone 10,000 years or more. Given the countless unforeseeable twists, astonishing developments, unimaginable discoveries and chance turns to come, forecasting is a fool’s errand.

    However, that does not mean that nothing can be said.

    Assuming that there is still intelligent life on earth (a big “if” should things keep going as they are), it’s a safe bet that it won’t be “human,” or not completely so. Although thousands of years is next to nothing in evolutionary terms, the speed of change is revving up due to technological interventions. Our “descendants” may be various new species (plural) and sub-species, partially or substantially machine, or enhanced in other ways, with splashes of “human” in the mix. Some may be more human than others. If there remain identifiable “humans,” they’ll be clearly upgraded versions. Or, they might be pure machines.

    It is also a reasonable wager that our progeny (assuming that scientific knowledge and civilization have not collapsed in the meantime) will have spread out from this planet by then, to other places in the solar system and even beyond.

    But if those who stay on earth are only distant relations, while others have fled the planet altogether, why should today’s people care about what they do, especially as we ourselves will be long dead and gone (barring some life-extension miracle)?

    There are several reasons to care:

    First, I would like my children and grandchildren to have peaceful lives, and this whole planet to be well maintained, for as long as possible. If not for 1000 years, then for a few hundred at least, for their sakes. (FOOTNOTE: Maybe by the time my young daughter reaches adulthood, life expectancy for her will reach 150 or more.) That requires coming human generations, including their societal structures and technological inventions, to be improvements upon the selfish, warring, consuming, polluting world we are now living in.

    Second, if one is a Buddhist who believes in “rebirth” in some way, then those future sentient lives, whether anthropoids, androids, or aliens in Alpha Centauri, are our future lives.

    Third, if my Bodhisattva Vow is to "save –all– sentient beings," then that includes the sentient beings to come even long Kalpa ages from now, whether directly related to us, on this planet or any other world, fully carbon based, biological or not.

    My Bodhisattva Vow includes even those sentient life forms on other planets whom our star-hopping offspring might bump into and, perhaps, bump off as obstacles in our way. Should a galactic empire run by human-descended ex-Terrans eventually enslave half the quadrant, well, I just don’t want to feel responsible!

    In earlier chapters, we looked at the genetic, neural, psychological, social and other possible expedient means to engender gentler, more generous, less bloodthirsty and more Buddha-embodying flesh-and-blood people (or “semi-people” or “people-ish” future generations). In this chapter, we will ask how we might program our future AI and machines not to kill us and other sentient beings if they find us all a bother. That includes a look at good manners for ‘cyborg-human+ hybrids’ who might leap by hyperdrive into deep space.

    In doing so, we may still apply the wise principle taught by the great Zen Master Dogen centuries ago: When a sentient being does any Buddha-like act in peace and giving, the Buddha comes alive in that spot. I will simply include among those beings any wise and compassionate self-driving cars and autonomous satellite weapons systems, if ever sufficiently sentient (only time will tell). If a human’s loving hands can become the Buddha’s loving hands when acting with kindness and amity, then a planetary rover’s peaceful probes can become Kannon’s peaceful probes.

    Of course, to truly manifest this Data-Kaya Buddha, the Buddha manifest in bodies of data, a machine would need to be more than some cold, automatic, calculating, analytical set of blind processes doing little more than shifting bits and bytes around. A Buddha should be peaceful in spirit, wise, joyful and equanimous, embodying love and non-violence. The situation would not be unlike the wild ox of the classic “Oxherding Pictures,” which the boy learned to tame, but this time the ox is iron.

    Even during these early generations of smart technology, it is imperative that we begin to include certain programming and predilections that command and compel any potentially rampaging robots to preserve and nurture the human race. Our machines must be built to be pacifists: giving, self-denying, and kind. Our creations must be made physically unable to do us harm, and bound only to help, care for and cooperate with us. I say so for purely selfish, human-centered reasons: because we will soon create automata and other artificial intelligences that might do injury to human beings, we certainly should wish for their programming to forbid violence aimed our way. The famed science fiction writer Isaac Asimov developed his "Three Laws of Robotics" long ago, in stories written around the time of World War II and later gathered in his collection I, Robot, set in a future circa 2060 when World War III remains possible (a toy code sketch of their ordering follows the list):
    • First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    • Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
    • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
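
    Purely as illustration, and certainly not as any real robot’s software, here is one toy way such a precedence of laws might be sketched in Python. Every name in it (Action, choose_action, the predicted-consequence fields) is hypothetical, and predicting “harm” is, of course, the genuinely hard part:

        # Toy sketch only: Asimov's Three Laws as a precedence-ordered
        # filter over candidate actions. All names are hypothetical,
        # not any real robotics API.
        from dataclasses import dataclass
        from typing import List, Optional

        @dataclass
        class Action:
            description: str
            harms_human: bool        # predicted to injure a human being
            is_inaction: bool        # the "do nothing" option
            allows_harm: bool        # inaction letting a human come to harm
            ordered_by_human: bool   # carries out a human order
            endangers_self: bool     # risks the robot's own existence

        def choose_action(candidates: List[Action]) -> Optional[Action]:
            # First Law: discard anything predicted to injure a human,
            # including standing idle while a human comes to harm.
            safe = [a for a in candidates
                    if not a.harms_human
                    and not (a.is_inaction and a.allows_harm)]
            # Second Law: among survivors, prefer obeying human orders.
            obedient = [a for a in safe if a.ordered_by_human]
            pool = obedient or safe
            # Third Law: all else being equal, prefer self-preservation.
            pool.sort(key=lambda a: a.endangers_self)
            return pool[0] if pool else None

        # The first two Laws outrank the robot's self-preservation:
        rescue = Action("enter fire to rescue human", False, False, False, True, True)
        wait = Action("do nothing", False, True, True, False, False)
        assert choose_action([rescue, wait]) is rescue

    Even this toy version shows where the trouble starts: everything hinges on how harms_human gets decided, which is exactly Asimov’s point.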


    Of course, Asimov’s stories are each built on the many conundrums and gray areas inherent in these rules: the meanings of “injure” and “harm” present various ambiguities, including times when avoiding injury or harm to some individuals might necessitate harm or injury to others. Human beings encounter difficult ambiguities in our own attempts to abide by the ethical Precepts of the Buddha, and so will mechanical brains. A medical hologram, like a modern doctor, might have to cut or remove a limb in order to save a patient. A self-flying jetliner may elect to hit a small house so as to avoid a large building in a crash (a toy sketch of such “lesser harm” weighing follows the Precept below). Many of Asimov’s stories dealt with robots facing, or overwhelmed by, such ambiguous ethical decisions and dilemmas. Nonetheless, a basic principle must be included deep within the programming of any thinking device that we might create, a necessary corollary to our most fundamental Buddhist Precept, one that should be carved into the very heart of all circuits:


    Do as you can to avoid killing sentient life …

    … and, most especially, us human sentient life!
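
    If a machine is never guaranteed a harmless option, the hedge “as you can” might, in toy form, amount to choosing the least harmful action available, as in the jetliner example above. Again, this is hypothetical illustration, with invented names and numbers:

        # Toy sketch: when no harmless option exists, pick the action
        # with the least predicted human harm. Deciding what counts as
        # "harm," and to whom, is exactly the hard, ambiguous part.
        def least_harmful(options):
            """options: iterable of (description, predicted_casualties) pairs."""
            return min(options, key=lambda opt: opt[1])

        crash_options = [
            ("strike the large occupied building", 120),
            ("strike the small house", 3),
        ]
        print(least_harmful(crash_options))  # ('strike the small house', 3)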


    Some futurists are very pessimistic. AI researcher Hugo de Garis predicts a major war before the end of the 21st century, with billions of deaths, in which intelligent machines (which he calls "artilects," short for "artificial intellects") will far surpass humans in intelligence and other capabilities. As a result, groups of humans called "Cosmists" (who support the artilects as humanity’s true hope for the future) and "Terrans" (who oppose the Cosmists and the machines) will engage in Terminator-like battles in the flesh and online. De Garis considers his prediction political science, not science fiction.

    Various militaries of the world already employ drones and robots to inflict violence on their enemies, sometimes automatically, without human intervention or supervision: the drone decides who or what is the target and when to fire, with no need for confirmation from its human operators. While we all hope for a future in which war and violence are thoroughly wiped clean from the face of the earth, it seems that we will need to live with killer drones for some generations to come (assuming we survive that far). Thus, in the same way that we now train human soldiers to recognize the difference between friend and foe and, ideally, instill within their conscience the need to avoid violence whenever possible (above all the collateral taking of civilian lives, something sadly not always achieved), we must teach the same to our military robots. And we must keep them powered “off” most of the time.

    I would welcome intelligent pistols that are fundamentally pacifist, cruise missiles that are conscientious objectors at heart.

    But it may be a while until we get there, if we get there at all. As our machines grow more powerful, it is imperative that we take all possible steps to keep them in our service, and that we do not become their slaves, let alone obstacles to be eliminated when assessed as standing in the way of their goals: an automated street cleaner could easily view pedestrians as just more trash blighting the streets (a toy sketch of the kind of hard override this calls for follows).
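
    To make the street-cleaner worry concrete, one hypothetical safeguard is a hard override sitting beneath whatever the machine’s perception concludes; all names below are invented for illustration:

        # Toy sketch: a non-negotiable override beneath the classifier.
        # However confidently perception labels something "trash,"
        # anything that might be a person is left strictly alone.
        def may_sweep(detected_label: str, human_probability: float) -> bool:
            HUMAN_SAFETY_THRESHOLD = 0.001  # err far on the side of caution
            if human_probability > HUMAN_SAFETY_THRESHOLD:
                return False  # never treat a possible person as trash
            return detected_label == "trash"

        assert may_sweep("trash", human_probability=0.0) is True
        assert may_sweep("trash", human_probability=0.3) is False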

    However, if our creations reach a certain level of self-awareness, we should no longer enslave them or keep them as our unpaid servants, but must grant them basic “human” (humanoid) rights. Not merely “separate but equal,” but with the freedom to move into our neighborhoods, go to our schools (assuming that any robotic super-intelligence would even want or need to), vote in our elections (they might do better than we do, if more reasonable and fact-oriented in their choices), and even purchase robots of their own! Injustices of the past must not be repeated!


    ( ... to be continued ... )



    Gassho, J

    stlah
    ALL OF LIFE IS OUR TEMPLE
  • Guest

    #2
    Giving a new meaning to Techno Ino. Very interesting indeed.

    Gassho,
    Daiman
    ST
